Measure-Driven Impulsive Systems: Stabilization, Optimal Control and Applications

by

Warren Joseph Code

B.Sc., The University of British Columbia, 2001
M.Sc., The University of Saskatchewan, 2003

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in The Faculty of Graduate Studies (Mathematics)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

December 2009

© Warren Joseph Code 2009

Abstract

This dissertation studies various standard facets of nonlinear control problems in the impulsive setting, using a framework of measure-driven systems. Containing a Borel measure in their dynamics, these systems model significant time scale discrepancies; the measure may weight actions at instants, producing trajectories that mix discrete and continuous dynamics on the "fast" and "slow" time scales, respectively. A central feature of our work is the careful use of a time reparametrization to transform these systems into standard, non-impulsive ones, so that the wealth of recent results in nonlinear control may be applied.

Closed-loop stabilization of impulsive control systems containing a measure in the dynamics is addressed. It is proved that, as for regular affine systems, an almost everywhere continuous stabilizing impulsive feedback control law exists for such impulsive systems. An example illustrating the loop closing features is also presented.

Necessary conditions for optimal control have recently been developed in the non-convex case by Clarke and Vinter, among others. We extend these results to generalized differential inclusions where a signed, vector-valued measure appears. In particular, we offer a set of stratified necessary conditions in optimal control of measure-driven systems, as well as a set of standard (global) conditions under weak regularity hypotheses on the differential inclusion maps. An auxiliary result essential to our proof extends existing free end-time necessary conditions results to Clarke's stratified framework. We work in the context of pseudo-Lipschitz multifunctions, which provide localized Lipschitz-like properties in the absence of convexity.

We take a well-evolved solution concept framework in new directions, introducing a workable system of state-dependent measures and measure-based constraints, such as a forced impulse schedule, a restriction to purely discrete impulse dynamics or a state-dependent impulse restriction, and prove necessary conditions in optimal control for this new framework. This is an important step in the renewed use of measure-driven systems in modeling a broad range of applications within a familiar, mathematically sound framework.

Taken together, these results span a broad range of topics in nonlinear, state-space control in the impulsive context, and refresh the measure-driven framework, paving the way for future research and further value in applications.

Table of Contents

Abstract
Table of Contents
List of Figures
Acknowledgements
Statement of Collaboration
1 Introduction
2 Preliminaries
2.1 Function Regularity
2.2 State-Space Control Theory
2.2.1 Classical Control Systems
2.2.2 Linearity in Control Systems
2.2.3 The Differential Inclusion Setting
2.3 Nonsmooth Analysis
2.3.1 Normal Vectors and Cones
2.3.2 Subgradients
3 Impulsive Solution Concepts
3.1 Explicit Jump Maps and Hybrid Systems
3.2 Measure-Driven Systems
3.2.1 Impulsive System Solution Concept
3.2.2 Recent Results in Measure-Driven Systems
4 Closed-Loop Stabilization
4.1 Classical Stabilization via Lyapunov Functions
4.1.1 Smooth Case
4.1.2 Semiconcave Case
4.1.3 SRS Feedback
4.2 Stabilization for Impulsive Systems
4.3 Closed-Loop Feedback Stabilization Results
4.4 Stabilization Example
4.4.1 Feedback By Inspection
4.4.2 Feedback Via CLF and Formula
4.5 Final Considerations
5 Necessary Conditions in Optimal Control
5.1 Current Necessary Conditions for Nonimpulsive Problems
5.2 Statement of Main Results
5.3 PLC, TG and ESSINF in the Stretched-Time System
5.4 Free End-Time Problems
5.5 Proof of Main Results
5.5.1 Proof of Theorem 5.4
5.5.2 Proof of Theorem 5.5
5.6 Final Considerations
6 Applications via Measure Constraints
6.1 Measure Constraints
6.1.1 Impulse Budget
6.1.2 Time-Dependent Measure Constraints
6.1.3 Necessary Conditions for Systems Including Impulse-Only Dynamics
6.1.4 Case Study: Pest Control Using Natural Predators
6.2 State-Dependent Measure Constraints
6.3 Final Considerations
7 Conclusion and Open Problems
Bibliography

List of Figures

3.1 Sample time reparametrization function η and its completion η̄
4.1 "Flow" field f and "jump" field g
4.2 a(y) and b(y) versus polar angle φ(y)
4.3 Trajectory starting at (1.5, −0.9)
4.4 Reparametrization θ(s)
4.5 A CLF for the auxiliary system
4.6 Feedback b0 in auxiliary system
4.7 Feedback a
4.8 Feedback b0 in modified system
4.9 Feedback (1 − |b(z)|) a in modified system
4.10 Trajectories (s up to 1000); dashes are used in jump regions (|b| = 1 border is indicated)
4.11 First five s-seconds of the parameterization θ(s) for the bold trajectory in Figure 4.10
6.1 Approximate optimal trajectories in pest control system

Acknowledgements

I would like to express my deep gratitude to my supervisor, Philip Loewen, who has been a primary source of inspiration and encouragement throughout my mathematical career in his roles as teacher, supervisor and research colleague. I am indebted to Geraldo Silva for introducing me to this area of research, and am grateful for having him as a research collaborator and as a friend these past few years.

I appreciate the careful reading and thoughtful comments provided by my examination committee, both the longer-term participants Wayne Nagata and Anthony Pierce from the department committee, and also the more recent additions of Brian Wetton and Antony Hodgson. I also offer a special thanks to Fernando Lobo Pereira, the external examiner, for an encouraging and thorough report whose excellent suggestions have improved this final version.

Thanks to the staff: Lee, Marija, Mar, Marlowe, Verni, Ann, Jessica, Mary-Margaret, Sharon, Yvonne, Joseph, The and Thi for the many things they do for the department and graduate students in particular. I would also like to thank UBC Mathematics in general for continuing to support non-NSERC graduate students like myself.

I would like to acknowledge my MSc supervisor, Patrick Browne, and former education work supervisor Keith Taylor for their friendship and continued support of my academic endeavours.

I am fortunate enough to have more friends and family than I can reasonably mention here, but their love and support during my extended schooling has made it all worthwhile. I will single out Amy, however, since my time with her has been the best part of these years.

Statement of Collaboration

The stabilization results in Chapter 4 are due to joint work with Geraldo N. Silva, who brought his expertise on impulsive systems to our research group at UBC during a sabbatical year. Having studied the work of Rifford extensively, I was the leader in the stabilization concepts. I also proposed the use of the auxiliary system in analyzing the impulsive system. These results are the basis of a published article [23], and have been presented by each of us at different conferences (with a limited version appearing in [22]). After the initial outline and some of the background material was written, I was responsible for the manuscript preparation and submission, including production of the figures.

While Geraldo N. Silva also proposed an adaptation of current necessary conditions from Vinter or Clarke as a second project, time and geography constraints prevented further significant collaboration. The necessary conditions results of Chapter 5 are due to joint work with my supervisor, Philip D. Loewen. I developed the multiscale method in the development of the radius R in the stratified necessary conditions, the revision of the proof of Vinter, as well as the arguments for the global results.
My supervisor brought his vast experience with nonsmooth analysis and standard, nonim- pulsive optimal control. I have had sole responsibility in terms of manuscript preparation and submission. ix Chapter 1 Introduction Imagine yourself throwing a ball towards a brick wall. As it arcs through the air, the ball traces out a trajectory that is roughly parabolic, and we can predict the ball’s position and velocity using Newton’s famous equations of motion. But what happens when the ball hits the brick wall? The dominant physics at play seems to be quite different; our list of essential concepts to model includes deformation, elasticity, friction and spin. Yet all of these new forces seem to play out in an instant, after which the ball is on a new parabolic trajectory heading away from the wall. This type of scale discrepancy is at the heart of impulsive systems: one set of dynamics accounts for the “slow” time scale (the ball arcing through the air) and another the “fast” time scale (the bounce). Impulsive dynamical systems model shock behaviour, where a shock (or impulse) may be thought of as an action occurring on a very fast time scale (modeled as an instant) relative to the other dynamics of the system. In im- pulsive control systems, the magnitude and timing of the impulse may both be considered as choice variables. The two primary ways of describing such systems mathematically are: including both a continuous dynamics map and an explicit jump map that describes the discrete jumps in the trajec- tory, or using a measure in the place of a standard linear control-dependence and working with an integral equation to capture the discrete behaviour by weighting impulse times to permit action at an instant. Use of a measure does complicate the analysis somewhat as, among other issues, a full solu- tion concept must be presented; indeed, of the many applications involving impulsive models, the former method of explicit jump maps appears to be the most popular. We employ the latter concept in our discussion, however, 1 Chapter 1. Introduction as it includes a straightforward description of the action at an instant via explicit dynamical subsystems (which in turn allows more descriptive mod- eling), and permits a helpful transformation to and from a standard control problem where existing results are abundant. When the system under consideration involves a control, we may work with a differential inclusion that incorporates the control parameter as a selection from a multifunction (set-valued function). Writing such a system in a differential form so that we may include a measure as a special control parameter, we arrive at the general form dx(t) ∈ F (t, x)dt +G(t, x)dµ(t), x(0−) = x0, (1.1) where the measure µ takes on vector values in a closed cone, K, while G takes as values sets of matrices. Solutions of such a system, impulsive trajectories, may be described as a sequence of continuous arcs separated by discrete “jumps”; our solution concept is presented in detail in Section 3.2.1. One feature of our chosen method is that it permits the transformation of a given impulsive trajectory to a standard trajectory of a related system via a time reparametrization. Applying this to each of the relevant trajectories in our optimal control problem, we obtain a new optimal control problem that enjoys the same cost function, a related differential inclusion with no measure involved, but with the sacrifice that the time interval endpoint becomes a choice variable. 
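As a concrete, if simplistic, illustration of (1.1) and the jump mechanism just described, the following is a minimal numerical sketch; the scalar data F(x) = -x, G(x) = x, and the single atom of weight a at time τ are invented for this illustration and are not taken from the dissertation. The point is that the jump is obtained by running the G-subsystem over a stretch of "fast" time whose length equals the atom's weight, not by simply adding G(x(τ-))µ({τ}).

    # Minimal sketch (invented data): scalar measure-driven system
    # dx = F(x) dt + G(x) dmu with F(x) = -x, G(x) = x, and mu consisting of a
    # single atom of weight a at time tau.  The jump is computed by integrating
    # the fast subsystem dw/ds = G(w) over a units of stretched time.
    from scipy.integrate import solve_ivp

    f_flow = lambda t, x: -x        # slow ("flow") dynamics F
    g_jump = lambda s, w: w         # jump dynamics G, run in stretched time with v = 1
    tau, a, x0, T = 1.0, 0.5, 2.0, 3.0   # atom location, atom weight, initial state, horizon

    pre = solve_ivp(f_flow, (0.0, tau), [x0], rtol=1e-8)      # flow on [0, tau)
    x_minus = pre.y[0, -1]                                    # entry point x(tau-)

    fast = solve_ivp(g_jump, (0.0, a), [x_minus], rtol=1e-8)  # graph-completion subsystem
    x_plus = fast.y[0, -1]                                    # exit point; exactly exp(a) * x(tau-)

    post = solve_ivp(f_flow, (tau, T), [x_plus], rtol=1e-8)   # flow on (tau, T]

    print("x(tau-) =", x_minus, "  jump exit =", x_plus)
    # Adding G(x(tau-)) * mu({tau}) = x(tau-) * (1 + a) instead would give a
    # different (and, under this solution concept, incorrect) exit point whenever
    # G depends on x.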
This differs from the solution concepts used in, for example, [30, 43, 55, 67] where jumps give rise to dynamic subsystems on a unit interval; the time stretch we use here (also used in [23]) weights instants and produces subsystems on intervals of varying length where the jump dynamics of G are active. This dissertation provides extensions to state-of-the-art results in stabi- lization and necessary conditions in optimal control of impulsive systems via creative use a of a well-established time reparametrization. While of value in their own right, these may also serve as a template for a host of future results ranging across the field of control theory. An overarching goal is to describe a framework based on measure-driven differential inclusions where 2 Chapter 1. Introduction the various control problems involving “impulsive”, “discrete-continuous”, or “hybrid” systems may be subsumed under a unified theory. We review in Chapter 2 the mathematical tools we will employ, pri- marily the concept of differential inclusions and the associated elements of nonsmooth analysis. In Chapter 3, we provide an overview of solution concepts in impulsive systems, and introduce our measure differential inclusion solution concept, with comparison to its closest relatives in the literature. An important feature of this solution concept is a transformation whereby discontinuous trajectories are paired with continuous versions with the impulse instants “stretched out”. We review the history of impulsive control theory, drawing attention to the diverse character of the field since its first appearance in the mid-20th century, and list the significant research leaders from the various conceptual models to attempt to put the current work in context. With the stage set, we proceed to our key results. In Chapters 4 and 5, we prove fundamental results in closed-loop stabilization and necessary con- ditions in optimal control, respectively, in both cases overcoming concerns in the recent literature in the general vector-valued measure case, and adapting state-of-the-art results in standard (non-impulsive) nonlinear control theory to our impulsive framework. In Chapter 6, we present possible approaches in imposing standard types of modeling constraints on our solutions, whereby our framework covers a broad class of impulsive and hybrid systems. With this eye towards modeling issues, we prove a tailored version of our necessary conditions for optimal control which captures behaviour that is typical of contemporary impulsive models, and we discuss examples where such constraints may arise. We conclude in Chapter 7 with a summary of novel contributions in- cluded in this dissertation, as well as a number of future research directions that may be pursued in building on this fundamental work. 3 Chapter 2 Preliminaries This chapter introduces the mathematical objects and concepts that will help us treat our conceptual problems in the later chapters. After a brief discussion of function regularity in Section 2.1, we will mention some of the basic elements of control theory in Section 2.2, and follow that with a primer in Section 2.3 on the basic tools in nonsmooth analysis which have proved invaluable in the study of nonlinear control. 
2.1 Function Regularity: Lipschitz Continuity and Related Properties Early questions in the theory of differential equations were concerned with solutions of the basic first-order ordinary differential equation ẋ(t) = f(t, x), (t, x) ∈ [a, b]× Rn, x(a) = x0, (2.1) and mathematicians attempted to determine properties of x based on prop- erties of f , traditionally a vector-valued function in Rn. Unique solutions exist under fairly weak assumptions on f : that f is measurable in t for each fixed x, obeys some form of continuity in x for each t, and is bounded above by a measurable function for each (t, x) (a classic reference for existence and uniqueness theorems for solutions of differential equations is [20]). One convenient property that sufficiently captures sufficient continuity and boundedness for f is that of Lipschitz continuity in x. Definition (Lipschitz continuity for a function). A function f : [a, b]× Rn → Rn satisfies a Lipschitz condition with respect to x on the set C ⊆ 4 2.2. State-Space Control Theory [a, b]× Rn with a function K = K(t) if |f(t, x1)− f(t, x2)| ≤ K(t) |x1 − x2| holds for every (t, x1), (t, x2) ∈ C. We also write “f is K-Lipschitz in x” to refer to the same property (where C is the appropriate domain, usually not mentioned explicitly). We ask only that the differential equation (2.1) be satisfied for almost every t in the Lebesgue sense. The result, for each initial condition x0, is a solution (also called a trajectory) x(·) that is absolutely continuous on the interval [a, b]. 2.2 State-Space Control Theory 2.2.1 Classical Control Systems Many physical phenomena are modeled by the autonomous, first-order dif- ferential equation ẋ(t) = f0 (x(t)) , x(0) = x0 (2.2) where the independent variable t usually represents time, and where x takes values in a vector space (the state space). In this document, we will work with states in Rn, but results may be readily adapted for more general settings such as smooth finite-dimensional manifolds [28]. We will concern ourselves with continuous-time systems, where the underlying dynamics take place over the time interval [0,∞). There is a parallel development for discrete time systems, where the time is captured in “instants” and the trajectories are discrete sequences in the state space. This is a favoured approach in engineering disciplines, where analysis occurs primarily in the frequency domain. Solutions to the differential equation (2.2), which may also be interpreted in the sense of the corresponding integral equation, are called trajectories, and one significant topic of study in such systems is the qualitative behaviour of trajectories. In particular, if we assume that f0(0) = 0, an equilibrium 5 2.2. State-Space Control Theory exists at the origin x = 0, and various techniques exist to determine if this equilibrium is stable (nearby trajectories head towards or remain close to the origin) or unstable (nearby trajectories head away from the origin). We call a system globally asymptotically stable with respect to the origin (GAS) if, for every initial value x0 ∈ Rn, the trajectory x(t) with x(0) = x0 satisfies limt→∞ x(t) = 0. (We will consider only the special case of stability with respect to the origin.) An autonomous control system is a dynamical system of the form ẋ(t) = f (x(t), u(t)) , u(t) ∈ U, a.e. t (2.3) where the parameter u is called the control parameter. Values of this param- eter, chosen from U , the control space, may influence qualitative properties of the system, in particular stability. 
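A toy computation, not drawn from the text, makes this influence concrete: for the scalar system with dynamics ẋ = u and control constraint |u| ≤ 1, the open-loop choice u(t) = -sign(x0) for t < |x0| and u(t) = 0 afterwards steers any initial state to the origin in finite time, while u ≡ 0 leaves the state where it started. The system, the control, and the horizon below are assumptions of this sketch only.

    # Illustrative only: the choice of control changes the qualitative behaviour of
    # xdot = u, |u| <= 1 (system and control invented for this example).
    import numpy as np
    from scipy.integrate import solve_ivp

    x0 = 2.5

    def u_steering(t):
        # drive toward the origin at full speed, then switch off
        return -np.sign(x0) if t < abs(x0) else 0.0

    sol_steered = solve_ivp(lambda t, x: [u_steering(t)], (0.0, 5.0), [x0],
                            max_step=0.01)   # small steps so the switch at t = |x0| is resolved
    sol_idle    = solve_ivp(lambda t, x: [0.0], (0.0, 5.0), [x0])

    print("with the steering control, x(5) =", sol_steered.y[0, -1])  # approximately 0
    print("with u = 0,                x(5) =", sol_idle.y[0, -1])     # stays at 2.5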
As a simple theoretical setup, we as- sume f(0, 0) = 0, and think of the underlying dynamics defined by f as fixed, while the control parameter is a user-defined function that, under the right circumstances, can ensure that the origin is a stable equilibrium. We may impose regularity conditions on the function u(t) (e.g., continuity, differen- tiability), and declare only those functions satisfying these conditions to be admissible controls. A control system is globally asymptotically controllable to the origin (GAC) if, for every initial value x0 ∈ Rn, there exists an ad- missible control u(t) such that the resulting trajectory x(t) with x(0) = x0 satisfies limt→∞ x(t) = 0. Global asymptotic controllability depends on f , as in the uncontrolled case of (2.2), but could also depend on the control space which may restrict available control actions. One desirable outcome is the development of a control scheme such that the value of u(t) only depends on the current state, x(t). In other words, we seek a function k(x), called a state feedback control, such that the choice u(t) = k(x(t)) stabilizes the system. We refer to the original system (2.3) as open loop when u is explicitly a function of time; control actions are taken at every time instant according to some predetermined plan. By contrast, we call the system ẋ(t) = f (x(t), k(x(t))) (2.4) 6 2.2. State-Space Control Theory closed loop in the sense that control actions are prescribed by the current state, regardless of the time instant that state is encountered. This implies a sense of automation on the part of the control scheme based solely on measurements of the system state, hence the term “feedback”. It should be stressed that, in practice, determining whether or not a system is controllable can be a process very much removed from developing an appropriate control scheme, even when both of these are possible. The discussion in Chapter 4 will focus almost exclusively on questions arising from the latter, such as: Given a GAC control system, what sort of state feedbacks can we expect or construct that produce a closed-loop system that is GAS? 2.2.2 Linearity in Control Systems The most basic and best-understood control systems are those where the dynamics are linear in both the state and the control vectors: ẋ = Ax+Bu u ∈ U, (2.5) where A and B are matrices of size n × n and m × n, respectively. The control space U is, in elementary treatments, the unbounded space Rm, but is bounded in a more typical model setting; usually we assume convexity (allowing more natural combinations of different control configurations) and boundedness (realistic in physical models), and usually 0 ∈ U so we can turn the control “off” as well. Linear system dynamics and the effects of controls may be investigated via properties of the matrices A and B. Recalling that stability of the linear, uncontrolled differential equation ẋ = Ax is determined by the eigenvalues of A, one approach in the stabilization of (2.5) is to determine a matrix F based on A and B such that setting u = Fx produces the closed-loop system ẋ = Ax+BFx that is stable at the origin; equivalently, if all eigenvalues of the matrix 7 2.2. State-Space Control Theory A+BF have negative real part. Offering a step up in generality, control-affine systems are of the form: ẋ = h(x) + m∑ i=0 uigi(x) u ∈ U, (2.6) where h and the gi are smooth n-vector-valued functions with h(0) = 0, and the ui are scalar. 
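For the linear case (2.5) above, the eigenvalue test for A + BF is easy to carry out numerically. The following sketch uses SciPy's pole-placement routine, which returns a gain K placing the eigenvalues of A - BK, so F = -K in the notation above; the matrices A and B and the chosen pole locations are invented for illustration.

    # Illustrative sketch (matrices invented): stabilize xdot = A x + B u by
    # choosing F so that A + B F is Hurwitz (all eigenvalues in the left half-plane).
    import numpy as np
    from scipy.signal import place_poles

    A = np.array([[0.0, 1.0],
                  [2.0, -1.0]])     # unstable: eigenvalues 1 and -2
    B = np.array([[0.0],
                  [1.0]])

    K = place_poles(A, B, [-1.0, -2.0]).gain_matrix   # eig(A - B K) = {-1, -2}
    F = -K                                            # so u = F x closes the loop
    print("closed-loop eigenvalues:", np.linalg.eigvals(A + B @ F))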
The system (2.6) is also commonly expressed as: ẋ = h(x) +G(x)u u ∈ U, where the gi form the columns of the matrix G and u is the controlm-vector. (The linear case (2.5) is recaptured by taking h(x) = Ax and G(x) = B.) Such systems describe a wide variety of nonlinear phenomena where a basic linear system (2.5) is insufficient as a model. We call the function h the drift, which provides the dynamics of the system when the control u is set to zero. Control-affine systems without drift (i.e., h ≡ 0) are by themselves of great interest in robotics and other models that are inherently nonlinear but retain their current state if the control is not active. Controllability of control-affine systems like (2.6) is a more complicated issue than in the linear case, but may be analyzed using Lie derivatives. Given a smooth field h : Rn → Rn and a differentiable function V : Rn → R, the Lie derivative of V with respect to h at x is defined by LhV (x) = 〈h(x) , ∇V (x)〉 , and quantifies the flow according to the field h in the direction of the gradient of V . That is, if x(·) obeys ẋ(t) = h(x(t)), then LhV (x(t)) = ddtV (x(t)). We consider Lh as a first-order differential operator. Given another smooth field g, we may compose Lie derivatives, writing Lh ◦Lg as LhLg, a second-order differential operator: LhLgV = gTHV h + ∇V g∗h, 8 2.2. State-Space Control Theory where HV is the Hessian of V and g∗ the Jacobian of g. We define the Lie bracket of h and g as: [h, g] = g∗h − h∗g, which defines another smooth field on Rn, that with Lie derivative operator: L[h,g] = LhLg − LgLh. This bilinear, skew-symmetric bracket operation allows us to define a Lie al- gebra: a set of smooth vector fields on Rn closed under the bracket operation [·, ·]. Roughly speaking, when system (2.6) involves a sufficiently nice control space U , we need only examine the Lie algebra formed by the functions f, g1, · · · , gm to examine a variety of controllability issues. Chapter 4 of [58] provides an introduction to these concepts in the smooth case on Rn. There is a parallel development on manifolds - see [28] which incorporates this approach from the beginning. 2.2.3 The Differential Inclusion Setting An alternative approach to the study of control systems is the use of differen- tial inclusions. Rather than considering the controlled differential equation ẋ = f(x, u) u ∈ U, (2.7) where f is a function of the state and the control parameter, we consider a set-valued function (also called a multifunction) of the state: F : Rn→P(Rn), where P is the power set, so F maps vectors in Rn to sets of vectors in Rn. We will say that a multifunction is any one of closed, non-empty, or convex, to mean that the property holds for all the image sets F (x) for x in the appropriate domain. We then refer to F as the differential inclusion map in 9 2.2. State-Space Control Theory the following differential inclusion: ẋ ∈ F (x). (2.8) For each x, we call the set F (x) the velocity set for the state, x. If F is single-valued, say F (x) = {g(x)}, then (2.8) reduces to an un- controlled differential equation in the style of (2.2). Apart from this trivial connection, however, the two setups are connected by the following: suppose that F (x) = f(x,U) = {f(x, u)|u ∈ U} for the function f of (2.7), that is, the velocity set contains the velocities of all possible control actions at a given state, x. 
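A crude numerical picture of this correspondence may help; the single-input map f(x, u) = -x + u, the control set U = [-1, 1], and the discretization below are choices made only for this sketch. The velocity set F(x) = f(x, U) is approximated on a grid of control values, and the velocities generated along a trajectory driven by a (discontinuous but measurable) control are checked to lie in the corresponding sets.

    # Illustrative sketch (data invented): sampled velocity sets F(x) = f(x, U) and
    # a trajectory generated by a measurable selection u(.) of U.
    import numpy as np
    from scipy.integrate import solve_ivp

    Ugrid = np.linspace(-1.0, 1.0, 401)
    f = lambda x, u: -x + u
    F = lambda x: f(x, Ugrid)                   # sampled velocity set at state x

    u = lambda t: np.sign(np.sin(3.0 * t))      # discontinuous, but measurable, control
    sol = solve_ivp(lambda t, x: [f(x[0], u(t))], (0.0, 4.0), [2.0], max_step=0.01)

    # check that x'(t) lies in F(x(t)) at a few sample times (up to grid resolution)
    for t, x in zip(sol.t[::100], sol.y[0, ::100]):
        v = f(x, u(t))
        print("t = %.2f   dist(v, F(x)) = %.2e" % (t, np.min(np.abs(F(x) - v))))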
It is a famous result, Filippov’s Lemma (see, for example, [15]), which confirms that x is a solution of (2.8) with F (x) = f(x,U) if and only if a measurable selection u(·) of U exists such that (2.7) is satisfied. (A “measurable selection u(·) of U” means that u takes values in U and is a measurable function with respect to the time variable.) In answering questions of controllability and stability of the control sys- tem (2.7), the inclusion (2.8) lends itself more readily to geometric analysis when we consider properties of its graph, gph(F ) = {(x, v) : v ∈ F (x)} , thereby expanding our available set of tools. We extend our notion of Lipschitz continuity to multifunctions, where it can help to guarantee properties of solutions when the multifunction is used as a differential inclusion map. As opposed to the single-valued case (which it reduces to directly), the definition for multifunctions must be stated in terms of set inclusions. Definition (Lipschitz continuity for a multifunction). A multifunction F : [a, b] × Rn→Rn satisfies a Lipschitz condition with respect to x with modulus function K : [a, b]→ (0,∞), K measurable, if F (t, x1) ⊆ F (t, x2) + K(t) |x1 − x2|Bn holds for every x1, x2 ∈ Rn and for almost every t ∈ [a, b]. We also write “F 10 2.3. Nonsmooth Analysis is K-Lipschitz in x” to refer to the same property. We note that K may be a function of the time interval. A further useful extension is the notion of local Lipschitz continuity, Definition (Local Lipschitz continuity for a multifunction). A multi- function F : [a, b] × Rn→Rn satisfies a local Lipschitz condition at x with modulus function K : [a, b] → (0,∞), K measurable, if there exists ² > 0 such that F (t, x1) ⊆ F (t, x2) + K(t) |x1 − x2|Bn holds whenever x1, x2 ∈ B[x; ²] and for almost every t ∈ [a, b]. We also write “F is locally K-Lipschitz at x” to refer to the same property, or “F is locally K-Lipschitz” if this occurs for all x in the domain of F . A yet weaker form of regularity, pseudo-Lipschitz continuity, will be introduced in Chapter 5. 2.3 Nonsmooth Analysis In optimal control, even with smooth data in the initial problem formu- lation, we often arrive at objects with a nonsmooth character: functions that fail to be differentiable at isolated points (or worse), or multifunctions with graphs that have corners or cusps. These objects can be approached with the tools of nonsmooth analysis. The power in these methods is the relation of geometric properties, normal vectors in particular, of a function (or multifunction) graph, to properties analogous to differentiability. These concepts have a rich history, which the excellent standard reference of Rock- afellar and Wets [52] discusses and extends. We also mention the book of Clarke, Ledyaev, Stern and Wolenski [15] as another popular reference with a focus on control theory, and the work of Mordukhovich with many signif- icant results in variational analysis, where [40, 41] provides a summary and extensive reference list. We now proceed to define the key objects in nonsmooth analysis we will use in later chapters. 11 2.3. Nonsmooth Analysis 2.3.1 Normal Vectors and Cones Let S be a given closed set in Rn. For any point s̄ ∈ S, we say that vector z is a proximal normal to S at s̄ if there exists a constant σ > 0 such that 〈z, s− s̄〉 ≤ σ |s− s̄|2 ∀s ∈ S (in this and what follows, | · | denotes the 2-norm in Rn). We will often refer to the point s̄ as the base point when it appears in this manner. 
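A quick numerical probe of this definition, using a set and candidate vectors of my own choosing, may be useful: for S = {(a, b) : b ≥ |a|}, which has a corner at the origin, a vector z is a proximal normal at 0 exactly when the ratio ⟨z, s⟩/|s|² stays bounded above over s ∈ S near 0. The sampling scheme below is an informal check, not a proof.

    # Illustrative probe (set and vectors invented): for z = (0, -1) and z = (1, -1)
    # the sampled ratios stay <= 0, so the proximal normal inequality can hold with
    # some sigma > 0; for z = (0, 1) the ratio blows up as s -> 0, so (0, 1) is not
    # a proximal normal to S at the corner.
    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.uniform(-1.0, 1.0, 100000)
    b = np.abs(a) + 1e-3 * rng.uniform(0.0, 1.0, 100000)   # points of S near its boundary
    scale = 10.0 ** rng.uniform(-6, 0, 100000)             # spread samples over many magnitudes
    S = np.column_stack([a, b]) * scale[:, None]

    for z in ([0.0, -1.0], [1.0, -1.0], [0.0, 1.0]):
        ratio = (S @ np.array(z)) / np.sum(S**2, axis=1)
        print(z, "largest sampled <z, s>/|s|^2:", ratio.max())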
The proximal normal cone to S at s̄ is the set of all proximal normals to S at s̄, and denoted by NPS (s̄). Clearly, this cone consists only of the zero vector when s̄ is an interior point of S. When the boundary of S is smooth at s̄, the cone is one-dimensional, and we recover the classic notion of a normal vector in smooth analysis. The cone may be higher-dimensional at corners and cusps of S, but it depends on whether they are inward- or outward- pointing. An incredibly useful, related object is the limit normal cone to S at s̄, defined by NLS (s̄) = NS(s̄) = {z ∈ Rn |z = limk→∞ zk for sequence zk where zk ∈ NPS (sk), sk →S s̄ } . In other words, NLS (s̄) is the set of limit vectors resulting from sequences of proximal normals at base points in S near to s̄. This cone is sometimes referred to simply as the normal cone, which explains the second, simpler notation style in the above. These vectors can give information at boundary points were the proximal cone is trivial, and the limit normal cone is at the heart of some of the sharpest-known necessary conditions statements in optimal control, including those extended in the later chapters of the present work. There is one more popular cone associated with normals to the set S, but we first require the notion of convexification. The convex hull of a set 12 2.3. Nonsmooth Analysis S is denoted coS, and defined by coS = { k∑ i=1 λisi ∣∣∣∣∣ si ∈ S, λi ∈ [0, 1], k∑ i=1 λi = 1, k ∈ N } , the set of convex combinations of points in S. The Clarke normal cone to a set S at point s̄ is defined as NCS (s̄) = coN L S (s̄), the convex hull of the limit normal cone. 2.3.2 Subgradients We next define some generalizations of derivatives, called subgradients, and mention the relation to the normal vectors in the previous section. To do so, we first define an appropriate type of function regularity: a function Q : Rn → R ∪ {∞} is lower semicontinuous at a point x0 if lim inf x→x0 Q(x) ≥ Q(x0). The function Q is lower semicontinuous if its epigraph, the set of points lying above or on the graph of Q and denoted epiQ, is closed. Given a lower semicontinuous function Q : Rn → R∪{∞} and a point x where Q(x) ∈ R, the proximal subgradient at x, denoted ∂PQ(x), is the set of vectors z satisfying the inequality: ∃ ρ,C > 0 s.t. Q(y) − Q(x) + C||y − x||2 ≥ 〈z, y − x〉 ∀ y ∈ x+ ρBn, where Bn is the unit ball in Rn. The limiting subgradient of Q at x is defined by: ∂LQ(x) = {lim zk : xk → x, zk ∈ ∂PQ(xk)} . We define the generalized directional derivative of a locally Lipschitz func- 13 2.3. Nonsmooth Analysis tion Q at x in the direction v by DQ(x; v) = lim sup y→x,t↓0 Q(y + tv)−Q(y) t , and then the Clarke subgradient of Q at x: ∂Q(x) = {z : DQ(x; v) ≥ 〈z, v〉 ∀v ∈ Rn} . If Q is locally Lipschitz, the Clarke subgradient is the convex hull of the limiting subgradient: ∂Q(x) = co ∂LQ(x). A comprehensive calculus has been developed with these various subgra- dients. They are also of value in connection with the cones of the previous section. The matching nomenclature is intentional; for example, a vector z is a proximal subgradient of Q at x if (z,−1) ∈ NPepi f (x,Q(x)), and a similar relation is true for the limiting and Clarke subgradients. 14 Chapter 3 Impulsive Solution Concepts for Measure-Driven Differential Inclusions Impulsive systems have been studied in a mathematical framework since the mid-20th century, motivated by the space travel research programs of the time. 
In modeling control of space trajectories, a large rocket impulse for planetary escape is paired with stabilizing thrusters in orbit that are much lower in energy; the model attempted to decouple these dynamics, by treating the large impulse as a discontinuity in the state (velocity and/or position). The most-cited summary references in the English language of that period are Rishel [51] and Lawden [34]. An extensive Russian litera- ture also exists. Zavalishchin has been a significant contributor, publishing mostly in Russian; his [74] is an English survey of his results dating back to the 1960s. Miller and Rubinovich also provide a record of the Russian developments in [39]. One natural approach in determining solutions for such problems is to look at the limit of continuous trajectories, where one increases the effect of the stronger (impulse) dynamics and tracks the effect on the resulting trajectory. This turns out to be unsatisfactory under certain conditions where the limit is not unique; this is due to a suppression of the structure on the fast time scale when it is sufficiently complex (we quantify this below). Subsequent work has produced methods of dealing with this structure. The alternative, frameworks that explicitly permit discontinuous trajectories, fall into two basic categories: explicit jump maps and measure-driven dynamics. 15 3.1. Explicit Jump Maps and Hybrid Systems 3.1 Explicit Jump Maps and Hybrid Systems Many, if not most, impulsive systems have been modeled by a system of the following type on the time interval [a, b]: given some set T ⊆ [a, b] of (typically a finite number of) impulse times,{ ẋ(t) = f(t, x, u) if τ /∈ T , x(τ) = g (τ, x(τ−), vτ ) if τ ∈ T . (3.1) Here, f can be any standard control differential equation map, and deter- mines the continuous behaviour of x, while g is the explicit jump map that determines the “exit point” of a jump at t = τ , x(τ), based on input data that may include the “entry point” of the jump, x(τ−) = lim t→τ− x(t). In general, the control choices in the problem could involve both the standard control parameter u and the jump policy choice vτ at each jump instant, τ ∈ T , though in many models only one type of control is available. In [70], the three cases: u absent, v absent, or both present, are introduced as three types of impulsive system, each useful in modeling different behaviour. When u is absent, we have fixed, naturally evolving continuous dynamics and some choices to make at jump instants, providing some form of shock control, as in pesticide spray in agriculture or setting of bank rates in economics. When v is absent, we have a control system that is subject to natural shocks when it attains certain states, such as our ball thrown against the wall. When both are present, we have control choices available at both time scales, such as the space navigation problem where low-energy thrusters and high- energy launch rocket are both available to achieve a specific orbit. Remark: The term “hybrid system” has not had a consistent definition, but has been somewhat of a catch-all for systems that mix discrete and continuous behaviour. One of the more accepted definitions is a system of the form (3.1) where the map T = [a, b], and the map g depends only on the state, x. In other words, jumps may only occur for certain state 16 3.1. Explicit Jump Maps and Hybrid Systems configurations. This is typical of impact and friction modeling, for example [61]. 
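A minimal simulation of the form (3.1) may be helpful at this point; the bouncing-ball data below (drop height, gravitational constant, restitution coefficient r) are invented for illustration. Impacts form a state-dependent impulse set T (height zero with downward velocity), the jump map reverses and scales the velocity, and the printed impact times already hint at the accumulation phenomenon discussed below.

    # Sketch of the explicit-jump-map form (3.1) for a bouncing ball (numbers invented):
    # between impacts (h, v) obeys hdot = v, vdot = -g_acc; at each impact the jump map
    # resets v to -r * v(tau-).  Closed-form parabolic arcs are used, so no ODE solver
    # is needed.
    g_acc, r = 9.81, 0.7
    h, v, t = 1.0, 0.0, 0.0          # drop from rest at height 1 m

    impact_times = []
    for _ in range(8):
        # time of flight until h returns to 0: solve h + v*s - g_acc*s**2/2 = 0, s > 0
        s = (v + (v**2 + 2.0 * g_acc * h) ** 0.5) / g_acc
        t += s
        v_minus = v - g_acc * s      # "entry point" velocity x(tau-)
        h, v = 0.0, -r * v_minus     # jump map g: instantaneous reversal with energy loss
        impact_times.append(round(t, 4))

    print(impact_times)              # the gaps shrink geometrically, so the impacts accumulate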
The standard reference on these types of systems, still cited for its funda- mental results in systems like (3.1), is the book of Lakshmikantham, Băınov, and Simeonov [33]. The book diverges into various engineering treatments (linear versions), all based on analysis in the explicit jump map case. A further book by Băınov and Simeonov [9] also remains influential for its extension of the Floquet theory for periodic solutions, which is of value to applications as disparate as biological systems modeling [63] and synchro- nization of chaotic circuits [75], to name but two examples. In any case, systems of the form (3.1), due to their simplicity, require a careful notion of solution once the jump map, g, is sufficiently complicated. One significant limitation of many simple concepts is that they cannot ac- count for more than a finite number of impulses. Once an infinite number of state jumps are possible on a finite interval, some notion of a limit point for impulses must be defined. Consider our initial example of a bouncing ball, this time with the ball left to bounce on the ground. If we assume some pro- portionate dissipation of energy upon impact, the ball will rise less and less after each successive bounce, and the bounces come at shorter and shorter intervals. The limit of such a process is referred to as “Zeno behaviour”, recalling the ancient paradox of the arrow that, so the argument goes, can never reach its destination since it much first reach the halfway point, then the quarterway point, and so on; only intermediate stages are reached and so the final stage is always halfway again farther away. Of course, the ball in our example eventually comes to rest, but the solu- tion concept for the trajectory model must be carefully formulated. Goebel and Teel present such a solution concept for hybrid systems [53], which ad- dresses these concerns by a careful deconstruction of the time and space domains (see also their more recent survey with Sanfelice, [26]). The result is a framework for explicit jump maps in the case of state-dependent im- pulses, that is, when g = g(x(τ−)) only in (3.1), that permits continuation beyond Zeno behaviour. This type of state-dependent impulse motivates our work in Chapter 4. The recently published book of Haddad, Chellaboina, 17 3.2. Measure-Driven Systems and Nersesov [27] also acknowledges links between hybrid and impulsive systems, in this traditional setting and in the measure-driven framework be- low. The very comprehensive thesis of Yunt [72] includes numerical methods and optimal control with applications in engineering, focusing on the purely discrete, instantaneous change that characterizes hybrid systems and inten- tionally avoiding what he calls the “interval opening” approach, that being the measure-driven point of view we describe next. 3.2 Measure-Driven Systems We refer to a system as “measure-driven” when a dynamics term that in- volves a measure is added to a standard control differential equation or inclusion. The measure determines the activity of the impulsive dynamics, including (but not necessarily limited to) those actions modeled as instanta- neous. Roughly speaking, when the total variation of the measure weights an instant as nonzero, the impulsive dynamics run as a subsystem whose ve- locity is proportional to that weight. The continuous part of the dynamics is ignored during this action, and the state is observed to have “jumped” to the exit point of that subsystem. 
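A small planar sketch, with data invented for this purpose, shows what "running a subsystem at an instant" looks like: take jump dynamics G(x) = (-x2, x1), a rotation field, and a scalar measure with one atom of weight a. The observed jump is the endpoint of a circular arc traversed for a units of stretched time.

    # Illustrative sketch (data invented): the jump produced by an atom of weight a
    # under the rotation field G(x) = (-x2, x1) is a rotation of the state by angle a.
    import numpy as np
    from scipy.integrate import solve_ivp

    G = lambda s, w: [-w[1], w[0]]
    a = np.pi / 2                     # atom weight: a quarter turn of fast time
    entry = [1.0, 0.0]                # x(tau-)

    arc = solve_ivp(G, (0.0, a), entry, rtol=1e-9)
    print(arc.y[:, -1])               # approximately (0, 1): the exit point of the jump
    # The quarter circle traced here is the connecting path; in observed t-time the
    # state simply appears to jump from (1, 0) to (0, 1) at the atom's instant.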
The key distinguishing feature from (3.1) is that the measure-driven dynamics must provide a continuous dynamical system path connecting the entry and exit points of any jump in the state at a time t = τ , while the map g in (3.1) is under no such obligation. This connecting path represents the result of a continuous flow during a “fast- time” interval whose length scales with the measure of the time instant τ ; this all occurs at an instant in the observed, “slow time”. We now describe our solution concept in full detail. It adopts the exten- sion used in [23] and [21], which is similar to the extension [68] of [43, 44], but employs the measure-differential inclusion (as opposed to equation) frame- work. 18 3.2. Measure-Driven Systems 3.2.1 Impulsive System Solution Concept and Time Reparametrization We now provide our solution concept for the measure differential inclusion system dx(t) ∈ F (t, x)dt +G(t, x)dµ(t), x(0−) = x0 (3.2) on the time interval [0, T ]. Our basic hypotheses require F : [0, T ]×Rn→Rn and G : [0, T ] × Rn→Rn × Rm to take as values sets which are closed and nonempty. We also assume that µ takes values in the closed cone K ⊆ Rm. The solution concept we adopt was introduced in [43, 44] for the case where the measure control takes values in the positive cone of the Euclidean space Rm. We make the necessary modifications to the solution concept to allow for the measure controls to take values on the whole Euclidean space instead of only on the positive cone. We note that (3.2) is in differential form; this permits the measure, µ, to appear in full, and suggests that µ determines a new time scale, potentially faster than t. This effect will become clearer below, after we review a few key facts about measures. Denote R := R ∪ {±∞}. Let B be the Borel σ-algebra on [0,∞). We recall that a signed Borel measure is a function ν : B → R such that ν(∅) = 0, ν assumes at most one of the values ±∞ and if {Ej}∞j=1 is a disjoint sequence in B, then ν (⋃∞ j=1Ej ) = ∑∞ j=1 ν(Ej). Given a signed measure ν and any ν-measurable set E, we denote the positive variation by ν+(E) = max{0, ν(E)}, the negative variation by ν−(E) = min{0, ν(E)}, and the total variation by ν̄(E) = ν+(E)− ν−(E), noting ν(E) = ν+(E) + ν−(E). A signed Borel vector-valued measure is a function µ where each compo- nent µj , j = 1, · · · ,m, is a signed Borel measure. We denote by µ̄ the total variation of the signed vector-valued measure µ, i.e., µ̄(E) := m∑ j=1 ( µ+j (E)− µ−j (E) ) = m∑ j=1 µ̄j(E). We say that the measure µ is regular if µ̄ is regular. 19 3.2. Measure-Driven Systems It is necessary to introduce a change of the time variable before we present the solution concept; this time reparametrization will be central in our main theorems. We fix a regular, signed vector-valued measure µ, and let ψj(t) = ∫ t 0 µ̄j(ds), for j = 1, . . . ,m and t > 0, with ψj(0) = 0, and consider the nondecreasing function η(t) := t + m∑ j=1 |ψj(t)|, and the associated multifunction η̄(t) := [η(t−), η(t)] (the set values are closed intervals). We note that when t is not an atom of µ̄, this set-valued function has the singleton value η̄(t) = {η(t)}. The function η is a reparametrization of the time variable, t. We define the (single-valued) function θ to be the “inverse” of η̄; that is, s ∈ η̄(t) ⇔ θ(s) = t, and we will maintain this notation throughout the work below. Finally, we introduce the concept of robust solution for (3.2), which is the same as in [23, 43–45]. Definition (Measure Differential Inclusion Solution Concept). 
We say that x, with x(0−) = x0, is a trajectory (or solution) for (3.2) if x(t) = xac(t) + xs(t) ∀t ∈ [0,∞), where xac is absolutely continuous and satisfies ẋac(t) ∈ F (t, x(t)) + G(t, x(t))wac(t) a.e. t ∈ [0, T ], while xs(t) = ∫ t 0 gsc(σ)wsc(σ) µ̄sc(dσ) + ∫ [0,t] Ga dµ̄sa. Here, µ̄ is the total variation measure associated with µ; µsc, µsa and µac are, 20 3.2. Measure-Driven Systems respectively, the singular continuous, the singular atomic, and the absolutely continuous components of µ; wac is the time derivative of µac; wsc is the Radon-Nikodym derivative of µsc with respect to its total variation, gsc(σ) ∈ G(θ(σ), x(σ)) for all σ ∈ [0, t], and Ga(·) is a µ̄sa-measurable selection of the multifunction G̃(t, x(t−);µ({t})) : [0,∞)× Rn × Rm ↪→ P(Rn). Here, by definition, ζ ∈ G̃(t, x(t−);µ({t})) ⇔ ζ = w(η(t))− x(t −) µ̄sa({t}) for some process Xt = (w(·),Υ(·), v(·)) satisfying Υ(s) = ψ(t−) + ∫ [η(t−),s] v(σ) dσ (3.3) ẇ(s) ∈ G(θ(s), w(s)) v(s), a.e. s ∈ η̄(t) (3.4) where v : η̄(t)→ K ⊆ Rm satisfies ∫η̄(t) v(s)ds = µ({t}) and θ̇(s) + m∑ j=1 |vj(s)| = 1 a.e. s ∈ η̄(t), and w(η(t−)) = x(t−), Υ(η(t−)) = µ([0, t)), with (w,Υ) : [0,∞) → Rn × Rm absolutely continuous. We refer to a collection of such processes X = {Xt}µ({t})>0 as a graph completion associated with µ. This terminology of “graph completion” is apt: whenever x has a discon- tinuity at an instant, t0, X provides a connecting path using the G-dynamics that occurs at an instant in the slower t-time, but over the nontrivial interval [η(t−0 ), η(t0)] in the faster s-time. For any trajectory x of our measure-driven impulsive system (3.2), we may use the associated measure µ and process collection X to determine a trajectory in the following “stretched-time” sys- 21 3.2. Measure-Driven Systems tem: [ θ̇(s) ẏ(s) ] ∈ F (θ(s), y(s)) a.e. s, where F(θ, y) = {[ 1− |β|1 F (θ, y) (1− |β|1) + G (θ, y)β ]∣∣∣∣∣β ∈ K ∩ Bm1 } , (3.5) with y(s) = x(θ(s)) when η(t) is single-valued, and y follows the graph completion otherwise. The jump dynamics of (3.2) are captured here by β: when |β(s)|1 = 1, we have θ̇(s) = 0, and the dynamics of y are determined solely by GK; this corresponds to η̄(t0) taking an interval value for a “jump” instant, t0, and that portion of trajectory y is the same arc as determined by Xt0 . We will refer to the pair (x,X) as a solution for the system (3.2), but sometimes refer to this solution by x alone, with an implicit graph completion subprocess set, X. In this new system (3.5), the atoms of µ̄ have been “stretched out”, replaced by subintervals whose lengths are determined by the magnitudes of the atoms’ measures. This new trajectory in s is thus defined on a time interval determined by the inverse of the reparametrization, and conversion of a different trajectory of (3.2) may produce an s-interval of different length according to the measure used; this fact compels our discussion of free end- time problems in Section 5.4. We note that F is autonomous relative to the stretched time s; the “time” dependence from t has moved to the state variable θ. 22 3.2. Measure-Driven Systems Example measure and time reparametrization Suppose that λ is the Lebesgue measure and we define a scalar-valued µ in (4.1) by the following: for any subset M of [0, 1), µ(M) = −λ(M), for any subset P of [1, 2), µ(P ) = 2λ(P ), for any subset S of (2,∞), µ(S) = 0, and µ ({2}) = 1.5 The first three statements refer to the absolutely continuous part of µ, while the third refers to the singular atomic part. 
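As a quick numerical companion to this example (the coding details, grids, and tabulated points are my own and not part of the text), the cumulative total variation ψ(t) = µ̄([0, t]), the reparametrization η(t) = t + ψ(t) defined earlier, and its single-valued inverse θ can be computed directly; the values anticipate the reparametrization and Figure 3.1 discussed below.

    # Sketch for the example measure above: mubar has density 1 on [0,1), density 2
    # on [1,2), and an atom of mass 1.5 at t = 2, so psi(t) = mubar([0, t]) and
    # eta(t) = t + psi(t); theta is recovered by numerically inverting eta.
    import numpy as np

    def psi(t):
        return (min(t, 1.0) + 2.0 * (np.clip(t, 1.0, 2.0) - 1.0)
                + (1.5 if t >= 2.0 else 0.0))

    eta = lambda t: t + psi(t)
    print([round(eta(t), 3) for t in (0.5, 1.0, 1.5, 2.0, 3.0, 4.0)])
    # eta rises to 5 as t -> 2- and equals 6.5 at t = 2: the atom opens an
    # s-interval of length 1.5 on which the jump dynamics act.

    tt = np.linspace(0.0, 4.0, 100001)          # invert eta on a fine grid
    ss = np.array([eta(t) for t in tt])
    theta = lambda s: tt[np.searchsorted(ss, s)]
    print([round(theta(s), 3) for s in (1.0, 4.0, 5.5, 8.5)])   # 0.5, 1.667, 2.0, 4.0
    # theta(eta(t)) = t away from the atom, and theta is constant (= 2) on the
    # stretched interval [5, 6.5).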
We consider the reparametriza- tion of t given by η(t) = t+ ∣∣∣∫[0,t] µ̄(dτ)∣∣∣: 0 1 2 3 4 5 0 1 2 3 4 5 6 7 8 9 10 t e ta (t) 0 1 2 3 4 5 0 1 2 3 4 5 6 7 8 9 10 t e ta ba r(t ) Figure 3.1: Sample time reparametrization function η and its completion η̄. 23 3.2. Measure-Driven Systems In Figure 3.1, η shows the cumulative effect of µ (via µ̄) in stretching the time t. On the right, by “connecting up” the discontinuity in η, we have η̄, whose “inverse” is η̄−1(s) = θ(s). Impulsive trajectories on the interval [0, 4] look like: x(t) = xac(t)+xs(t), satisfying ẋac(t) ∈ F (t, x(t)) +G(t, x(t))wac(t) a.e. t ∈ [0, 4], where wac = −1 on [0, 1), wac = 2 on [1, 2), zero otherwise, and xs(t) = ∫ [0,t] g(σ)wsc(σ)dµ̄sc(σ) + ∫ [0,t] ga(σ)dµ̄sa(σ). Here, wsc ≡ 0. At the time instant t = 2, we “flow” from x(2−) for 1.5 stretched time units according to ξ̇(s) ∈ G (2, ξ(s)) from ξ (η(t−)) to ξ (η(t)), thereby jumping to x(2−) + ξ (η(t)). The stretched-time F-trajectory, (θ, y), has first component determined by θ(s) = η̄−1(s), as noted above, and the second component satisfies y(s) ∈ F (θ(s), y(s)) (1− |β(s)|1) + G (θ(s), y(s))β(s) for some β taking values in Bm1 ∩K, and the relation θ(s) = 1− |β(s)| ∀s ∈ [0, η̄(4)]. Final notes about our solution concept definition We emphasize that, along with the challenge it adds to the analysis, the sin- gular continuous measure part, µsc, is often undesirable in terms of physical interpretation; it is simpler and often more appropriate to work only with the pure “flow” part of µ represented by µac and the pure “jump” component of µ represented by µsa (strictly speaking, the jumps occur according to µ̄sa, as noted below). A good feature of our closed-loop feedback in Chapter 4 is that it has no such singular continuous component, and we discuss modeling approaches that avoid singular continuous measures in Chapter 6. 24 3.2. Measure-Driven Systems We also remark that the use of the total variation of the measure is essential in defining the time reparametrization. Because the measures may be signed, an instant t0 could have µ({t0}) = 0, with the function v in the subsystem at instant t0 taking, for example, the value 1 for 2 s-time units followed by the value of −1 for 2 s-time units. This is consistent with our solution concept: the instant t0 has µ-measure zero, but µ̄({t0}) = 4. In the scalar case, this adds no real effect, since the resulting trajectory will have no discontinuity at time t0. If we combine this “forward and back” motion in a vector-valued case, however, the situation is more complicated. If the vector fields commute, then a mixture of “forward and back” motion will again result in continuity of the trajectory at t0. If the fields do not commute, an interweaving of this type of motion need not return the subsystem to its starting point, and we would observe a discontinuity at time t0 even with µ({t0}) = 0, though µ̄({t0}) > 0 is certain. 3.2.2 Recent Results in Measure-Driven Systems The transformation techniques above recall the pioneering work of Rishel [51] and Warga [66] in the context of optimal control. Related solution concepts are used in a somewhat different context for impulsive dynamical systems in [10, 11, 24, 39]. We elaborate on related solution concepts and measure-drive frameworks in the next section. We now turn our attention to the most recent developments (1980s and later) in measure-driven im- pulsive control, acknowledging major contributors and identifying our most immediate predecessors. 
Dal Maso, Bressan and Rampazzo These authors developed ideas from Sussman’s work [62] on stochastic differ- ential equations with a measure in a similar position to (3.2). They produced much fundamental work in measure-driven control systems, and produced a precursor to the solution concept we use in this dissertation, even extending to vector-measures both in the case of commutative fields in G [10, 11] and with some work in the non-commutative case [12]. 25 3.2. Measure-Driven Systems Miller Boris Miller has many Russian publications regarding impulsive systems studied via time reparametrization; a key English-language reference is [39], and also the more recent [38]. Like Dal Maso, Bressan and Rampazzo, he works with what he calls “robust” solutions, which are those with commu- tative fields in G, with limited work in the noncommutative case. Research involves optimal control in many applications including observation control in radar sensing. Silva, Vinter, Pereira The solution concept in this dissertation is closest in spirit and technical details to the collaborations of these three authors in particular, and their other collaborators. They developed a solution concept, based on the work of Dal Maso, Bressan and Rampazzo (above) that decomposes the measure into parts, including the singular continuous part, and is able to handle noncommutative jump dynamics [43]. Wolenski and Zabic Further work in this vein appears in the work of Wolenski and Žabić [68], [67]. In the dissertation [73], the measure differential equation is treated in the autonomous case (the inclusion maps depend only on the state variable, not explicitly on the time variable) with convex control sets. Chapter 2 of [73] covers the solution concept with comparison to its most direct predecessor, the solution concept of Bressan and Rampazzo. Care is taken to demonstrate an equivalence in the two solution concepts. The decomposition of the measure is discussed in detail, and care is taken in examining and providing an example for the singular continuous component of the measure. The primary results of [68], [73] involve an approximation to impulsive trajectories by discrete graphs, using the graph distance. Some convergence is proved, and this system is applied to the issue of weak invariance for 26 3.2. Measure-Driven Systems impulsive systems. A similar style of result, encompassing both weak and strong invariance, has since been proved [42] using the same solution concept. Karamzin The work [30] may be considered in the Russian tradition; indeed, the pri- mary results given for comparison are those of Miller (above), and Karamzin was a student of Arutyunov, a major contributor to the field of optimal control who has also studied impulsive systems (the connection being col- laboration with Pereira [6], [7]). Karamzin addresses head-on the issue of nonuniqueness for impulses in- volving noncommutative fields in the impulse dynamics by defining a solu- tion concept where an “impulsive control” is defined by a measure and a set of appropriate graph completion subsystems based on that measure’s weight of each of the instants in the time interval in question. Rather than stretch- ing the time interval in the way we described above, this solution concept effectively stretches every instant to a unit interval, where the measure may introduce some motion in the fast time scale. 
These objects are found to form a complete metric space, with a metric that defines distance based on the total variation of the measure as well as the accumulated distances (es- sentially in the W 1,1 sense) between the graph completion subsystems (the distance function we employ in Chapter 5 measures analogous quantities in our setting). Impulsive controls with absolutely continuous measures, which produce continuous trajectories, are found to be dense in this space; this fact is exploited in proving various necessary conditions (with and without phase constraints) by using controls with absolutely continuous measures to ap- proximate arbitrary impulsive controls and using the excellent convergence properties conferred by the metric. This work avoids the singular continuous issue by working with integral equations rather than differential-form MDIs, and treating general measures as limits of absolutely continuous ones. 27 3.2. Measure-Driven Systems Ahmed N. U. Ahmed has produced a string of articles, including [1–4], proving a number of generalizations not found in the other measure impulsive systems literature, such as work with operator-valued measures (by contrast, this thesis considers regular, signed, vector-valued measures with values in a cone, a much more typical setup). Work is attempted in Banach spaces (we remain in Rn). However, there does not appear to be much collaborative overlap with the others mentioned above, as the reference lists of any of the citations can attest. This is unfortunate, due to the volume of abstract results with a clear eye for applications, especially where the linear subcase is concerned. 28 Chapter 4 Closed-Loop Stabilization In this chapter, we study the stabilization problem for the impulsive control system dx(t) = m∑ i=1 fi (x(t))ui(t)dt + q∑ j=1 gj (x(t))µj(dt) a.e. t ∈ [0,∞), x(0) = x0, (4.1) where fi, gj : Rn → Rn, for i = 0, 1, · · · ,m, j = 1, · · · , q are smooth func- tions, u(t) ∈ U a.e., U is a convex, compact, nonempty subset of Rm that contains 0, the µj are real-valued, regular (signed) Borel measures defined on Borel subsets of [0,∞), and x0 ∈ Rn is the initial state. More specifically, we study the calculation of state-dependent control choices of u and µ in (4.1) such that the resulting impulsive dynamical system is globally asymp- totically stable (GAS) with respect to the origin; that is, for any choice of all x0 ∈ Rn, the impulsive trajectories for (4.1) satisfy lim t→∞x(t) = 0. We have chosen to work in the measure differential equation framework here, as opposed to measure differential inclusions (which will be used in Chapters 5 and 6); this will be more compatible with the results from the literature on which we draw. The solution concept involved is the same as that defined in Chapter 3 in terms of the measure structure and the resulting time reparametrization described by η and θ. This chapter is organized as follows. We begin with a discussion in Section 4.1 of the necessary background involving Lyapunov functions, a 29 4.1. Classical Stabilization via Lyapunov Functions key tool in the classical theory which we update to the impulsive system (4.1). Section 4.2 presents some background in terms of stabilization for control systems in the impulsive case as outlined in Section 3.2.1. Our main stabilization results for impulsive systems are presented and proved in Section 4.3, with the time reparametrization of our solution concept as a key element. 
We demonstrate our closed-loop framework with an example in Section 4.4 and conclude in Section 4.5 with a discussion of future directions for this topic.

4.1 Classical Stabilization via Lyapunov Functions

A long-favoured technique in the analysis of dynamical systems is due to Lyapunov [37]. To describe it, we recall that a function Q : R^n → R is proper if its sublevel sets {x : Q(x) ≤ c} are bounded for all c, which in this case is equivalent to being weakly coercive, that is, lim_{|x|→∞} Q(x) = ∞. We say Q is positive definite if Q(0) = 0 and Q(x) > 0 for all x ≠ 0.

A Lyapunov function (LF) for the control-free system ẋ = f(x) with f(0) = 0 is a continuous function V : R^n → R that is proper, positive definite, and satisfies the infinitesimal decrease condition: there exists a proper, positive definite, continuous function W defined on R^n \ {0} such that

∀x ∈ R^n \ {0}, ∀ζ ∈ ∂_P V(x),   ⟨f(x), ζ⟩ ≤ −W(x).

One of Lyapunov's results in the seminal work [37] is that the existence of a smooth Lyapunov function implies that a dynamical system is GAS. A significant converse result, that a GAS system ẋ = f(x) with f continuous must admit a smooth Lyapunov function, is due to Kurzweil [32].

This notion can be extended to control systems. A control-Lyapunov function (CLF) for the system ẋ = f(x, u) is a continuous function V : R^n → R that is proper, positive definite and satisfies the infinitesimal decrease condition: there exists a proper, positive definite, continuous function W defined on R^n \ {0} such that

∀x ∈ R^n \ {0}, ∀ζ ∈ ∂_P V(x),   min_{u ∈ U} ⟨f(x, u), ζ⟩ ≤ −W(x).

If a CLF is available for a given control system, then stability is assured. A variety of converse theorems have also been proved, where we assume a system is GAC and prove the existence of a CLF.

Given a GAC system, the Lyapunov function indicates appropriate control actions at each state. More specifically, when the system is at state x, a stabilizing action is to choose a direction that follows the decrease of the Lyapunov function. Following this idea at every state in the space gives a closed-loop feedback, k(x). Due to the "min" in the decrease condition, it is clear that the control-Lyapunov function is then a Lyapunov function for the closed-loop system.

Regularity of the feedback k is a central question in control systems. The stability and solution properties of the closed-loop system (2.4) depend on the regularity properties of k. A further practical advantage of a more regular feedback is robustness: if k is sufficiently regular, then control actions for nearby states will be similar in magnitude and direction, hence (2.4) may be robust with respect to perturbations in state (these perturbations are often called "measurement error"). For a more detailed overview of Lyapunov function and feedback regularity, we refer the reader to Clarke's survey [16].

4.1.1 Smooth Case

Though its existence cannot be guaranteed, the availability of a smooth CLF simplifies many calculations. In the control-affine case, for example, the Lyapunov decrease condition may be expressed in terms of Lie derivatives of the CLF; for the system

ẋ = f(x) + Σ_{j=1}^{m} u_j g_j(x),   u ∈ B^m,

where B^m is the m-dimensional unit ball, we state the decrease condition as:

∀x ≠ 0, ∃u such that L_f V(x) + Σ_{j=1}^{m} L_{g_j} V(x) u_j < 0.   (4.2)
Moreover, there is a formula in this case for a state feedback control due to Sontag and Lin [59] ("Sontag's formula" for the unbounded control case is mentioned often in the literature, originating in [57]). If the system admits a differentiable Lyapunov function V that satisfies the "small control property", which requires that the decrease condition applies for small control actions near the origin (one sufficient condition for these to hold is the existence of some other continuous feedback), then the function

k(x) = ϕ( L_f V(x), ‖L_G V(x)‖ ) L_G V(x)

is a state feedback that stabilizes the system. Here, L_G V(x) is the m-vector with entries L_{g_j} V(x), and ϕ is defined by

ϕ(a, b) = −( a + √(a^2 + b^4) ) / ( b^2 ( 1 + √(1 + b^2) ) )  if b > 0,   ϕ(a, b) = 0  if b = 0.

The feedback k globally stabilizes the closed-loop system, and is guaranteed to be smooth for x ≠ 0 and continuous everywhere.

4.1.2 Semiconcave Case

Only relatively recently has a minimal regularity for Lyapunov functions been established in the general control-affine case. In a series of publications, Rifford answers a long-standing pair of questions in nonlinear control: Given a GAC system, what sort of feedback is guaranteed to exist? Given a GAC system, what sort of Lyapunov functions must exist?

One earlier approach to these questions was to prove the existence of a sample-and-hold style feedback, that is, a feedback where the control value is fixed over time intervals of possibly varying length. In practice, this is often all that is available anyway; one assumes continuous dynamics, but can only measure and adjust the control action at discrete time instants. For this reason, much of the literature discusses the limits of such an approach, for example, conditions under which a stabilizing feedback may be designed with some lower bound on the sampling interval; arbitrarily fast switching of the control value, called "chattering", may work in mathematical theorems, but is undesirable from an engineering standpoint [16], [31].

Sontag's formula does not apply in general since no smooth CLF exists for certain systems. Sontag did establish in [56], however, that a continuous CLF always exists for a GAC system.

Rifford improved upon this twice in the context of model (2.6), first demonstrating in [47] the existence of a Lipschitz CLF, and then more recently in [48] the existence of a semiconcave (see below) CLF leading to a global stabilizing feedback that is smooth except for a set of measure zero, which coincides with the singular set of the CLF. He required the additional assumption that the control-affine system satisfy a "bracket-generating condition" at each state, i.e., that the Lie algebra formed by the system's component functions span the state space.

A continuous function h : R^n → R is called semiconcave if for any x_0 ∈ R^n there exist ρ, C > 0 such that

h(x) + h(y) − 2h( (x + y)/2 ) ≤ C ‖x − y‖^2   ∀x, y ∈ x_0 + ρB.

Such functions may be considered locally as the sum of a smooth function and a concave function [13]. In fact, they are smooth wherever they are not concave. For control-affine systems, Rifford demonstrates in [48] the construction of a semiconcave CLF, which in turn may be used to define a feedback that is smooth except on the CLF's singular set (where the CLF is necessarily concave).

4.1.3 SRS Feedback

The latest feedback results of Rifford (apart from adapting the whole scheme to manifolds) further refine the above in two and three dimensions (in [49] and [50], respectively),
providing further detail about the feedback for states on the singular set of the semiconcave CLF. A smooth repulsive stabilizing (SRS) feedback is defined as a stabilizing feedback that is smooth except possibly on the singular set of the CLF (which has Lebesgue measure zero) and which causes trajectories of the closed-loop system to remain outside that singular set for all positive time.

It is thus known for standard control-affine systems, those of the form

ẋ(t) = Σ_{i=1}^{m} f_i(x(t)) u_i(t),

that global asymptotic controllability implies the existence of a control-Lyapunov function which is semiconcave outside the origin [47]. Such a control-Lyapunov function allows the construction of a stabilizing feedback which is continuous outside a zero-measure set [48]. The state of the art regarding stabilization theory for standard control systems on R^n can be found in [50].

4.2 Stabilization for Impulsive Systems

Stability in terms of control-Lyapunov functions for impulsive control systems has been addressed, for example, in [44] and [45]. The stability of solutions for this class of impulsive systems via perturbation of the measures is to be found in [39]. When a control measure is approximated by a sequence of absolutely continuous measures, several authors have proved, under a variety of hypotheses, that the solutions to the systems with the corresponding absolutely continuous measures converge to the solution of the system corresponding to the general measure; recent results include [54] for scalar-valued measures, and [30, 68] for vector-valued measures which take their values in closed, convex cones in Euclidean space.

Despite all the research in the field of impulsive control systems and the recent advances in feedback stabilization theory for standard control systems, to the best of our knowledge, very little is known about stabilization of impulsive systems in the context of measure differential equations (or inclusions). This work is an attempt to bring some understanding to how one might build a stabilizing feedback law for such systems. That aim is accomplished by proving that it is possible to construct a smooth (Theorem 4.2) or an almost everywhere smooth (Theorem 4.1) feedback law for system (4.1) under appropriate assumptions.

An admissible control for system (4.1) consists of a pair (u, µ) where u(·) is a measurable function taking values in U and µ = (µ_1, ..., µ_q) is a regular Borel signed vector measure. The impulsive system (4.1) is globally asymptotically controllable (GAC) if, for each x_0 ∈ R^n, there is an admissible control (u, µ) such that the corresponding trajectory x(t; u, µ, x_0) tends to zero as t increases to infinity.

System (4.1) is an open-loop system. We now consider the corresponding closed-loop system

dx(t) = Σ_{i=1}^{m} f_i(x(t)) k_i(x(t)) dt + Σ_{j=1}^{q} g_j(x(t)) ρ_j(x(t); dt),   a.e. t ∈ [0,∞),   x(0) = x_0,   (4.3)

obtained by replacing u_i(t) with k_i(x(t)) and µ_j(dt) with ρ_j(x(t); dt). The closed-loop system (4.3) is said to be globally asymptotically stable at the origin (GAS) if

1. for every x_0 ∈ R^n there exists a trajectory x(t; x_0) starting at x_0 that tends to zero as t goes to infinity;

2. for each ε > 0 there is a δ > 0 such that for each x_0 ∈ R^n with ‖x_0‖ ≤ δ, the trajectory x(·; x_0) starting at x_0 that tends to zero as t goes to infinity, as in Item 1, obeys ‖x(t; x_0)‖ ≤ ε for all t ≥ 0.
System (4.3) may have more than one solution at each starting point due to the possible noncommutativity of the vector fields g_j. In the definition of stability above, it would therefore be desirable that all trajectories starting at the same initial value obey the listed conditions. However, as will be clear in Section 4.3, our method in the main stability result guarantees only that one solution among infinitely many possibilities can be chosen to satisfy the stability conditions. When either the control measure takes values in a one-dimensional vector space (q = 1), or the vector fields g_1, ..., g_q associated to the control measure are commutative, the system (4.3) has a unique solution for each initial point. Hence it is GAS in the usual sense (all trajectories tend to zero).

We also consider the associated auxiliary system:

y′(s) = Σ_{i=1}^{m} f_i(y(s)) α_i(s) + Σ_{j=1}^{q} g_j(y(s)) β_j(s),   a.e. s ∈ [0,∞),   y(0) = x_0.   (4.4)

Admissible control functions for (4.4), which is not impulsive, are Lebesgue measurable functions α = (α_1, ..., α_m) : [0,∞) → U and β = (β_1, ..., β_q) : [0,∞) → B^q_1 ⊂ R^q, where B^q_1 := {β ∈ R^q : Σ_{j=1}^{q} |β_j| ≤ 1}, the closed unit "ball" in the 1-norm.

We say that V : R^n → R is a control-Lyapunov function (CLF) for (4.4) if it is positive definite (i.e., V(0) = 0, and V(y) > 0 for y ≠ 0), proper (i.e., V(y) → ∞ when ‖y‖ → ∞) and satisfies the infinitesimal decrease condition

∀y ∈ R^n \ {0}, ∀ζ ∈ ∂_P V(y),   min_{α ∈ U, β ∈ B^q_1} ⟨ζ, f(y)α + g(y)β⟩ ≤ −W(y),   (4.5)

for some continuous positive definite function W, where

f(y)α + g(y)β := Σ_{i=1}^{m} f_i(y) α_i + Σ_{j=1}^{q} g_j(y) β_j.

When system (4.1) is GAC, then system (4.4) is also GAC (Lemma 4.1, in Section 4.3) and hence, by Theorem 2.4 of [48], there exists a semiconcave control-Lyapunov function V : R^n → R which satisfies an exponential decrease condition (that is, obeys (4.5) with W = γV for some γ > 0). When a control-Lyapunov function is known, it enables the construction of a feedback law for the non-impulsive system via a generalization of Sontag's formula [59] to the nonsmooth case (Theorem 2.7 of [48]). Moreover, in the case n = 2, it is possible (see [49]) to exploit the affine structure of the system, combined with the semiconcavity properties of the Lyapunov function, to prove the existence of a smooth repulsive stabilizing (SRS) feedback law for system (4.4), which is smooth outside a zero-measure set and directs any trajectory starting on that special set to leave the set for all positive time (i.e., the set is "repulsive"). Alternatively, an everywhere differentiable CLF may exist, in which case we may work with Sontag's result [59] more directly.

In the present work, we set out to adapt such feedback laws for the impulsive system (4.1). Existence of a continuous feedback control on a dense open set of R^n (or possibly everywhere on R^n) will be proved using the fact that the auxiliary non-impulsive system (4.4) is GAC whenever the original impulsive system (4.1) is GAC. Then we assume a control-Lyapunov function for the auxiliary system and deduce an appropriately smooth feedback law a : R^n \ {0} → U and b : R^n \ {0} → B^q_1. Using this feedback law, we write a closed-loop system

y′(s) = f(y(s)) a(y(s)) + g(y(s)) b(y(s)),   y(0) = x_0.   (4.6)

This system is GAS in the sense of Carathéodory solutions.
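To see what the decrease condition (4.5) asks for in practice, the following is a minimal numerical sketch using hypothetical data (a single flow field, a single jump field, and simple quadratic choices of V and W, none of which are the constructions of this chapter). It checks (4.5) at a few sample states, exploiting the fact that the minimum of a function affine in (α, β) over U × B^1_1 is attained at a vertex.

```python
import numpy as np

# A minimal sketch (illustrative data only): check the CLF decrease condition
# (4.5) for a hypothetical planar system with m = q = 1, U = [0, 1] and
# B^1_1 = [-1, 1].  Here f is a stable "flow" field, g a rotation "jump" field,
# and V, W are simple quadratic placeholders.
f = lambda y: -y                               # flow field f_1
g = lambda y: np.array([y[1], -y[0]])          # jump field g_1
V = lambda y: float(y @ y)                     # candidate CLF
W = lambda y: float(y @ y)                     # decrease rate

def grad_V(y, h=1e-6):
    # central differences stand in for the proximal subgradient where V is smooth
    e = np.eye(len(y))
    return np.array([(V(y + h*e[i]) - V(y - h*e[i])) / (2*h) for i in range(len(y))])

def decrease_holds(y):
    # min over alpha in [0,1], beta in [-1,1] of <zeta, f(y)*alpha + g(y)*beta>
    # is attained at a vertex of the box: min(0, <zeta,f>) - |<zeta,g>|.
    zeta = grad_V(y)
    return min(0.0, zeta @ f(y)) - abs(zeta @ g(y)) <= -W(y) + 1e-8

samples = [np.array([1.0, 0.2]), np.array([-0.5, 1.0]), np.array([0.3, -0.7])]
print(all(decrease_holds(y) for y in samples))   # expect True for this toy data
```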
From the feedback scheme a(·) and b(·), we are able to construct the feedback pair k and ρ for the impulsive system via the "time-stretching" transformation explained in Section 3.2.1.

4.3 Closed-Loop Feedback Stabilization Results

We now state our main results.

Theorem 4.1. If n = 2 or n = 3 and the system (4.1) is GAC and satisfies Hörmander's bracket condition, Lie{f_1, ..., f_m, g_1, ..., g_q} = R^n, then it permits a closed-loop state feedback (k, ρ) and a zero-measure set Σ such that

i) On R^n \ Σ, k(·) is smooth and ρ(·; dt) is continuous.

ii) Any trajectory x(·) with x(0) ∈ Σ obeys x(t) ∉ Σ for all t > 0.

iii) The closed-loop system (4.3) is GAS if n = 2, or asymptotically stable in a neighbourhood of the origin if n = 3.

Theorem 4.2. If the system (4.1) is GAC and permits a differentiable CLF, then it further permits a closed-loop state feedback (k, ρ) with k(·) smooth and ρ(·; dt) continuous such that the closed-loop system (4.3) is GAS.

To develop a feedback control for (4.1), we consider the auxiliary system

y′(s) = f(y(s)) α(s) + g(y(s)) β(s),   α(s) ∈ U,   β(s) ∈ B^q_1.   (4.7)

This is a control-affine system for which many feedback results are available, typically derived from analysis of a control-Lyapunov function (CLF).

Lemma 4.1. If system (4.1) is GAC then system (4.7) is GAC.

Proof. Suppose system (4.1) is GAC and let x_0 ∈ R^n be given. Then there exists an admissible process (x, u, µ) for (4.1) such that x(0) = x_0 and x(t) → 0 as t ↑ ∞. Fix the functions η and θ as in Section 3.2.1, and let {t_i}_{i=1}^{∞} be an enumeration of the points of discontinuity of the reparametrization function η. Set α(s) := u(θ(s)) θ̇(s), and take β(s) ∈ B^q_1 to be the reparametrization selection associated to µ, for almost every s ∈ [0,∞). Noting that θ̇(s) = 1 − |β(s)|_1, we can rewrite (4.7):

z′(s) = f(z(s)) (1 − |β(s)|_1) u(θ(s)) + g(z(s)) β(s).   (4.8)

The unique solution of (4.8) corresponding to initial value x_0 is given by

z(s) = x(θ(s)) if s ∈ [0,∞) \ ∪_{i=1}^{∞} η̄(t_i);   z(s) = ξ(s) if s ∈ ∪_{i=1}^{∞} η̄(t_i),

where ξ denotes the corresponding jump arcs of the graph completion. Since θ(s) → ∞ as s ↑ ∞ and due to the definition of z(·), we conclude that z(s) → 0 as s ↑ ∞, as required.

We will now treat the system (4.8) as an open-loop system in its own right:

z′(s) = f(z(s)) (1 − |β(s)|_1) v(s) + g(z(s)) β(s).   (4.9)

We seek a state feedback control scheme for (4.9), and we may then transform back to the impulsive setting. We first determine such a scheme for system (4.7). Given stabilizing feedback controls a(·) and b(·) for (4.7), we say that V is a Lyapunov function for the closed-loop system with α(s) = a(y(s)) and β(s) = b(y(s)) if V is positive definite, proper, and satisfies, for all y ∈ R^n \ {0} and all ξ ∈ ∂V(y):

Σ_{i=1}^{m} ⟨f_i(y) a_i(y), ξ⟩ + Σ_{j=1}^{q} ⟨g_j(y) b_j(y), ξ⟩ ≤ −W(y),   (4.10)

for some positive continuous function W. Existence of such a function guarantees that all trajectories of the closed-loop system converge to the origin as time increases.

When a differentiable CLF exists, Sontag [59] has determined a formula for a robust continuous feedback for closing the loop. The most current general results given our standing assumptions on f and g, however, are due to Rifford [48–50], where the existence of a semiconcave CLF is proved, along with a state feedback control that is smooth except on a stratified singular set of measure zero.
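When a differentiable CLF is available, this loop-closing step can be made concrete. The following is a minimal sketch of the Lin–Sontag bounded-control formula recalled in Section 4.1.1, applied to hypothetical data (the drift f0, the fields in g_list and the CLF V below are illustrative placeholders; this is not the construction used in the proofs of Theorems 4.1–4.2).

```python
import numpy as np

# Sketch of the bounded-control formula from Section 4.1.1: given a
# differentiable CLF V for xdot = f0(x) + sum_j u_j g_j(x), return
# k(x) = phi(L_f V, ||L_G V||) * L_G V.  All concrete data here are placeholders.
def sontag_feedback(x, f0, g_list, grad_V):
    zeta = grad_V(x)
    a = float(zeta @ f0(x))                               # L_f V(x)
    LG = np.array([float(zeta @ g(x)) for g in g_list])   # entries L_{g_j} V(x)
    b = np.linalg.norm(LG)
    if b == 0.0:
        return np.zeros(len(g_list))
    phi = -(a + np.sqrt(a**2 + b**4)) / (b**2 * (1.0 + np.sqrt(1.0 + b**2)))
    # the value stays in the unit ball when V is a genuine CLF with the
    # small control property
    return phi * LG

# toy data: zero drift, one rotation field and one radial field, V = |x|^2
f0 = lambda x: np.zeros(2)
g_list = [lambda x: np.array([x[1], -x[0]]), lambda x: -x]
grad_V = lambda x: 2.0 * x
print(sontag_feedback(np.array([1.0, 0.5]), f0, g_list, grad_V))
```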
Indeed, there may be some flexibility in the determination of a feedback control at this step; our interest is in how such a control can be used in our impulsive system.

We will discuss our method first in the context of Theorem 4.1. We are assured by Lemma 4.1 that (4.7) is GAC; let V be a semiconcave CLF for (4.7), which is then a Lyapunov function for the closed-loop system with some SRS feedback controls a(·) and b^1(·) (this is a feature of the construction):

y′(s) = f(y(s)) a(y(s)) + g(y(s)) b^1(y(s)),

where a(·), b^1(·) and V satisfy the decrease condition (4.10) with W = W_0 for some function W_0 as above. For any y where V(·) is differentiable, we group the terms using the following notation:

L_F V(y) = Σ_{i=1}^{m} ⟨f_i(y) a_i(y), ∇V(y)⟩,   L_G V(y) = Σ_{j=1}^{q} ⟨g_j(y) b^1_j(y), ∇V(y)⟩.

(We note that L_F V is the sum of Lie derivatives with respect to the a_i f_i.) We may thus rewrite the decrease condition (4.10) as:

L_F V(y) + L_G V(y) ≤ −W_0(y),   ∀y where ∇V(y) exists.   (4.11)

We claim that V is a CLF for (4.9), which we will prove by showing that V is a Lyapunov function for the closed loop employing the control scheme a(·) with a modified version of b^1(·). Where V(·) is smooth, we will replace b^1(·) with b^0(·) = M(·) b^1(·), where M(·) is defined by:

M(z) = 0  if L_G V(z) ≥ 0;   M(z) = φ( L_G V(z) / W_0(z) )  if −1 ≤ L_G V(z)/W_0(z) < 0;   M(z) = 1  if L_G V(z) < −W_0(z).   (4.12)

Here, φ(·) is any monotonic decreasing scalar function that ensures a smooth transition from 1 to 0 over the interval [−1, 0] and has φ(−1/2) = 1/2.

Lemma 4.2. For any z where V is differentiable,

Σ_{i=1}^{m} ⟨(1 − |b^0(z)|_1) f_i(z) a_i(z), ∇V(z)⟩ + Σ_{j=1}^{q} ⟨g_j(z) b^0_j(z), ∇V(z)⟩ ≤ −W_0(z)/4.

In other words, V is a Lyapunov function for the closed-loop version of (4.9) using a(·) and b^0(·) as feedback controls, satisfying the decrease condition (4.10) with W = W_0/4.

Proof. Using r(z) := 1 − |b^0(z)|_1 = 1 − M(z)|b^1(z)|_1 and the linearity of the Lie derivatives, we may rewrite the desired inequality as:

r(z) L_F V(z) + M(z) L_G V(z) ≤ −W_0(z)/4.   (4.13)

We examine four cases:

• When L_G V(z) ≤ −W_0(z), we have M(z) = 1; noting that 0 ≤ r(z) ≤ 1 and recalling (4.11), (4.13) holds.

• If −W_0(z) < L_G V(z) ≤ −W_0(z)/2, then 1/2 ≤ M(z) ≤ 1 and, due to (4.11), we have L_F V(z) < 0, hence

r(z) L_F V(z) + M(z) L_G V(z) ≤ 0 + (1/2)( −W_0(z)/2 ) = −W_0(z)/4.

• If −W_0(z)/2 < L_G V(z) < 0, then 0 ≤ M(z) ≤ 1/2, so 1/2 ≤ r(z) ≤ 1 (recall that |b^1|_1 ≤ 1), while (4.11) ensures that L_F V(z) < −W_0(z)/2, hence

r(z) L_F V(z) + M(z) L_G V(z) ≤ (1/2)( −W_0(z)/2 ) + 0 = −W_0(z)/4.

• Finally, for L_G V(z) ≥ 0, we have M(z) = 0 and thus r(z) = 1; then (4.13) follows directly from (4.11).

This completes the proof.

Our feedback control scheme a(·), b^0(·) ensures stability of the closed-loop system:

z′(s) = f(z(s)) (1 − |b^0(z(s))|_1) a(z(s)) + g(z(s)) b^0(z(s)).   (4.14)

In order to reverse the change of variables to recover our impulsive system, we may need to further modify this feedback to prevent a jump taking "infinite time" if |b^0(·)|_1 = 1 on any neighbourhood of the origin; this may be accomplished by dividing the value of b^0(·) in half near the origin. We pick a tolerance ε from the origin and insist that the feedback decrease smoothly from b^0(·) to (1/2) b^0(·) between radius ε and ε/2. We call the resulting feedback control b(·), and note that trajectories still head towards the origin, though perhaps more slowly in this small region than when b^0(·) was in use.
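The gating in (4.12) and the near-origin attenuation are straightforward to implement. The sketch below uses the transition function φ(w) = (1/2)(1 − cos(πw)) (the same choice made later in Section 4.4.2); L_G V, W_0 and b^1 are assumed to be given callables, and the linear ramp near the origin is a simplification of the smooth decrease described above.

```python
import numpy as np

# Illustrative sketch of the gating (4.12) and the near-origin attenuation.
# LGV, W0 and b1 are hypothetical callables supplied by the user.
def phi(w):
    # one admissible transition function: smooth on [-1, 0],
    # phi(-1) = 1, phi(0) = 0, phi(-1/2) = 1/2
    return 0.5 * (1.0 - np.cos(np.pi * w))

def M(z, LGV, W0):
    ratio = LGV(z) / W0(z)
    if ratio >= 0.0:
        return 0.0
    if ratio < -1.0:
        return 1.0
    return phi(ratio)

def b_feedback(z, b1, LGV, W0, eps=0.1):
    b0 = M(z, LGV, W0) * np.asarray(b1(z))      # gated feedback b^0 = M * b^1
    r = np.linalg.norm(z)
    if r >= eps:                                # no change away from the origin
        scale = 1.0
    elif r <= eps / 2:
        scale = 0.5
    else:
        # ramp from 1 down to 1/2 between radius eps and eps/2
        # (a linear ramp for brevity; a smooth ramp could reuse phi)
        scale = 0.5 + 0.5 * (r - eps / 2) / (eps / 2)
    return scale * b0
```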
Indeed, it is easy to check that this modification does not disrupt the inequalities in the proof of Lemma 4.2.

We are now ready to reverse our change of variables. This will be done one trajectory at a time, after which we will "paste" the resulting trajectories together to demonstrate that the impulsive process is of sufficient regularity. Let ξ ∈ R^n be given. We denote the unique solution to (4.14) with initial point ξ by z_ξ. Define

θ_ξ(s) = ∫_0^s ( 1 − |b(z_ξ(σ))|_1 ) dσ,   s ∈ [0,∞),
ρ_ξ(A) = ∫_{θ_ξ^{-1}(A)} b(z_ξ(τ)) dτ,   ∀A ∈ B.

The increasing function θ_ξ(·) maps [0,∞) to [0,∞); it is not difficult to verify that ρ_ξ(dt) is a signed, vector-valued regular measure. It is now possible to show, by changing variables, that x(t) = z_ξ(η(t)) is a solution of the closed-loop system

dx(t) = f(x(t)) a(x(t)) dt + g(x(t)) ρ_ξ(x(t); dt)   (4.15)

satisfying x(0) = ξ, and which furthermore tends to the origin as t ↑ ∞. We refer to [44] for the details. We remark that a(·) is unchanged throughout this process, and hence retains its regularity.

It remains to show that the measure in (4.15) is continuous with respect to the chosen initial value ξ, and hence with respect to the resulting trajectories z_ξ(·) and x(·). We first determine that θ̇_ξ(·) is continuous in ξ. Let ε > 0 be given and suppose that ξ_0 and ξ_1, outside the singular set of V, are the initial points for trajectories z_0 and z_1 of (4.14), respectively. For i = 0, 1, we define

θ_i(s) = ∫_0^s ( 1 − |b(z_i(σ))|_1 ) dσ,   s ∈ [0,∞),
ρ_i(A) = ∫_{θ_i^{-1}(A)} b(z_i(τ)) dτ,   ∀A ∈ B.

Then θ̇_i(s) = 1 − |b(z_i(s))|_1, and for fixed s̄ ∈ [0,∞), we have

| θ̇_1(s̄) − θ̇_0(s̄) | = | |b(z_1(s̄))|_1 − |b(z_0(s̄))|_1 | < ε   (4.16)

as long as |z_1(s̄) − z_0(s̄)| is sufficiently small, which we may guarantee by ensuring their initial points ξ_0 and ξ_1 are sufficiently close together; it is important here that b(·) is SRS, so any points of discontinuity for b(·) are not an issue since we are only looking along trajectories. Continuity of θ_ξ(·) in ξ follows directly from (4.16), noting of course that θ_ξ(0) = 0 for any initial point ξ. As a monotone continuous function, θ_ξ(·) has a maximal monotone set-valued inverse, θ_ξ^{-1}(·). Given a nonempty bounded open set A ⊂ [0,∞),

|ρ_1(A) − ρ_0(A)| ≤ | ∫_{θ_1^{-1}(A)} b(z_1(τ)) dτ − ∫_{θ_1^{-1}(A)} b(z_0(τ)) dτ |
   + | ∫_{θ_1^{-1}(A)} b(z_0(τ)) dτ − ∫_{θ_0^{-1}(A)} b(z_0(τ)) dτ |
 ≤ ∫_{θ_1^{-1}(A)} |b(z_1(τ)) − b(z_0(τ))| dτ + ∫_{θ_1^{-1}(A) Δ θ_0^{-1}(A)} |b(z_0(τ))| dτ
 ≤ m( θ_1^{-1}(A) ) max |b(z_1(τ)) − b(z_0(τ))| + m( θ_1^{-1}(A) Δ θ_0^{-1}(A) ).

(Here, "m" refers to the Lebesgue measure, and |·| is the usual 2-norm.) The first term is small (as before) for sufficiently close initial values, while the second is small due to the uniform convergence of the θ_i(·) with respect to initial values, which implies a graph convergence for the θ_i^{-1}.

Having demonstrated the desired regularity in our measure for the jump dynamics, we have completed the proof of Theorem 4.1. We remark that ρ_ξ has no singular continuous component, due to the regularity of b(·).

The arguments above still apply if we drop the assumption that n = 2 but insist that the original CLF, V, be differentiable (indeed, our analysis focuses on the regions where V is differentiable), in which case the resulting feedbacks a and b^1 (and thus b) may be assumed to be smooth rather than SRS, and Lie derivatives exist everywhere. Given these small modifications, we have proved Theorem 4.2.
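Before turning to the example, the reversal of variables used above can be illustrated numerically. The following sketch (hypothetical scalar-measure data, q = 1; not the proof construction) integrates the stretched closed-loop system (4.14) with an explicit Euler step, accumulates θ_ξ, and approximates ρ_ξ([t_1, t_2]) by integrating b over the preimage θ_ξ^{-1}([t_1, t_2]).

```python
import numpy as np

# Sketch: integrate the stretched closed-loop system (4.14) with forward Euler,
# accumulate theta_xi(s), and approximate rho_xi([t1, t2]) by integrating b over
# the preimage theta_xi^{-1}([t1, t2]).  The feedbacks a_fb, b_fb are placeholders.
def simulate(xi, a_fb, b_fb, f, g, ds=1e-3, S=20.0):
    z = np.array(xi, dtype=float)
    zs, bs, theta = [z.copy()], [b_fb(z)], [0.0]
    for _ in range(int(S / ds)):
        b = b_fb(z)
        z = z + ds * (f(z) * (1.0 - abs(b)) * a_fb(z) + g(z) * b)
        zs.append(z.copy())
        bs.append(b_fb(z))
        theta.append(theta[-1] + ds * (1.0 - abs(b)))   # theta_xi(s)
    return np.array(zs), np.array(bs), np.array(theta)

def rho_of_interval(theta, bs, t1, t2, ds=1e-3):
    # theta is nondecreasing, so the preimage of [t1, t2] is an s-interval;
    # integrate b over the grid points landing in that preimage
    mask = (theta >= t1) & (theta <= t2)
    return ds * bs[mask].sum()

# toy usage with placeholder data
f = lambda z: -z
g = lambda z: np.array([z[1], -z[0]])
a_fb = lambda z: 1.0
b_fb = lambda z: 0.5 if z[0] > 0 else 0.0
zs, bs, theta = simulate([1.0, 0.0], a_fb, b_fb, f, g)
print(theta[-1], rho_of_interval(theta, bs, 0.0, theta[-1]))
```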
4.4 Stabilization Example

Consider the impulsive control system on R^2, given by

dx(t) = f(x(t)) u(t) dt + g(x(t)) µ(dt),   a.e. t ∈ [0,∞),   (4.17)

where

f(x_1, x_2) = ( −x_1^3, x_2(x_2 − x_1) ),   g(x_1, x_2) = ( x_2, −x_1 ),

and u ∈ [0, 1]. The fields are shown in Figure 4.1.

[Figure 4.1: "Flow" field f and "jump" field g (vector-field plots over [−1.5, 1.5]^2).]

The auxiliary system is then:

y′(s) = f(y(s)) α(s) + g(y(s)) β(s),   a.e. s ∈ [0,∞),   (4.18)

with α ∈ [0, 1] and β ∈ [−1, 1]. Here, we have "jump" dynamics that travel along circular arcs, and "flow" dynamics that guide trajectories to the origin in the half-plane y_2 < y_1. We will describe two different stabilizing feedbacks that involve jumps: a simple feedback that demonstrates the transformation back to the impulsive setting, followed by a feedback determined via Sontag's formula that uses the techniques of Section 4.3.

4.4.1 Feedback By Inspection

We first construct a simple feedback scheme by considering the properties of f and g. We denote the polar angle of y in the plane by φ(y) (measured from the y_1 axis), and partition R^2 into the following regions:

J = { y : π/4 < φ(y) < 7π/4 },
I = { y : −π/8 ≤ φ(y) ≤ π/8 } ⊂ { y : y_1 ≥ 2|y_2| },
M_+ = { y : π/8 < φ(y) ≤ π/4 },
M_− = { y : 7π/4 ≤ φ(y) < 15π/8 }.

We define the following state-feedback scheme:

a(y) = 0 if y ∈ J;   a(y) = (1/2)( cos(4φ(y)) + 1 ) if y ∈ M_+ ∪ M_− ∪ I;

b(y) = 1 if y ∈ J;   b(y) = (1/2)( cos(8φ(y)) + 1 ) if y ∈ M_+ ∪ M_−;   b(y) = 0 if y ∈ I.

[Figure 4.2: a(y) and b(y) versus polar angle φ(y).]

Flow in the closed-loop system y′ = f(y) a(y) + g(y) b(y) may be qualitatively described according to the initial state. If we start in J, only g is active, and the trajectory is a clockwise circular arc until it hits M_+. On M_+, trajectories flow to I. When the state is in I and close enough to the origin, the trajectory will remain in I and approach the origin as time increases. Otherwise, the trajectory will escape to either M_+ or M_−. In the former case, the trajectory must return to I eventually and remain there on its approach to the origin. On M_−, the trajectory will enter either I or J. We are assured stability since any trajectory in J must leave J in finite time and never return, and its radius is strictly decreasing when not in J.

We expect the same qualitative behaviour from the companion system

z′(s) = f(z(s)) (1 − |b(z(s))|) a(z(s)) + g(z(s)) b(z(s));   (4.19)

the trajectory of (4.19) starting at z_0 = (1.5, −0.9) is highlighted in the plotted trajectories in Figure 4.3 (where J, M_+, M_− and I are indicated), with the corresponding reparametrization θ(·) in Figure 4.4. Closing the loop in (4.17) using a(·) and a measure defined via integration of b (as in Section 4.3), we will see continuous flow due to f(·) alone on the set I, "mixed" flow on M_+ and M_− where both f(·) and g(·) (via the absolutely continuous component of our measure) are active, and we will see a jump from any state in J to M_+ (via the singular atomic component of the measure).

4.4.2 Feedback Via CLF and Formula

In this section, we construct a feedback for (4.18) based primarily on Sontag's formula. This requires a control-Lyapunov function for (4.17); we will use

V(y_1, y_2) = ( (y_1 − 1)^2 + y_2^2 + 1 )( y_1^2 + y_2^2 )

(see Figure 4.5), which is smooth everywhere and has a Lie derivative with respect to g of

L_g V(y_1, y_2) = −2 y_2 ( y_1^2 + y_2^2 ).
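This Lie derivative is easy to verify symbolically; the following short check (using sympy, purely as a sanity check of the formulas above) computes ∇V and the Lie derivatives of V along f and g.

```python
import sympy as sp

# Symbolic check of the Lie derivatives of the example CLF along f and g.
y1, y2 = sp.symbols('y1 y2', real=True)
V = ((y1 - 1)**2 + y2**2 + 1) * (y1**2 + y2**2)
f = sp.Matrix([-y1**3, y2 * (y2 - y1)])
g = sp.Matrix([y2, -y1])
gradV = sp.Matrix([sp.diff(V, y1), sp.diff(V, y2)])

LfV = sp.simplify(gradV.dot(f))
LgV = sp.simplify(gradV.dot(g))
print(LfV)                                        # L_f V, for reference
print(LgV)                                        # equals -2*y2*(y1**2 + y2**2)
print(sp.simplify(LgV + 2*y2*(y1**2 + y2**2)))    # 0, confirming the text
```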
As there is no drift term, Sontag's formula for unbounded controls reduces to

α_∞(y_1, y_2) = −L_f V(y_1, y_2),   β_∞(y_1, y_2) = −L_g V(y_1, y_2) = 2 y_2 ( y_1^2 + y_2^2 ).

These may be scaled to fit our control constraints while maintaining a stabilizing effect; we may imagine a smooth transition to saturation for α ∈ [0, 1] and β ∈ [−1, 1], with results as in Figures 4.7 and 4.6, contour plots of a(·) and b^1(·), respectively. The pair a and b^1 satisfy the CLF decrease condition (4.10) with the continuous function

W(y_1, y_2) = (1/10)( y_1^4 + y_2^4 )^2  if y_1^4 + y_2^4 ≤ 1;   W(y_1, y_2) = (1/10)( y_1^4 + y_2^4 )  if y_1^4 + y_2^4 > 1.

Following the notation of Section 4.3, we now modify b^1(·) to obtain b^0(·) (here, we use φ(w) = (1/2)(1 − cos(πw)) in the definition of M(·)). As no jumping occurs near the origin in this case, no further modification is necessary, and we may take b(·) = b^0(·), shown in Figure 4.8. Unlike the feedback scheme in Section 4.4.1, the a(·) and b(·) here are smooth even at the origin. We are now assured stability of the closed-loop system

z′(s) = f(z(s)) (1 − |b(z(s))|) a(z(s)) + g(z(s)) b(z(s));

(we plot (1 − |b(z)|) a in Figure 4.9 for reference). Figure 4.10 shows a collection of trajectories, while Figure 4.11 shows the parametrization θ(·) for the bold trajectory starting at z_0 = (−1.5, 0.1). Trajectories in the impulsive system look like those in Figure 4.10, but with jumps to the edges of the |b^1(·)| = 1 contours in the y_1 > 0 half-plane; the time is given by θ, and the jumps correspond to horizontal parts of the θ graph.

4.5 Final Considerations

Having built a stabilizing feedback for the impulsive system from a feedback for the auxiliary system, we may immediately state a result for open-loop systems:

Corollary 4.1. The open-loop impulsive system (4.1) is GAC if and only if the open-loop auxiliary system (4.4) is GAC.

An equivalence result for the existence of a state feedback is less clear, as the reparametrization we have defined is only valid in one direction. This is a subject for future consideration. Other future work may involve extension to the case where drift is permitted.

[Figure 4.3: Trajectory starting at (1.5, −0.9).]
[Figure 4.4: Reparametrization θ(s).]
[Figure 4.5: A CLF for the auxiliary system. Contour plot of V; contours at 0.25 and at the squares from 1 to 25.]
[Figure 4.6: Feedback b^0 in auxiliary system. Contour plot of −L_gV = 2y_2(y_1^2 + y_2^2); contours from −1 to 1 in steps of 0.25.]
[Figure 4.7: Feedback a. Contours at 0, 0.5 and 1.]
[Figure 4.8: Feedback b^0 in modified system. Contour plot of b (after modification); contours from −1 to 1 in steps of 0.25.]
[Figure 4.9: Feedback (1 − |b(z)|) a in modified system. Contour plot of a(1 − |b|); contours from 0 to 1 in steps of 0.25.]
[Figure 4.10: Trajectories (s up to 1000); dashes are used in jump regions (the |b| = 1 border is indicated).]
[Figure 4.11: First five s-seconds of the parametrization θ(s) for the bold trajectory in Figure 4.10.]

Chapter 5
Necessary Conditions in Optimal Control

In this chapter, we turn our attention to problems in optimal control, where a cost function is associated to a control system, as in the following:

(SAMPLE)   Minimize ℓ(x(a), x(b)) over x with x : [a, b] → R^n satisfying
           ẋ(t) ∈ F(t, x)  a.e. t ∈ [a, b],
           (x(a), x(b)) ∈ C.

In problem (SAMPLE), F is a given multifunction and C is a given set of available starting and ending points for the trajectories under consideration. This is called a Mayer problem, characterized by its cost function that depends only on the state values at either end of the time interval. It is a fundamental problem type in the sense that more elaborate cost functions, including integral costs, may be expressed as a problem of this form by the clever addition of a cost-tracking state whose endpoints are the input variables in a new objective function. As stated, (SAMPLE) implies a search for a global minimizing arc x; often, as below, this is weakened to a discussion of the minimum among a smaller bundle of nearby trajectories. Local minimization is stated in terms of a reference trajectory, x*, and a distance function for trajectories; the minimization is performed over arcs within a certain distance of x*. The development of necessary conditions for optimality is an attempt to characterize minimizers in terms of a costate arc with restrictive properties, including a related differential inclusion and endpoint constraints. Excellent resources for the modern theory of such problems are [36] and [15].

Significant milestones in the theory of optimal control of impulsive systems date back to the 1950s [34], with early significant use of measures contributed by Rishel [51] and Warga [66], though the measure differential inclusion (MDI) setting is more recent; see [24], [54]. As noted earlier, our solution concept most closely resembles that of Wolenski and Žabić [67] and matches that of Code and Silva [23]. Optimal control results for MDIs include that of Silva and Vinter [55], where the measure is scalar while F is Lipschitz in x and measurable in t, and the impulsive map G is continuous in (t, x) and Lipschitz in x. In [7], the authors consider a variety of associated issues in the free-time, scalar measure case, including state constraints and nondegeneracy of the maximum principle. Results for vector-valued measures were limited for a long time to the "commutative" case, where the component fields in the matrix-valued map G were assumed to commute (see [10], [11], [39]; the latter refers to this as the "robust" case).
Some results have been established in the “non-commutative” vector- valued measure case under the case of smooth maps in the differential equa- tion setting [12], [39]. More recently, Karamzin tackled these using differen- tial equations and a somewhat different impulsive scheme; in [30], f(t, x, u) (the control-parametrized analogue of multifunction F ) must be measur- able in t, C1 in (x, u), while g(t, x) (analogue of G, but single-valued) is C1 and has no u-dependence (it is a field whose product is taken with the control measure); however, explicit state constraints and a discussion of non- degenerate conditions are presented, and a free end-time setup (where the time interval endpoints are choice variables) is also considered for impulsive problems. Karamzin’s work [30] is closer in setup and the comparisons are more direct to Miller and Rubinovich [39] (who include a significant sur- vey of impulsive results in general and the Russian literature in particular) than others, though the latter uses the time change method to work with theorems for standard control problems; we adopt a similar approach be- low. The concluding discussion in [30] sees this as limiting in that convexity assumptions are needed, but Clarke [17] addresses this limitation, and we bring these results together in the present work. 55 Chapter 5. Necessary Conditions in Optimal Control In [43], Pereira and Silva provided a framework for vector-valued mea- sures using the general MDI: dx(t) ∈ F (t, x)dt +G(t, x)dµ(t), (5.1) where the differential inclusion maps F and G were only measurable in the time and space variables (jointly in (t, x) in the case of G). The setup in the present work is most closely related to [43], but here we remove convexity and incorporate sub-Lipschitz assumptions on F and G. In this chapter, we study a Mayer-style problem where the differential inclusion in question is instead a measure differential inclusion: (P)  Minimize ` (x(0), x(T )) over impulsive solutions (x,X) such that x : [0, T ]→ Rn satisfies dx(t) ∈ F (t, x)dt +G(t, x)dµ(t) with graph completion X associated to µ, where the measure µ takes on vector values in a closed cone, K, while G takes sets of matrices as values. We continue the use of our solution concept from Chapter 3, and take as our starting point the relatively recent results of Vinter [64] and Clarke [17], where convexity of the velocity sets is no longer needed in the hypotheses. The primary regularity assumptions for the differential inclusions maps in [17] are twofold: 1. A pseudo-Lipschitz condition, which calls for a Lipschitz-like behaviour in a restricted velocity set; this is the Aubin property (as described in [52] and elsewhere), but with the neighbourhood about the optimal trajectory identified with an explicit radius that later appears in the necessary conditions (which makes them “stratified”). 2. A growth restriction on the map; the most general of these provided in [17] is called the tempered growth condition, which is shown to imply other standard growth restrictions. The attention paid to the radius in the pseudo-Lipschitz condition leads to a 56 5.1. Current Necessary Conditions for Nonimpulsive Problems stratified set of necessary conditions based on the radius functions involved; this is the core contribution of [17]. We first review this result and the requisite definitions. 
In what follows, we will use Bn to denote the unit ball in the usual 2- norm in n-dimensions, Bn1 for the unit ball using the 1-norm in n-dimensions, and B[x; r] to denote the open ball (2-norm) centred at the point x and with radius r; here the dimension is inherited from x. We write B[x; r] = B[x; r] for the closure. Our trajectories will be right-continuous in time variable t, but we will denote the initial point (prior to any impulse at time t = 0) by x(0−). We will use δA as the indicator function for set A: it takes the value zero on the set A, and the value +∞ otherwise. 5.1 Current Necessary Conditions for Nonimpulsive Problems Clarke [17] uses a positive-valued “radius function” R : [a, b] → (0,+∞] to make three definitions. The first of these is: Definition (PLC). A multifunction F : [a, b]×Rn→Rn is said to satisfy (PLC), a pseudo-Lipschitz condition of radius R near x∗, if there exist ε > 0 and k ∈ L1[a, b] such that, for almost all t ∈ [a, b] and for every x1 and x2 in B[x∗(t); ε], one has F (t, x1) ∩ B[ẋ∗(t);R(t)] ⊆ F (t, x2) + k(t) |x2 − x1|B. Put another way, each of the set-valued maps F (t, ·) has the Aubin property (as in [52]) with the ball B[ẋ∗(t);R(t)] relative to x∗(t) for almost every t ∈ [a, b]. By comparison, a locally Lipschitz function is pseudo- Lipschitz with radius function R ≡ +∞. In the context of differential inclusions, the localizing effect of the radius in pseudo-Lipschitz functions restricts our focus to velocities in the vicinity of the reference (optimal) trajectory, ignoring any irregular behaviour of the maps elsewhere. This permits us to work with maps that are locally 57 5.1. Current Necessary Conditions for Nonimpulsive Problems well-behaved in terms of image sets, but not locally Lipschitz (where the term “locally” refers to local input values, not the image sets). Consider the simple example where Fray : R2→R2 is defined by Fray(x) = {cx | c ≥ 0} , in other words, the set value at x is the ray in the plane containing x. This map is pseudo-Lipschitz about any point, with any finite radius. Consider the point x∗ = (2, 0); the ray in question is the positive horizontal axis. To have (PLC) hold at x∗, we require an ε, radius R and Lipschitz modulus k such that the above inclusion holds. For ε = R = 1 (this map has a special relationship between input values and output sets), and given a ray Fray(x′) that intersects the unit disc centred at x∗, we need to uniformly cover all such rays by fattening Fray(x′). This is achieved by any k ≥ 2. In fact, we may determine a value of k for any chosen radius, though as R grows, so must k. We may even define a sequence of radii, growing to infinity, where the pseudo-Lipschitz property holds. The map Fray is not globally nor locally Lipschitz at x∗, however; no ray near to Fray(x′) can be covered by a finite fattening of Fray(x′). In the theorems below, we are able to recover “global” results (i.e., equivalent to the results for Lipschitz multifunctions) for maps where an increasing sequence of radii exists. The other two definitions are related to growth of the map relative to growth of the input value. Definition (TG). A multifunction F : [a, b] × Rn→Rn satisfies (TG), the tempered growth condition of radius R near x∗, if there exist ε > 0, λ ∈ (0, 1) and r0 ∈ L1[a, b] obeying both 0 < r0(t) ≤ λR(t) and F (t, x) ∩ B[ẋ∗(t); r0(t)] 6= ∅, ∀x ∈ B[x∗(t); ε], a.e. t ∈ [a, b]. Definition (ESSINF). 
The pair of a radius function R with some k ∈ L1[a, b] will be said to satisfy (ESSINF), the essential infimum condition, 58 5.1. Current Necessary Conditions for Nonimpulsive Problems if essinf t ∈ [a, b] { R(t) k(t) } > 0. It is demonstrated in [17] that assuming (PLC) with (ESSINF) using the same radius function R and Lipschitz modulus function k implies (TG). We will work with this pair of assumptions in our main theorem, with our additional condition that the radius functions involved be bounded away from zero; our goal is to extend the core result of [17] quoted below. Given a radius function R, an F -trajectory x∗ with endpoints in C is a local W 1,1 minimum of radius R for the cost function ` if there exists ε > 0 such that every F -trajectory x with endpoints in C satisfying the localization conditions |ẋ(t)− ẋ∗(t)| ≤ R(t) a.e. t ∈ [a, b],∫ b a |ẋ(t)− ẋ∗(t)| dt ≤ ε, and ||x− x∗||∞ ≤ ε, must also satisfy ` (x(a), x(b)) ≥ ` (x∗(a), x∗(b)). Theorem 5.3 (Theorem 3.1.1 of [17]). Suppose, for the radius function R, that F satisfies (PLC) and (TG) near x∗, a local W 1,1 minimum of radius R. Then there exist an arc p and a number λ0 ∈ {0, 1} satisfying the following four conditions: • Nontriviality (N): (λ0, p(t)) 6= 0 ∀t ∈ [a, b] • Transversality (T ): (p(a),−p(b)) ∈ ∂Lλ0` (x∗(a), x∗(b)) + NLC (x∗(a), x∗(b)) • Euler inclusion (E): ṗ(t) ∈ co { ω : (ω, p(t)) ∈ NLgphF (t,·) (x∗(t), ẋ∗(t)) } a.e. t ∈ [a, b] 59 5.2. Statement of Main Results • Radius R Weierstrass condition (WR), for almost every t ∈ [a, b]: 〈p(t), v〉 ≤ 〈p(t), ẋ∗(t)〉 ∀v ∈ F (t, x∗(t)) ∩ B[ẋ∗(t);R(t)] If x∗ obeys (PLC), (TG) and local minimality for a sequence of radius func- tions Ri and lim inf i→∞ Ri(t) = ∞ a.e. t ∈ [a, b], then conclusions (N), (T ), and (E) hold for some arc p which satisfies the global Weierstrass condition (W ): 〈p(t), v〉 ≤ 〈p(t), ẋ∗(t)〉 ∀v ∈ F (t, x∗(t)) a.e. t ∈ [a, b] 5.2 Statement of Main Results To build on Theorem 5.3 in an impulsive context, we will use a stronger assumption than measurability in t (the technical details will appear through the course of the proof) and assume that G satisfies a form of (PLC) jointly in t and x: Definition (JPLC): A multifunction F : [a, b] × Rn→Rn is said to satisfy a joint pseudo-Lipschitz condition of radius R near gph(x∗) if there exist δ > 0 and kF ∈ L1[a, b] such that, for almost all t ∈ [a, b], for every (t1, x1) and (t2, x2) in B[(t, x∗) ; δ], one has F (t1, x1) ∩ B[ẋ∗(t);R(t)] ⊆ F (t2, x2) + kF (t) |(t1, x1)− (t2, x2)|Bn. Given a multifunction G : [a, b]×Rn→Rn×m, which takes as values sets of matrices with m columns of n-dimensional fields, we say that G satisfies (JPLC) with radius R if each of those m component (sets of) fields satisfy the above. That is, if we define Gv = {gv : g ∈ G} ⊆ Rn for each v ∈ Rm, then G satisfies (JPLC) with radius R if all of the Rn-valued multifunctions Gê1, · · · , Gêm satisfy (JPLC) with radius R (here, êi is the i-th standard basis function in Rm). Remark: This joint regularity in (t, x) is reminiscent of the hypothesis 60 5.2. Statement of Main Results (H3) in Theorem 8.2.1 of [64] (a result we will extend in Section 5.4), where it is used to accommodate free interval endpoints. We need it for a similar reason here; due to the “instantaneous” actions in our impulsive system, we require regularity in the time t beyond measurability, the current mini- mal hypothesis ([64] has measurability in time in its Theorem 8.4.1, which requires much more work to prove). 
This is still less demanding, however, than a requirement of Lipschitz behaviour in t for each x, which is a recent standard hypothesis (as noted in the introduction). Our (JPLC) is further distinguished from that in [64] insofar as a single radius is identified (in the style of [17]), which will permit our stratified result. We will assume a pseudo-Lipschitz behaviour and an appropriate growth restriction for F , G, and F+GK, jointly in (t, x). As we show in Section 5.3, this level of regularity is preserved under the time reparametrization to a free end-time optimal control problem. We may then appeal to the strati- fied necessary conditions result we establish (of interest in its own right) in Section 5.4, which extends Theorem 8.5 of [64]. Finally, we determine the implications of this result for the original impulsive system, (5.1). To clarify our notion of local minimum, we define the distance between solutions (x1, X1) and (x2, X2) by m ((x1, X1), (x2, X2)) = ∣∣∣∣yext1 − yext2 ∣∣∣∣W 1,1 + |S1 − S2| = |x1(0−)− x2(0−)|+ ∫ T 0 ∣∣ẏext1 (s)− ẏext2 (s)∣∣ ds + ∣∣max θ−11 (T )−max θ−12 (T )∣∣ , (5.2) where T = max{max θ−11 (T ),max θ−12 (T )}, yi is the time-stretched tra- jectory for xi using reparametrization θi, Si is the right endpoint of the stretched time interval (i.e. θi(Si) = T ), and yexti refers to the extension (if necessary) of yi to the interval [0, Sj ] by setting yexti (s) = y ext i (Si) for s ∈ [Si, Sj ], if Si < Sj . We thus consider (x∗, X∗) to be a local minimum in the impulsive problem if there exists some δ > 0 such that (x∗, X∗) gives the lowest cost function value over all (x,X) such that m ((x,X), (x∗, X∗)) ≤ δ 61 5.2. Statement of Main Results We are now able to state our main results. Theorem 5.4 (Stratified Necessary Conditions). Assume F is nonempty, closed-valued, locally bounded and locally Lipschitz with Lipschitz modulus function kF , with kF bounded above by the constant k0 > 0. Assume G obeys (JPLC) and (ESSINF) with radius RG bounded below by the constant R0 > 0 and with Lipschitz modulus kG. Assume that either 1. the radius RG is more than double the local bound for F , or 2. the combinations F +Gr, for each r ∈ K, uniformly obey (JPLC) and (ESSINF) with radius RF+GK and Lipschitz modulus of kF+GK , with kF+GK also bounded above by the constant k0 > 0. Suppose that (x∗, X∗) gives a local minimum in (P), where X∗ is a graph completion for x∗ with associated measure µ∗. Then there exist a number λ0 ∈ {0, 1} and impulsive arcs p : [0, T ]× Rn → Rn and q : [0, T ]× Rn → R with graph completions P and Q, respectively, both associated to µ∗, which satisfy: • Transversality condition (T ) ( p(0−),−p(T )) ∈ λ0∂L` (x∗(0−), x∗(T )) + NLC (x∗(0−), x∗(T )) • Euler inclusion (E), for almost every t ∈ [0, T ]: (−q̇(t), ṗ(t)) ∈ co { (ω, ν) : (ω, ν, p(t)) ∈ NLgph(F+GK) (t, x∗(t), ẋ∗(t)) } • Radius R0 Weierstrass condition (WR0), where R0 is a radius function defined in (5.32): 〈p(t), v〉 ≤ 〈p(t), ẋ∗(t)〉 , ∀v ∈ (F +Gκ(t)) ∩ B [ẋ∗(t);R0(t)] , where the set-valued function κ(t) = {γ s.t. |γ| = |µ̇∗ac(t)|}; µ̇∗ac(t) is the time derivative of the absolutely continuous part of µ∗ at t. 62 5.2. Statement of Main Results • The graph completions X∗, P and Q consist of subsystems for each t where µ̄∗({t}) 6= 0. 
These subsystems involve arcs yt, pt and qt, satisfying the following conditions for almost every s ∈ [η(t−), η(t)]: – Euler inclusion at impulse instant t, (Et): (−q̇t(s), ṗt(s)) ∈ co { (ω, ν) : (ω, ν, pt(s)) ∈ NLgph(G(·,·)b∗(s)) (t, yt(s), ẏt(s)) } , – Weierstrass condition at impulse instant t of radius R defined in (5.7), (W tR): 〈pt(s), ν〉 ≤ 〈pt(s), yt(s)〉 = 0, ∀ν ∈ [G (t, ẏt(s)) (K ∩ Bm1 )] ∩ B [ẏt(s);R(s)] where η(t) = t+ m∑ j=1 ∫ t 0 µ̄∗j(dτ) represents a time reparametrization (µ∗j being the j-th component of measure µ∗) and b∗ satisfies∫ η(t) η(t−) b∗(s)ds = µ∗({t}). • Nontriviality (N): (λ0, p(t), pt, q(t), qt) 6= 0 ∀t ∈ [0, T ], where we interpret the subsystem trajectories pt and qt to be zero if they are zero over [η(t−), η(t)]. We note for clarification that the derivative of pt in (ET ) and those like it in the later discussion are with respect to the input argument, s, and not t, which acts in that situation as an index to associate the subprocess to a particular jump instant. 63 5.2. Statement of Main Results Theorem 5.5 (Global Necessary Conditions). If the hypotheses of joint pseudo-Lipschitz conditions and essential infimum growth restrictions of Theorem 5.4 hold for a sequence of radius functions Ri and lim inf i→∞ Ri(t) = ∞ a.e. t, then the conclusions (N), (T ), and (E) hold for some arcs (p,P ) and (q,Q) which satisfy the global Weierstrass condition (W ): 〈p(t), v〉 ≤ 〈p(t), ẋ∗(t)〉 ∀v ∈ (F +GK) (t, x∗(t)) a.e. t ∈ [0, T ]. In the degenerate case where µ̄∗([0, T ]) = 0, we have q(t) ≤ 〈p(t), ẋ∗(t)〉 = max v∈F (t,x∗(t))+G(t,x∗(t))K 〈p(t), v〉 a.e. t ∈ [0, T ]. Otherwise, for µ̄∗([0, T ]) > 0, the arcs (p,P ) and (q,Q) satisfy the Hamilto- nian condition (H): q(t) = max v∈F (t,x∗(t))+G(t,x∗(t))K 〈p(t), v〉 a.e. t ∈ [0, T ], and the analogous subsystem conditions for pt and qt. In addition, 〈p(t), gb〉 ≤ 0 ∀gb ∈ G(t, x∗(t))K, t ∈ [0, T ] \ supp(µ̄∗) 〈p(t), ẋ∗(t)〉 = 〈p(t), f∗(t) + g∗(t)κ∗(t)〉 = q(t) 〈p(t), g∗(t)µ̇∗ac(t)〉 ≤ 0 } , t ∈ supp(µ̄∗ac) 〈pt(s), ẏt(s)〉 = 0 〈pt(s), f〉 ≤ qt(s) ∀f ∈ F (t, yt(s)) } , s ∈ [η(t−), η(t)] The proofs are provided in Section 5.5, and are followed by some con- cluding remarks and a simplified conclusion in the autonomous case in Sec- tion 5.6. 64 5.3. PLC, TG and ESSINF in the Stretched-Time System 5.3 PLC, TG and ESSINF in the Stretched-Time System We next discuss the regularity hypotheses we use in determining necessary conditions. They are expressed in terms of some fixed reference trajectory x∗ for (3.5) defined on [a, b]. We are interested in the regularity of the map F defined in (3.5) based on hypotheses involving F and G. Rockafellar and Wets [52] provide the following related result: Theorem 5.6 (Coderivative Chain Rule, 10.37 of [52]). Suppose S =M◦J for outer semicontinuous mappings J : Rn→Rk and M : Rk→Rm. Let x̄ ∈ domS, ū ∈ S(x̄), and assume (a) the mapping (x, u) 7→ J(x) ∩M−1(u) is locally bounded at (x̄, ū), (b) D∗M (z̄|ū) (0)∩D∗J (x̄|z̄)−1 (0) = {0} for every z̄ ∈ J(x̄)∩M−1(ū). Then the graph of S is locally closed at (x̄, ū), and D∗S (x̄|ū) ⊆ ⋃ z̄∈J(x̄)∩M−1(ū) D∗J (x̄|z̄) (0) ◦D∗M (z̄|ū) (0). (An outer semicontinuous multifunction S satisfies lim supx→x̄ S(x) ⊆ S(x̄), but we will not dwell on this approach long enough to worry more about it.) The subsequent Corollary 10.38 of [52] establishes the Aubin property. We may follow the proof of this pair of results from [52], which describes Lipschitzian properties under composition of multifunctions, in proving the following: Lemma 5.3. 
Suppose that F and G each satisfy (JPLC) and that G is locally bounded. Then the set-valued map F(θ, y) satisfies a pseudo-Lipschitz condition (PLC) for some ε > 0 and for some radius R for almost every s. Proof sketch. Our local minimum, x∗, under transformation has the space trajectory y∗ and time reparametrization θ∗ built from some selection β∗ from K ∩ Bm1 . 65 5.3. PLC, TG and ESSINF in the Stretched-Time System We compose J : Rn→Rn(1+m) and M : Rn(1+m)→Rn, where J(x) =  F (x) Gê1(x) ... Gêm(x)  and M   f Gv1 ... Gvm   = { f (1− |β|1) + m∑ i=1 Gviβi |β ∈ K ∩ Bm1 } . We thus describe the space component of F by the composition S =M ◦ J . For a given t, the hypotheses in [52] concern the set J(x∗(t))∩M−1(ẋ∗(t)), which is locally bounded based on our hypotheses. We also note that M is Lipschitz with constant 1 on this set, and we thus have sufficient conditions to apply the theorem. While the Chain Rule theorem above does establish the Aubin property with an estimate on the Lipschitz function k, it works with general neigh- bourhoods of the optimal trajectory and does not provide direct information about a possible radius function. We next present a more explicit calculation in this regard, as the radius function features prominently in our stratified necessary conditions. Our additional hypotheses arise, roughly speaking, from the need for some form of boundedness for the set J(x∗(t))∩M−1(ẋ∗(t)) described above. We will employ the following technical result in our argument. Lemma 5.4. Given r > 0, t > 0, and a ∈ Rn \ {0}, the following are 66 5.3. PLC, TG and ESSINF in the Stretched-Time System equivalent: (i) B[a; r] ⊆ tB[a; r′] = B[ta; tr′] (ii) r′ ≥ 1 t (r + |a||t− 1|) Proof. (ii =⇒ i) Assume (ii). Pick any x ∈ B[a; r], and estimate |x− ta| = |(x− a) + (a− ta)| ≤ |x− a|+ |1− t||a| ≤ r + |a||t− 1| ≤ tr′ by (ii). (i =⇒ ii) Assume t 6= 1. Let σ = sgn(1− t). Define x = a+ σr a|a| . Clearly x ∈ B[a; r], so by (i), tr′ ≥ |x− ta| = ∣∣∣∣a+ σr|a|a− ta ∣∣∣∣ = |a| ∣∣∣∣1− t+ σr|a| ∣∣∣∣ = |(1− t)|a|+ σr| . When t < 1, σ = 1, so this gives tr′ ≥ (1− t)|a|+ r = r + |a||t− 1|. When t > 1, σ = −1, so we find tr′ ≥ |(1− t)|a| − r| = |(t− 1)|a|+ r| = (t− 1)|a|+ r = r + |a||t− 1|. 67 5.3. PLC, TG and ESSINF in the Stretched-Time System In either case (and trivially when t = 1), we recover (ii). We now proceed with our regularity extension to F . For the remainder of this section, we assume a time interval of [0, T ] for the impulsive system. Lemma 5.5. Suppose that F is locally Lipschitz in (t, x) near gph(x∗) with function kF and is locally bounded by a function Fmax in the following sense: F (t, y) ⊆ Fmax(θ∗(s))B, ∀(t, y) ∈ B[(θ∗(s), y∗(s)); ε], a.e. s, (5.3) and that G satisfies (JPLC) with radius RG near gph(x∗) with function kG, both with the same ε > 0. Suppose further that there exists α > 1 such that RG(θ∗(s)) ≥ 2αFmax(θ∗(s)) a.e. s. (5.4) Then F satisfies (PLC) with radius R(s) = RG(θ∗(s)) ( α−1 α ) 1 + 2Fmax(θ∗(s)) , (5.5) and function κ(s) = kF (θ∗(s)) + kG(θ∗(s)), for the same ε. Proof. A key object in our analysis is the time reparametrization function θ∗(s) of the reference trajectory x∗(t). That is, x∗(t) and its subprocess graph completion correspond to a time-stretched F-trajectory (θ∗(s), y∗(s)), where θ∗(s) captures the stretch of the original time interval according to µ̄∗. 
We seek to establish, for some radius function R and some integrable κ, that F(θ1, y1) ∩ B[(θ̇∗(s), ẏ∗(s));R(s)] ⊆ F(θ2, y2) + κ(s) |(θ2, y2)− (θ1, y1)|B whenever (θ1, y1) and (θ2, y2) are sufficiently close to (θ∗(s), y∗(s)). 68 5.3. PLC, TG and ESSINF in the Stretched-Time System As θ̇∗(s) ∈ [0, 1], we may infer from (5.4) that θ̇∗(s) ≤ RG(θ∗(s))2αFmax(θ∗(s)) . (5.6) holds for almost every s. Restricting ourselves to such s, we will work with the stretched optimal trajectory y∗(s) directly as we cannot work with x∗(θ∗(s)) when θ̇∗(s) = 0. We may represent y∗(s) = f∗θ̇∗(s) + g∗b∗ and, assuming R(s) > 0, let (z1, f1z1 + g1b1) ∈ F(θ1, y1) ∩ B [ (θ̇∗(s), f∗θ̇∗(s) + g∗b∗);R(s) ] . We may calculate: |g1b1 − g∗b∗| ≤ ∣∣∣(f1z1 + g1b1)− (f∗θ̇∗(s) + g∗b∗)∣∣∣ + ∣∣∣f∗θ̇∗(s)− f1z1∣∣∣ ≤ R(s) + ∣∣∣θ̇∗(s)∣∣∣ |f∗| + |z1| |f1| ≤ R(s) + 2Fmax(θ∗(s)) ( θ̇∗(s) +R(s) ) . For the left hand side to be majorized by RG(θ∗(s)), we require that R(s) ≤ RG(θ∗(s))− 2θ̇∗(s)Fmax(θ∗(s)) 1 + 2Fmax(θ∗(s)) . Due to (5.6), we are assured that 2θ̇∗(s)Fmax(θ∗(s)) ≤ 1 α RG(θ∗(s)) and may conclude that RG(θ∗(s))− 2θ̇∗(s)Fmax(θ∗(s)) ≥ α− 1 α RG(θ∗(s)). Our choice of R(s) in (5.5) is then sufficient, for any α > 1, to apply the (JPLC) condition for G, which in turn supplies g2 ∈ G(θ2, y2) and b2 ∈ 69 5.3. PLC, TG and ESSINF in the Stretched-Time System Bm1 ∩K with |b2|1 = |b1|1 such that g1b1 ∈ g2b2 + kG(θ∗(s)) |(θ1, y1)− (θ2, y2)|B. To finish, we appeal to the locally Lipschitz property of F : f1 ∈ F (θ2, y2) + kF (θ∗(s)) |(θ1, y1)− (θ2, y2)|B, so that ((1− |b1|), f1(1− |b1|) + g1b1) is in the set[ (1− |b1|1) F (θ2, y2)(1− |b1|1) +G(θ2, y2)b2 ] + [ 0 κ(s) |(θ1, y1)− (θ2, y2)|B ] . As the right hand side is included in F(θ2, y2) + κ(s) |(θ1, y1)− (θ2, y2)|B, we have demonstrated (PLC) for F . The situation can be more complicated if the pseudo-Lipschitz radius for G is ever less than double the local bound function for F ; in the next result, we introduce a notion of multiscale dynamics to deal with this. Lemma 5.6. Suppose that F is locally Lipschitz in (t, x) near gph(x∗) with function kF , and is locally bounded by a function Fmax in the sense of Lemma 5.5; that F + Gr, for each r ∈ K, satisfies (JPLC) with radius R near gph(x∗) with function k; and that G satisfies (JPLC) with radius RG near gph(x∗) with function kG, all with the same ε > 0. Then F satisfies (PLC) with radius R(s) =  RG(θ∗(s))(α−1α ) 1+2Fmax(θ∗(s)) , if θ̇∗(s) ≤ RG(θ∗(s)) 2αFmax(θ∗(s)) , R(θ∗(s))θ̇∗(s) R(θ∗(s))+1+ √ 1+|ẋ∗(θ∗(s))|2 , otherwise, (5.7) and function κ(s) = { kF (θ∗(s)) + kG(θ∗(s)), if θ̇∗(s) ≤ RG(θ∗(s))2αFmax(θ∗(s)) , k(θ∗(s)), otherwise, for the same ε and for any α > 1. 70 5.3. PLC, TG and ESSINF in the Stretched-Time System Proof. Our approach will be multiscale, in the sense that we distinguish between “small” and “large” selections from K; these represent system dy- namics that are “slow” or “fast”, respectively. We first examine the “slow dynamics”, where θ̇∗(s) > RG(θ∗(s)) 2αFmax(θ∗(s)) . (5.8) (We recall that RG(t) > 0 for any value of t ∈ [0, T ].) If any such s exist, we fix one and let (z1, v1) ∈ F(θ1, y1) ∩ B[(θ̇∗(s), ẏ∗(s));R(s)]. Here, R(s) > 0 is defined in (5.11) below; for now it suffices to note that R(s) is positive. By Lemma 5.4, we are assured that the inclusion B [ (θ̇∗(s), ẏ∗(s));R(s) ] ⊆ z1B[(1, ẋ∗(θ∗(s)));R(θ∗(s))] (5.9) holds precisely when R(θ∗(s))θ̇∗(s) ≥ θ̇∗(s) z1 ( R(s) + ∣∣∣(θ̇∗(s), ẋ∗(θ∗(s))θ̇∗(s))∣∣∣ ∣∣∣∣ z1θ̇∗(s) − 1 ∣∣∣∣) . 
We may rearrange this inequality to obtain z1R(θ∗(s)) − |(1, ẋ∗(θ∗(s)))| ∣∣∣z1 − θ̇∗(s)∣∣∣ ≥ R(s), and using the fact that ∣∣∣z1 − θ̇∗(s)∣∣∣ ≤ R(s), the inequality is true provided that z1R(θ∗(s)) − |(1, ẋ∗(θ∗(s)))| R(s) ≥ R(s), which we rearrange to provide the following condition which guarantees (5.9): z1 ≥ R(s) (1 + |(1, ẋ∗(θ∗(s)))|) R(θ∗(s)) . (5.10) As already noted, our choice of z1 ensures that ∣∣∣z1 − θ̇∗(s)∣∣∣ ≤ R(s), hence 71 5.3. PLC, TG and ESSINF in the Stretched-Time System z1 ≥ θ̇∗(s)−R(s), and so we impose the condition R(s) (1 + |(1, ẋ∗(θ∗(s)))|) R(θ∗(s)) = θ̇∗(s)−R(s). This statement ensures z1 > 0, and it is equivalent to a choice of R(s) = R(θ∗(s))θ̇∗(s) R(θ∗(s)) + 1 + √ 1 + |ẋ∗(θ∗(s))|2 . (5.11) As (z1, v1) ∈ F(θ1, y1), we have a representation of the form v1 = f1z1 + g1b1, f1 ∈ F (θ1, y1), g1 ∈ G(θ1, y1), b1 ∈ Bm1 ∩K, z1 = 1− |b1|1, and we may conclude that (z1, v1) ∈ F(θ1, y1) ∩ B[(1, ẋ∗(θ∗(s)));R(θ∗(s))]z1 f1z1 + g1b1 ∈ B[ẋ∗(θ∗(s));R(θ∗(s))]z1 f1 + g1 b1 z1 ∈ B[ẋ∗(θ∗(s));R(θ∗(s))]. Hence, by our hypothesis of (JPLC) for F +G b1z1 , f1 + g1 b1 z1 ∈ f2 + g2 b1 z1 + k(θ∗(s)) |(θ1, y1)− (θ2, y2)|B, for some f2 ∈ F (θ2, y2), g2 ∈ G(θ2, y2), and so (z1, f1z1 + g1b1) ∈ F(θ2, y2) + k(θ∗(s)) |(θ1, y1)− (θ2, y2)|B, as required. We may think of the case where (5.8) is false as representing the “fast dynamics”, as it represents s-times where the GK dynamics dominate the system, either completely as during a jump (θ̇∗(s) = 0) or almost completely for small θ̇∗(s) > 0. Of course, this is precisely where (5.6) does hold, and 72 5.4. Free End-Time Problems we appeal to the work of Lemma 5.5 to handle this case. Based on these result, we may observe the following facts: Lemma 5.7. If, in addition to the hypotheses of Lemma 5.5, G each satisfies (ESSINF) and both RG ≥ R0 for some constant R0 > 0, and kF ≤ k0 for some constant k0 > 0, then F(θ, y) satisfies (ESSINF) for its radius R paired with function κ as defined in Lemma 5.5, and there exists a constant R̄ > 0 such that R ≥ R̄. Lemma 5.8. If, in addition to the hypotheses of Lemma 5.6, F +Gr and G each satisfy (ESSINF) with their respective radii and Lipschitz functions, and those functions satisfy the further conditions that RG ≥ R0 for some constant R0 > 0 and both k ≤ k0 and kF ≤ k0 for some constant k0 > 0, then F(θ, y) satisfies (ESSINF) for its radius R paired with function κ as defined in Lemma 5.6, and there exists a constant R̄ > 0 such that R ≥ R̄. 5.4 Free End-Time Problems At an intermediate stage of our main proof, we will consider the following “free end-time” problem, where we include the time interval endpoints, a and b, as choice variables: (FETP)  Minimize ` (a, y(a), b, y(b)) over intervals [a, b] and arcs y ∈W 1,1 ([a, b];Rn) such that ẏ ∈ F (t, y) a.e. t ∈ [a, b] (a, y(a), b, y(b)) ∈ C We assume that the cost function ` is locally Lipschitz and the endpoint set C is closed. We will be interested in necessary conditions for a local W 1,1 minimum of (FETP) under suitable hypotheses on the differential inclusion map, F . The first free end-time result in [64] (Theorem 8.2.1) employs a condition similar to our pseudo-Lipschitz condition on F that is joint in time and space but demands an infinite and growing set of radii. As demonstrated in 73 5.4. Free End-Time Problems [17], such a hypothesis is stronger than the combination of (JPLC) with a modified version of (ESSINF) that we introduce below, and does not lead to a stratified conclusion in the spirit of Theorem 5.3. 
We thus proceed by extending the cited theorem for use in the stratified framework. To compare interval-trajectory pairs in the free end-time problem, we define (as in [64]) the metric d (([a, b], x), ([A,B], y)) := |a−A|+ |b−B|+ |x(a)− y(A)| + ∫ b∨B a∧A |ẋext(r)− ẏext(r)| dr, (5.12) where xext and yext are the constant extensions of x and y at their endpoints; for example, xext(t) =  x(a) if t < a, x(t) if a ≤ t ≤ b, x(b) if t > b. We may then call any feasible ([A∗, B∗], x∗) a W 1,1 local minimizer for (FETP) if there exists some δ0 > 0 such that `(a, x(a), b, x(b)) ≥ `(A∗, x∗(A∗), B∗, x∗(B∗)) for all ([a, b], x) feasible for (FETP) that are within δ0 of ([A∗, B∗], x∗) with respect to the metric d. With this notion in place, we are ready to state our result. We note again that we impose an additional constraint on the radius function, R, in requiring it to be bounded away from zero; the value of this will become clear in the course of the proof. Theorem 5.7. Let ([A∗, B∗], x∗) be a W 1,1 local minimizer for the free end-time problem (FETP) such that B∗ > A∗. Suppose that F (t, x) is nonempty and closed for all (t, x), and that F satisfies both (ESSINF) and (JPLC) with a radius function R that satisfies R(t) ≥ R0 > 0 for some constant R0 for all t, a function kF and a constant ε, all shared by both conditions. Then there exist arcs p ∈W 1,1 ([A∗, B∗];Rn) , h ∈W 1,1 ([A∗, B∗];R) 74 5.4. Free End-Time Problems and a number λ0 ∈ {0, 1} such that the following properties hold: • Nontriviality (N free): (λ0, p(t)) 6= 0 ∀t ∈ [A∗, B∗] • Transversality (T free): (−h(A∗), p(A∗), h(B∗),−p(B∗)) ∈ ∂Lλ0` (A∗, x∗(A∗), B∗, x∗(B∗)) + NLC (A∗, x∗(A∗), B∗, x∗(B∗)) • Euler inclusion (Efree), for almost every t ∈ [A∗, B∗]:( −ḣ(t), ṗ(t) ) ∈ co{(β1, β2) : (β1, β2, p(s)) ∈ NLgphF (t, x∗(t), ẋ∗(t))} • Radius RWeierstrass condition (W freeR ), for almost every t ∈ [A∗, B∗]: 〈p(t), v〉 ≤ 〈p(t), ẋ∗(t)〉 ∀v ∈ F (t, x∗(t)) ∩ B[ẋ∗(t);R(t)] If F satisfies (ESSINF) and (JPLC) as above for a sequence of radius func- tions Ri (allowing all parameters to depend on i) with lim inf i→∞ Ri(t) = ∞ a.e. t, then conclusions (N free), (T free), and (Efree) hold for some h, p, λ0, where p satisfies the global Weierstrass condition (W free): 〈p(t), v〉 ≤ 〈p(t), ẋ∗(t)〉 ∀v ∈ F (t, x∗(t)) a.e. t ∈ [a, b] and moreover we have the Hamiltonian relation (Hfree): h(t) = max v∈F (t,x∗(t)) 〈p(t), v〉 = H (t, x∗(t), p(t)) a.e. t ∈ [a, b] Proof. We proceed in the manner of [64], employing a time reparametriza- 75 5.4. Free End-Time Problems tion to translate (FETP) into a fixed-interval problem which reuses the cost function, `; the change of variables we employ does not alter the cost function value of the corresponding trajectories, thus we retain the local minimum when moving to the fixed-interval problem. The key issues to resolve are translation of the hypotheses to establish those of Theorem 5.3, and then the translation of the resulting arc back to the free end-time problem. By our definition of a local W 1,1 minimizer, there exists some δ′ > 0 such that ([A∗, B∗], x∗) provides the minimum in (FETP) over all feasible trajectories satisfying d (([A,B], x), ([A∗, B∗], x∗)) < 5δ′. We will use this δ′ in the localization considerations below. We first choose ϕ ∈ C2([A∗, B∗]) to approximate x∗ as follows: ϕ(A∗) = x∗(A∗) and d (([A∗, B∗], ϕ), ([A∗, B∗], x∗)) < δ′. (5.13) Then we select some ᾱ ∈ (0, 12) such that ᾱ||ϕ̇||L∞ ≤ 1 and ᾱ||ϕ̇||L∞ |B∗ −A∗| < δ′. 
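Before continuing with the construction, we note that the metric (5.12) is straightforward to evaluate numerically. The sketch below is an added illustration rather than thesis code; the helper name d_metric and the grid-based quadrature are assumptions for this example only, and the constant extensions enter solely through their vanishing derivatives outside each trajectory's own interval.

```python
import numpy as np

# An added numerical illustration of the interval-trajectory metric d
# in (5.12).  A trajectory is described by its interval, initial value
# and derivative; the constant extension has zero derivative outside
# the trajectory's own interval.
def d_metric(a, b, x_a, xdot, A, B, y_A, ydot, n_grid=20001):
    lo, hi = min(a, A), max(b, B)
    r = np.linspace(lo, hi, n_grid)
    xd = np.where((r >= a) & (r <= b), xdot(r), 0.0)   # x_ext' vanishes off [a, b]
    yd = np.where((r >= A) & (r <= B), ydot(r), 0.0)   # y_ext' vanishes off [A, B]
    integrand = np.abs(xd - yd)
    integral = np.sum((integrand[1:] + integrand[:-1]) * np.diff(r)) / 2.0
    return abs(a - A) + abs(b - B) + abs(x_a - y_A) + integral

# x(t) = t on [0, 1] versus y(t) = t on [0, 1.2], same initial value:
# the endpoint terms contribute 0.2 and the integral over [1, 1.2]
# (where only the extension of x is active) contributes another 0.2.
val = d_metric(0.0, 1.0, 0.0, lambda r: np.ones_like(r),
               0.0, 1.2, 0.0, lambda r: np.ones_like(r))
print(round(val, 3))  # 0.4
```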
We next choose Γ ∈ (0, 1) and then α : [A∗, B∗]→ [0, ᾱ] such that α(s) ≤ min { ᾱ, (1− Γ)R(s) (1 + |ϕ̇(s)|) R(s) + |ẋ∗(s)| } . (5.14) (We set α(s) = 0 if the second entry is undefined, which may occur on a set of at most Lebesgue measure zero.) We then define ρ(s) := α(s) 1 + |ϕ̇(s)| ≥ 0, (5.15) along with the function F̃ : [A∗, B∗]× [ A∗ − δ′, B∗ + δ′ ]× Rn × R+→R× Rn × R 76 5.4. Free End-Time Problems defined by F̃ (s, τ, y, z) =  wwv ξ  : w ∈ [1− ρ(s), 1 + ρ(s)], v ∈ F (τ, y), ξ ≥ |wv − ϕ̇(τ)|  , (5.16) and D(τ0, y0, z0, τ1, y1, z1) = |τ0 −A∗| + |τ1 −B∗| + |y0 − ϕ(A∗)| + |y1 − ϕ(B∗)| + ∫ A∗∨τ0 A∗ |ϕ̇(σ)| dσ + ∫ B∗ B∗∧τ1 |ϕ̇(σ)| dσ + |z1 − z0| , and consider the fixed interval problem: (XP)  Minimize ` (τ(A∗), y(A∗), τ(B∗), y(B∗)) over (τ, y, z) ∈W 1,1 ([A∗, B∗];R1+n+1) (τ̇ , ẏ, ż) ∈ F̃ (s, τ, y, z) a.e. s ∈ [A∗, B∗] (τ(A∗), y(A∗), τ(B∗), y(B∗)) ∈ C D (τ(A∗), y(A∗), z(A∗), τ(B∗), y(B∗), z(B∗)) ≤ δ′. Next, we present what amounts to a revision of the proof in [64] in showing that the trajectory (τ∗(s), y∗(s), z∗(s)) = ( s, x∗(s), ∫ s A∗ |ẋ∗(σ)− ϕ̇(σ)| dσ ) is a minimizer for (XP); we keep the spirit of the argument there but adapt it for clarity. Let (τ, y, z) be a feasible triple in (XP). According to the Generalized Filippov Selection Theorem ([64] Theorem 2.3.13), there exist measurable functions w and v such that w(s) ∈ [1− ρ(s), 1 + ρ(s)], v(s) ∈ F (τ(s), y(s)) a.e. 77 5.4. Free End-Time Problems τ̇(s) = w(s), ẏ(s) = w(s)v(s), ż(s) ≥ |w(s)v(s)− ϕ̇(τ(s))| a.e., and since (τ, y, z) is feasible by assumption, D (τ(A∗), y(A∗), z(A∗), τ(B∗), y(B∗), z(B∗)) ≤ δ′. (5.17) Our first task is to demonstrate that (τ, y, z) being “local” using D in (XP) implies that the corresponding solution, with time reparametrized by the τ variable, is “local” using d in (FETP). We note that τ is an absolutely con- tinuous, strictly increasing function, and we may thus convert freely between y, defined on [A∗, B∗] in (XP), and its free-end-time equivalent in (FETP) by defining x(t) = y ( τ−1(t) ) , so x(τ(s)) = y(s). We will maintain this convention of s as the independent (fictional time) variable in (XP) and t as the time variable in (FETP) as we proceed, and to simplify the notation we define A = τ(A∗), B = τ(B∗), the endpoints of the τ coordinate; we thus pair our (XP) trajectory (τ, y, z) with the (FETP) trajectory ([A,B], x). We continue via distance estimates involving the smoother, constructed function ϕ: d (([A,B], x), ([A∗, B∗], x∗)) ≤ d (([A,B], x), ([A,B], ϕext)) + d (([A,B], ϕext), ([A∗, B∗], ϕ)) + d (([A∗, B∗], ϕ), ([A∗, B∗], x∗)) . In examining the right hand side of this estimate, we note that third term is smaller than δ′ by our constraints (5.13) in the construction of ϕ, and recall 78 5.4. Free End-Time Problems the definition of d to probe the other two separately: d (([A,B], ϕext), ([A∗, B∗], ϕ)) = |A−A∗|+ |B −B∗|+ |ϕext(A)− ϕ(A∗)|+ ∫ B∨B∗ A∧A∗ |ϕ̇ext(t)− ϕ̇ext(t)| dt = |τ(A∗)−A∗|+ |τ(B∗)−B∗|+ ∣∣∣∣∫ A∗∨A A∗ ϕ̇(t)dt ∣∣∣∣+ 0 ≤ |τ(A∗)−A∗|+ |τ(B∗)−B∗|+ ∫ A∗∨τ(A∗) A∗ |ϕ̇(t)| dt ≤ δ′ by (5.17), and d (([A,B], x), ([A,B], ϕext)) = |A−A|+ |B −B|+ |x(A)− ϕext(A)|+ ∫ B A |ẋ(t)− ϕ̇ext(t)| dt = 0 + 0 + |x(τ(A∗))− ϕext(τ(A∗))|+ ∫ B∗ A∗ |v(s)w(s)− ϕ̇ext(τ(s))w(s)| ds ≤ |y(A∗)− ϕ(A∗)|+ |ϕext(A)− ϕ(A∗)|+ ∫ B∗ A∗ |w(s)(v(s)− ϕ̇ext(τ(s)))| ds ≤ |y(A∗)− ϕ(A∗)|+ ∫ A∗∨τ(A∗) A∗ |ϕ̇(t)| dt+ ∫ B∗ A∗ |v(s)w(s)− ϕ̇ext(τ(s))w(s)| ds. 
We estimate this final integral term on its own:∫ B∗ A∗ |v(s)w(s)− ϕ̇ext(τ(s))w(s)| ds ≤ ∫ B∗ A∗ |v(s)w(s)− ϕ̇ext(τ(s))|+ |ϕ̇ext(τ(s))− ϕ̇ext(τ(s))w(s)| ds ≤ ∫ B∗ A∗ ż(s)ds+ ∫ B∗ A∗ ρ(s) |ϕ̇ext(τ(s))| ds ≤ z(B∗)− z(A∗) + ᾱ||ϕ̇||L∞ |B∗ −A∗| ≤ z(B∗)− z(A∗) + δ′, 79 5.4. Free End-Time Problems according to properties of ᾱ. Applying this estimate to the previous calcu- lation and recalling our locality assumption in (XP), we conclude that d (([A,B], x), ([A,B], ϕext)) ≤ 2δ′, whence d (([τ(A∗), τ(B∗)], x), ([A∗, B∗], x∗)) ≤ 4δ′ < 5δ′. In other words, any feasible trajectory for (XP) corresponds to a feasible interval-trajectory pair in (FETP). In particular, since both problems share the cost function, `, our candidate (τ∗, y∗, z∗) in (XP) will have the same (minimal) cost as ([A∗, B∗], x∗) in (FETP), and is therefore a minimizer in the fixed-time problem. Further, again as in [64], we note that the final constraint of (XP) is inactive for endpoints near those of (τ∗, x∗, z∗) (enforced by the constructed auxiliary constants and functions); this ensures the feasible trajectories are local to the minimum, but the constraint may then be omitted in our state- ment of necessary conditions. In verifying that our hypotheses are sufficient to apply Theorem 5.3 to (XP), we begin by showing that (JPLC) for F implies (PLC) for F̃ for a radius related to R. Let (τ1, y1, z1) ∈ B [(τ∗(s), y∗(s), z∗(s)); ε] = B [(s, x∗(s), z∗(s)); ε] and w1 ∈ [1− ρ(s), 1 + ρ(s)] , v1 ∈ F (τ1, y1), ξ1 ≥ |w1v1 − ϕ̇(τ1)| so that (w1, w1v1, ξ1) ∈ F̃ (s, τ1, y1, z1)∩ B̄ [ (1, ẋ∗(s), ż∗(s)) ; R̃(s) ] , where we reserve our definition of R̃(s) for a moment, assuming for now only that it 80 5.4. Free End-Time Problems is positive. We observe that |w1v1 − ẋ∗(s)| ≤ R̃(s) |w1v1 − w1ẋ∗(s)| − |w1ẋ∗(s) − ẋ∗(s)| ≤ R̃(s) (1− ρ(s)) |v1 − ẋ∗(s)| − ρ(s) |ẋ∗(s)| ≤ R̃(s) |v1 − ẋ∗(s)| ≤ R̃(s) + ρ(s) |ẋ∗(s)|1− ρ(s) . Recalling the definitions of α in (5.14) and ρ in (5.15), setting R̃(s) := (1− ρ(s))R(s) − ρ(s) |ẋ∗(s)| ensures both that ΓR(s) ≤ R̃(s) ≤ R(s) and |v1 − ẋ∗(s)| ≤ R(s). As v1 ∈ F (τ1, y1) by definition, this latter inequality allows us to invoke (JPLC), and we may thus express v1 = v2 + kF (s) |(τ1, y1)− (τ2, y2)| b1 for some v2 ∈ F (τ2, y2) and some b1 ∈ Bn. If we define w2 = w1, then we have already established the necessary inclusion for (PLC) of F̃ at (s, x∗(s), z∗(s)) for (1, ẋ∗(s), ż∗(s)) of radius R̃ in the top two coordinates; it remains to express ξ1 in terms of an appropriate ξ2. Since |w1v1 − ϕ̇(τ1)| ≤ ξ1 ≤ |ẋ∗(s)− ϕ̇(s)| , we will require |w2v2 − ϕ̇(τ2)| to be yet smaller but with a pseudo-Lipschitz modulus function term permitted. We calculate: |w2v2 − ϕ̇(τ2)| = |w1 (v1 − kF (s) |(τ1, y1)− (τ2, y2)| b1) − ϕ̇(τ2)| ≤ |w1v1 − ϕ̇(τ1)| + |ϕ̇(τ2) − ϕ̇(τ1)| + w1kF (s) |(τ1, y1)− (τ2, y2)| ≤ |w1v1 − ϕ̇(τ1)| + Φ |τ2 − τ1| + (1 + ρ(s)) kF (s) |(τ1, y1)− (τ2, y2)| 81 5.4. Free End-Time Problems where Φ is the largest magnitude of |ϕ̈| over [A∗ − ε,B∗ + ε]. Thus, even in the “worst case” of ξ1 = |w1v1 − ϕ̇(τ1)|, we may still choose an appropriate ξ2 to satisfy the (PLC) inclusion by taking as our pseudo-Lipschitz function k̃F̃ = (1 + ρ(s))kF (s) + Φ; we have established |w2v2 − ϕ̇(τ2)| − k̃F̃ (s) |(τ1, y1)− (τ2, y2)| ≤ |w1v1 − ϕ̇(τ1)| ≤ ξ1. (5.18) Having demonstrated (PLC) for F̃ , we examine the ratio of R̃ and k̃F̃ . By construction, R̃(s) ≥ ΓR(s) and with k̃F̃ as above, (ESSINF) for F̃ is clear when we recall that essential infimum of R alone is bounded away from zero by R0. 
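The bound ΓR(s) ≤ R̃(s) ≤ R(s) used above follows from the choice of α in (5.14), since R̃ = R − ρ(R + |ẋ∗|) and ρ ≤ (1 − Γ)R/(R + |ẋ∗|). The short randomized check below is only an added illustration of this inequality; the sampling ranges are arbitrary assumptions.

```python
import numpy as np

# Randomized sanity check (an added illustration, not part of the
# thesis) of Gamma*R <= R_tilde <= R, where
# R_tilde = (1 - rho)*R - rho*|x*'| and rho = alpha/(1 + |phi'|)
# with alpha chosen as in (5.14).
rng = np.random.default_rng(0)
for _ in range(10000):
    Gamma = rng.uniform(0.01, 0.99)
    R = rng.uniform(0.1, 10.0)          # radius R(s) > 0
    xdot = rng.uniform(0.0, 10.0)       # |x*'(s)|
    phidot = rng.uniform(0.0, 10.0)     # |phi'(s)|
    alpha_bar = rng.uniform(0.0, 0.5)
    alpha = min(alpha_bar, (1 - Gamma) * R * (1 + phidot) / (R + xdot))
    rho = alpha / (1 + phidot)
    R_tilde = (1 - rho) * R - rho * xdot
    assert Gamma * R - 1e-12 <= R_tilde <= R + 1e-12
print("Gamma*R <= R_tilde <= R verified on random samples")
```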
As our new system satisfies the hypotheses of Theorem 5.3, we may conclude that there exist adjoint arc components h : [A∗, B∗]→ R, p : [A∗, B∗]→ Rn, γ : [A∗, B∗]→ R and λ0 ∈ {0, 1} such that we have • nontriviality (Ñ):λ0,  −h(s)p(s) γ(s)   6= 0 ∀s ∈ [A∗, B∗], • this transversality condition (T̃ ), with γ(B∗) = 0: −h(A∗) p(A∗) h(B∗) −p(B∗)  ∈ ∂Lλ0`   A∗ x∗(A∗) B∗ x∗(B∗)   + NLC   A∗ x∗(A∗) B∗ x∗(B∗)   (which is (T free) already), 82 5.4. Free End-Time Problems • this Euler inclusion (Ẽ) for almost every s ∈ [A∗, B∗]:( −ḣ(s), ṗ(s), γ̇(s) ) ∈ co   β1β2 β3  :  β1 β2 β3 −h(s) p(s) γ(s)  ∈ NL gph F̃ (s,·,·,·)   s x∗(s) z∗(s) 1 ẋ∗(s) ż∗(s)    , • and this radius R̃ Weierstrass condition (W̃R̃), a.e. s ∈ [A∗, B∗]: 〈 −h(s)p(s) γ(s)  ,  wwv ξ 〉 ≤ 〈  −h(s)p(s) γ(s)  ,  1ẋ∗(s) ż∗(s) 〉 ∀  wwv ξ  ∈ F̃ (s, τ∗(s), x∗(s), z∗(s)) ∩ B   1ẋ∗(s) ż∗(s)  ; R̃(s)  . Our final step is to translate these conclusions for (XP) back to our original (FETP). We begin by expressing the inequality in (W̃R̃) as: −wh(s) + w 〈p(s), v〉 + wγ(s) |wv − ϕ̇(s)| ≤ −h(s) + 〈p(s), ẋ∗(s)〉 + γ(s) |ẋ∗(s)− ϕ̇(s)| (5.19) for all v ∈ F (s, x∗(s)) and w ∈ [1− ρ(s), 1 + ρ(s)]; we have used |wv − ϕ̇(s)| as the minimum of the relevant ξ. We may deconstruct (Ẽ) via analysis of proximal normals. Let  β1β2 β3  ,  −h(s)p(s) γ(s)   ∈ NLgph F̃ (s,·,·,·)   sx∗(s) z∗(s)  ,  1ẋ∗(s) ż∗(s)   . 83 5.4. Free End-Time Problems This means there exist both a sequence of base points in gph F̃ (s, ·, ·)  sixi zi  ,  wiwivi ζi   −→   sx∗(s) z∗(s)  ,  1ẋ∗(s) ż∗(s)   , and a sequence of proximal normals  β1,iβ2,i β3,i  ,  −hipi γi   −→   β1β2 β3  ,  −h(s)p(s) γ(s)   satisfying  β1,iβ2,i β3,i  ,  −hipi γi   ∈ NPgph F̃ (s,·,·,·)   sixi zi  ,  wiwivi ζi   . There exists a quadratic function Qi with a minimum of zero at the i-th base point αi = (si, xi, zi, wi, wivi, ζi) such that, if we use the notational convention that α = (σ, x, z, w, v, ζ), Qi (α) ≥ 〈(β1,i, β2,i, β3,i,−hi, pi, γi) , α− αi〉 whenever (w,wv, ζ) ∈ F̃ (s, σ, x, z). This inequality implies that the function W 0i (α) = Qi (α)− 〈(β1,i, β2,i, β3,i,−hi, pi, γi) , α〉 when subject to the constraint (w,wv, ζ) ∈ F̃ (s, σ, x, z), has a minimum when α = αi. We may unpack F̃ to express the constraint inclusion as the set of three constraints: v ∈ F (σ, x), w ∈ [1− ρ(s), 1 + ρ(s)], ζ ≥ |wv − ϕ̇(σ)| . 84 5.4. Free End-Time Problems With this in mind, we see that the function Wi defined by Wi (σ, x, z, w, v, ζ) = W 0i (σ, x, z, w, v, ζ) + δgphF (σ, x, v) + δ[1−ρ(s),1+ρ(s)](w) + γi (|wv − ϕ̇(σ)| − ζ)+ shares (with no constraints) this same minimizer with W 0i (where explicit constraints were enforced). A necessary condition for this minimality is that the zero vector is in the subgradient of Wi evaluated at the minimizer, αi. We calculate, using the Sum Rule (see [64] Theorem 5.4.1, among others): ∂Wi (αi) ⊆ (−β1,i,−β2,i,−β3,i, hi − 〈pi, vi〉 ,−wipi,−γi) + {(a1, a2, 0, 0, a5, 0) |(a1, a2, a5) ∈ NgphF (si, xi, wivi)} + { (0, 0, 0, a4, 0, 0) ∣∣∣a4 ∈ NP[1−ρ(s),1+ρ(s)](wi)} + γi {λ (〈−ϕ̈(σ), u〉 , 0, 0, 〈wivi, u〉 , wiu,−1)) |λ ∈ [0, 1], u ∈ Bn } . 
Using (0, 0, 0, 0, 0, 0) ∈ ∂Wi (si, xi, zi, wi, wivi, ζi), we collect terms in each coordinate: (β1,i, β2,i, wipi) ∈ NPgphF (si, xi, wivi) + (−λγi 〈ϕ̈(σ), u〉 , 0, λγiwiu) , (5.20) β3,i = 0, (5.21){ hi + 〈vi, γiwiu− pi〉 = 0 if ρ(s) > 0 and i suff. large s.t. wi ∈ (1− ρ(s), 1 + ρ(s)), (5.22) (1 + λ)γi = 0. (5.23) Taking the limit as i→∞ in (5.21) and (5.23), we have β3 = 0 and γ(s) = 0, hence γ ≡ 0. Accounting for this, the limit in (5.20) gives precisely (Efree). Our new information about γ transforms the inequality in (5.19) to −wh(s) + w 〈p(s), v〉 ≤ −h(s) + 〈p(s), ẋ∗(s)〉 , 85 5.4. Free End-Time Problems which we may rearrange to (1− w)h(s) + w 〈p(s), v〉 ≤ 〈p(s), ẋ∗(s)〉 . (5.24) By construction, α(s) is positive for almost every s, hence so is ρ(s). For any such s, if we choose w = 1 + ρ(s), we have 〈p(s), v〉 − h(s) ≤ 1 ρ(s) 〈p(s), ẋ∗(s)− v〉 , while w = 1− ρ(s) leads to h(s) − 〈p(s), v〉 ≤ 1 ρ(s) 〈p(s), ẋ∗(s)− v〉 . Recalling that the inequality holds for all v ∈ F (s, x∗(s)) ∩ B[ẋ∗(s); R̃(s)] (which certainly includes ẋ∗(s)), we deduce that h(s) = 〈p(s), ẋ∗(s)〉. This, in turn, forces h(s) = 0 whenever p(s) = 0, provided that α(s) > 0. If p ≡ 0 then h(s) = 0 for almost every s, whence h ≡ 0 since it must be continuous. With γ ≡ 0 in (Ñ), we deduce (N free). At this stage, we proceed in a separate manner for each of the stratified and global Weierstrass conditions. We recover a result we may call (W free R̃ ) by choosing w = 1 in (5.24) to obtain 〈p(s), v〉 ≤ 〈p(s), ẋ∗(s)〉 ∀v ∈ F (s, x∗(s)) ∩ B[ẋ∗(s); R̃(s)]. This holds for every s since w = 1 is available regardless of the value of α. So far in our analysis, we have placed no further restrictions on the constant Γ. We may thus build a sequence of inequalities corresponding to a sequence Γi → 1. Passing to the limit, we recover some p where 〈p(s), v〉 ≤ 〈p(s), ẋ∗(s)〉 ∀v ∈ F (s, x∗(s)) ∩ B[ẋ∗(s);R(s)]. Wemay conclude the same when the open ball is replaced with B[ẋ∗(s);R(s)] by taking limits of vectors in the open ball intersection as needed (these do 86 5.5. Proof of Main Results not disrupt the inequality); this is (W freeR ). Alternatively, if we fix Γ and have a growing sequence of radius functions Ri in our hypotheses for F , we generate a sequence of systems with R̃i which also grow (since R̃i ≥ ΓRi); this permits us to introduce the global Weierstrass condition of Theorem 5.3 in place of (W̃R̃). The translation back to the free end-time problem is similar to that above, but simpler as the radius no longer appears. The conclusion (Hfree) also follows directly. As remarked in [64], we may draw further conclusions in the case where the differential inclusion map is autonomous (depends only on the state variable and not the time). Corollary 5.2. In addition to the hypotheses of Theorem 5.7, assume that F (t, x) = F (x) only, and hence satisfies (PLC) rather than (JPLC). Then the necessary conditions of Theorem 5.7 still hold, but h is constant and (Efree) may be replaced by the Extended Euler inclusion (EEfree): ṗ(t) ∈ co { ω : (ω, p(t)) ∈ NLgphF (t,·) (x∗(t), ẋ∗(t)) } a.e. t ∈ [a, b] Proof. To obtain the new conclusion, we combine (T free), (Efree) to see that h is constant. Then (EEfree) follows directly from (Efree). 5.5 Proof of Main Results With the requisite results in place, our path is now clear: we time stretch the trajectories of our impulsive system to obtain a bundle of trajectories feasible for a free end-time optimal control problem with a higher-dimensional state and adapted endpoint set. 
The translated optimal trajectory will give a minimum in this new, free end-time problem, which we reparametrize to a fixed interval problem where the necessary conditions of [17] apply. We finish the stratified result by translating the conclusions back through the two stages of time reparametrization. Finally, we consider the global case where a sequence of radius functions is present. 87 5.5. Proof of Main Results 5.5.1 Proof of Theorem 5.4 Proof. Phase 1: We start by “stretching” all of the impulsive trajectories that are near (i.e. all included in the definition of local minimum) to the optimal one we are provided for (P). The result is a bundle of F-trajectories on stretched time intervals that in general will have different endpoints. We consider these in the context of a free-end-time optimal control problem in s-time, with endpoint set C = { {0} × [ {0} c0 ] × [T,∞)× [ {T} cT ]∣∣∣∣∣ (c0, cT ) ∈ C } (5.25) and localization distance d defined in (5.12) of Section 5.4. The distance m defined in (5.2) matches the distance value for d, where the s-interval endpoints may differ: d (([0, S1], (θ1, y1)), ([0, S2], (θ2, y2))) = |0− 0|+ |S1 − S2| + |y1(0)− y2(0)|+ ∫ S1∨S2 0 ∣∣ẏext1 (s)− ẏext2 (s)∣∣ ds = m ((x1, X1), (x2, X2)) . Thus we have a one-to-one relation between solutions “near” (x∗, X∗) in the impulsive system and solutions “near” ([0, S∗], (θ∗, y∗)) in the free end-time system that uses endpoint set C. If we use the cost function: ˜̀(0, y(0), S, y(S)) = `(y(0), y(S)) = `(x(0−), x(T )), then we have established a free end-time optimal control problem in the form of Section 5.4. By employing the same cost function, and with the distance defined above, we have established a preservation of the notion of local minimum under the time-stretching transformation (for the same lo- calization parameter, δ) provided there are no “additional” trajectories in 88 5.5. Proof of Main Results the new problem. We are assured of this last fact by the one-to-one corre- spondence of impulsive solutions and those in the stretched time, proved in [73]. Phase 2: Our next step is to determine a costate in this free end-time optimal control problem. We may invoke the relevant pair, either Lemma 5.5 with Lemma 5.7 or Lemma 5.6 with Lemma 5.8, that transfers regularity of our impulsive MDI maps to the new F system, and are then ready to apply Corollary 5.2 to the autonomous inclusion map F . According to Corollary 5.2, there exist arcs ζ ∈W 1,1 ([0, S∗];R) , ρ ∈W 1,1 ([0, S∗];Rn) , a constant h, and a number λ0 ∈ {0, 1} such that the following properties hold: • Nontriviality (N free): (λ0, ζ(s), ρ(s)) 6= 0 ∀s ∈ [0, S∗] • Transversality (T free): −h −ζ(0) ρ(0) h ζ(S∗) −ρ(S∗)  ∈ ∂Lλ0 ˜̀   0 θ∗(0) y∗(0) S∗ θ∗(S∗) y∗(S∗)   +NLC   0 θ∗(0) y∗(0) S∗ θ∗(S∗) y∗(S∗)   89 5.5. Proof of Main Results • Extended Euler inclusion (EEfree), for almost every s ∈ [0, S∗]:( −ζ̇(s), ρ̇(s) ) ∈ co (ω, v) :  ω v −ζ(s) ρ(s)  ∈ NLgphF   θ∗(s) y∗(s) θ̇∗(s) ẏ∗(s)    • Radius R Weierstrass condition (W freeR ):〈[ −ζ(s) ρ(s) ] , [ ω v ]〉 ≤ 〈[ −ζ(s) ρ(s) ] , [ θ̇∗(s) ẏ∗(s) ]〉 ∀ [ ω v ] ∈ F (θ∗(s), y∗(s)) ∩ B [[ θ̇∗(s) ẏ∗(s) ] ;R(s) ] a.e. 
s ∈ [0, S∗] We begin our conversion back to the impulsive system with the transver- sality condition, (T free), which implies h ≤ 0 if S∗ = T or h = 0 if S∗ > T , and (ρ(0),−ρ(S∗)) ∈ ∂Lλ0` (y∗(0), y∗(S∗)) + NLC (y∗(0), y∗(S∗)) , which is equivalent to (ρ(0),−ρ(S∗)) ∈ ∂Lλ0` ( x∗(0−), x∗(T ) ) + NLC ( x∗(0−), x∗(T ) ) . (5.26) We next examine (W freeR ); we have − ωζ(s) + 〈ρ(s), v〉 ≤ −θ̇∗(s)ζ(s) + 〈ρ(s), ẏ∗(s)〉 (5.27) ∀(ω, v) ∈ F (θ∗(s), y∗(s)) ∩ B [( θ̇∗(s), ẏ∗(s) ) ;R(s) ] a.e. s ∈ [0, S∗], where we may rewrite v = fω + gb for f ∈ F (θ∗(s), y∗(s)), g ∈ G(θ∗(s), y∗(s)), 90 5.5. Proof of Main Results b ∈ Bm1 ∩K with 1− |b|1 = ω. We may select ω = θ̇∗(s) to obtain:〈 ρ(s), f θ̇∗(s) + gb 〉 ≤ 〈 ρ(s), f∗θ̇∗(s) + g∗b∗ 〉 , for all b such that 1− |b|1 = θ̇∗(s), from which we conclude〈 ρ(s), f + g b ω 〉 ≤ 〈ρ(s), ẋ∗(θ∗(s))〉 , if θ̇∗(s) > 0, (5.28) and 〈ρ(s), gb〉 ≤ 〈ρ(s), g∗b∗〉 , if θ̇∗(s) = 0. (5.29) We deconstruct (EEfree) by an analysis of proximal normals. To begin, we let (ω, ν,−ζ(s), ρ(s)) ∈ NLgphF (θ∗(s), y∗(s), θ̇∗(s), ẏ∗(s)). There exists a sequence (ωi, νi,−ζi, ρi) converging to (ω, ν,−ζ(s), ρ(s)) and satisfying (ωi, νi,−ζi, ρi) ∈ NPgphF (θi, yi, zi, vi) for some (θi, yi, zi, vi) that is a sequence of vectors with limit (θ∗, y∗, z∗, v∗) as i → ∞. For each i, we look at the proximal normals using (θi, yi, zi, vi) as a base point and then examine limits of such normals in determining the limit normal cone at (θ∗, y∗, z∗, v∗). Inclusion in the proximal normal cone is equivalent to the existence of a function σ = σ(θ, y, z, v) that is quadratic near (θi, yi, zi, vi) with a minimum of 0 at (θi, yi, zi, vi) such that σ(θ, y, z, v) ≥ ωi(θ − θi) + 〈νi, (y − yi)〉 − ζi(z − zi) + 〈ρi, (v − vi)〉 , under the constraint that (z, v) ∈ F(θ, y). We may thus conclude that the function Γ0(θ, y, z, v) = σ(θ, y, z, v)− ωiθ − 〈νi, y〉+ ζiz − 〈ρi, v〉 constrained by (z, v) ∈ F(θ, y), has a minimum of 0 at (θi, yi, zi, vi). Rewrit- 91 5.5. Proof of Main Results ing the variable z = 1 − |b|1 for b ∈ Bm1 ∩ K, there is some bi such that vi = fi(1− |bi|1)+ gibi for appropriate selections from F and G (uniqueness is not essential in what follows), and we see that the function: Γ(θ, y, v) = Γ0(θ, y, 1− |bi|1, v) + δgphF (·,·)(1−|bi|1)+G(·,·)bi(θ, y, v) also has a minimum (with no constraints) at (θi, yi, vi). A necessary con- dition for this minimality is that (0, 0, 0) is an element of the subgradient ∂Γ(θi, yi, vi), which in turn satisfies the inclusion: ∂Γ(θi, yi, vi) ⊆ (−ωi,−νi,−ρi) + NPgphF (·,·)(1−|bi|1)+G(·,·)bi(θi, yi, vi) by the Sum Rule. We deduce that (ωi, νi, ρi) ∈ NPgphF (·,·)(1−|bi|1)+G(·,·)bi(θi, yi, vi), and, passing to the limit, we obtain a preliminary (time-stretched) version of the Euler condition:( −ζ̇(s), ρ̇(s) ) ∈ co { (ω, ν) ∣∣∣(ω, ν, ρ(s)) ∈ NL gphF (·,·)θ̇∗(s)+G(·,·)b∗(s) (θ∗(s), y∗(s), ẏ∗(s)) } . (5.30) Here, we have included the convex hull, as the limit points we draw from (EEfree) are of this form. Phase 3: Finally, we translate these necessary conditions back to the original impulsive MDI system. We will determine the costate p based on the ρ calculated above via a time transformation using the optimal time reparametrization, θ∗. Just as we insist that x∗(θ∗(s)) = y∗(s) when θ̇∗(s) > 0, we will define p(θ∗(s)) = ρ(s) and q(θ∗(s)) = ζ(s) for the same s, and consider ρ in terms of a graph completion for p associated with the optimal measure µ∗. We recover (N) as well as a transversality condition based on (5.26): ( p(0−),−p(T )) ∈ ∂Lλ0` (x∗(0−), x∗(T )) + NLC (x∗(0−), x∗(T )) . 92 5.5. 
Proof of Main Results The Euler inclusion is in multiple parts; when θ̇∗(s) > 0, (5.30) becomes (−q̇(θ∗(s)), ṗ(θ∗(s))) θ̇∗(s) ∈ co (ω, ν) ∣∣∣∣∣∣∣(ω, ν, ρ(s)) ∈ NLgphF (·,·)θ̇∗(s)+G(·,·)b∗(s)   θ∗(s)x∗(θ∗(s)) ẋ∗(θ∗(s))θ̇∗(s)    , where we may “divide out” by θ̇∗(s) to obtain (−q̇(θ∗(s)), ṗ(θ∗(s))) ∈ co { (w, v) ∣∣∣∣(w, v, p(t)) ∈ NLgphF (·,·)+G(·,·) b∗(s) θ̇∗(s) (t, x∗(t), ẋ∗(t)) } , and note that b∗(s) θ̇∗(s) ∈ K to conclude (−q̇(t), ṗ(t)) ∈ co { (w, v) ∣∣∣(w, v, p(t)) ∈ NLgphF (·,·)+G(·,·)K (t, x∗(t), ẋ∗(t))} . (5.31) Meanwhile, for θ̇∗(s) = 0, (5.30) already provides the correct inclusion. The Weierstrass condition of radius R0 is also in multiple parts; once we define R0(θ∗(s))θ̇∗(s) = R(s), (5.32) we have a statement for the absolutely continuous system: 〈p(t), v〉 ≤ 〈p(t), ẋ∗(t)〉 , ∀v ∈ (F +Gκ) ∩ B [ẋ∗(t);R0(t)] , (5.33) for any κ with |κ| = b θ̇∗(η−1(t)) , which follows from (5.28), and one for each of the “jump” subsystems given already by (5.29). This produces a radius function which shrinks near jump times, but whose stretched-time counter- part is bounded away from zero in the jump subsystems. We now proceed in our proof of the global conditions. 93 5.5. Proof of Main Results 5.5.2 Proof of Theorem 5.5 Proof. We pick up where the previous proof leaves off, with the stratified Weierstrass conditions for a possibly shrinking radius function. This shrink- ing, however, does not ruin our conclusions in the global case, where a sequence of radius functions is provided; in that case, we use the same translation for (W free) as we did for (W freeR ), but have no radius to con- sider, and we may assert our global Weierstrass condition in the impulsive system. Finally, we consider the Hamiltonian condition in the global case. Con- dition (W free) states: −ωζ(s) + 〈ρ(s), v〉 ≤ −θ̇∗(s)ζ(s) + 〈ρ(s), ẏ∗(s)〉 ∀(ω, v) ∈ F (θ∗(s), y∗(s)) a.e. s ∈ [0, S∗], which we may rewrite for each s as −ωζ(s) + 〈ρ(s), fω + gb〉 ≤ −θ̇∗(s)ζ(s) + 〈 ρ(s), f∗θ̇∗(s) + g∗b∗ 〉 ∀ω ∈ [0, 1], f ∈ F (θ∗(s), y∗(s)) , g ∈ G (θ∗(s), y∗(s)) , b ∈ Bm1 , |b|1 = 1− ω. (5.34) We set f = f∗, g = g∗, and will determine properties of ζ(s) by perturbation, using values of b near b∗. When b∗ 6= 0, we set b = (1 + γ)b∗ for small γ (positive or negative). This produces the inequality: − ( θ̇∗(s)− γ|b∗|1 ) ζ(s) + 〈 ρ(s), f∗ ( θ̇∗(s)− γ|b∗|1 ) + (1 + γ)g∗b∗ 〉 ≤ −θ̇∗(s) ζ(s) + 〈 ρ(s), f∗θ̇∗(s) + g∗b∗ 〉 , which is equivalent to γζ(s) ≤ γ 〈 ρ(s), f∗ − g∗ b∗|b∗|1 〉 . (5.35) When 0 < |b∗|1 < 1, we may choose γmax > 0 sufficiently small so that 94 5.5. Proof of Main Results (1 + γ1)b∗ and (1 − γ)b∗ are elements of Bm1 for any γ ∈ (0, γmax]. This freedom in our choice of γ forces an equality: ζ(s) = 〈 ρ(s), f∗ − g∗ b∗|b∗|1 〉 for 0 < |b∗|1 < 1. (5.36) When |b∗|1 = 1, we have θ̇(s) = 0, so f∗ is not defined, and we may only perturb using γ < 0. This leads to the inequality: ζ(s) ≥ 〈 ρ(s), f − g∗ b∗|b∗|1 〉 for |b∗|1 = 1,∀f ∈ F (θ∗(s), y∗(s)) . (5.37) When b∗ = 0, we perturb using small vectors b ∈ K with |b|1 = γ > 0. In this case, g∗ is not defined, and we deduce from (5.34) that γζ(s) + 〈ρ(s),−γf∗ + gb〉 ≤ 0, which leads to ζ(s) ≤ 〈 ρ(s), f∗ − g b|b|1 〉 for |b∗|1 = 0,∀g ∈ G (θ∗(s), y∗(s)) . (5.38) Now (Hfree) states: h = max (w,ν)∈F(θ∗(s),y∗(s)) −wζ(s) + 〈ρ(s), ν〉 = H̃ (θ∗(s), y∗(s), ρ(s)) (5.39) for almost every s ∈ [0, S∗]. Fix any s where both (W free) and (Hfree) apply. According to (T free), we have h ≤ 0 if S∗ = T or h = 0 if S∗ > T . 
Whenever θ̇∗(s) = 1, ẋ(θ∗(s)) = ẏ(s) = f∗(s), and we simplify (5.39) with (5.34) in this case, to obtain: h+ ζ(s) = 〈ρ(s), f∗(s)〉 . (5.40) 95 5.5. Proof of Main Results We combine this with the perturbation result (5.38) to produce: 0 ≥ h ≥ 〈 ρ(s), g b |b|1 〉 , ∀g ∈ G (θ∗(s), y∗(s)) , and q(t) ≤ 〈p(t), ẋ∗(t)〉 . (5.41) In fact, these inequalities are still true for any t where µ̄∗({t}) = 0 (even when we have θ̇∗(s) < 1 elsewhere). If the measure µ̄∗ is active, then S∗ > T and h = 0. Combining (5.39) with (5.34), using the optimality of the velocity ẏ(s), we have θ̇∗(s) ζ(s) = 〈 ρ(s), f∗θ̇∗(s) + g∗b∗ 〉 . For the case 0 < θ̇∗(s) < 1, this implies that q(t) = 〈p(t), ẋ∗(t)〉 when 0 < θ̇∗(s) < 1. Substituting from (5.36), we obtain both 0 = 〈p(t), g∗(t)µ̇∗ac(t)〉 , and q(t) = 〈p(t), f∗〉 . Finally, when θ̇∗(s) = 0, we have 0 = 〈ρ(s), ẏ∗(s)〉 , and use (5.37) to deduce that ζ(s) ≥ 〈ρ(s), f〉 ∀f ∈ F (θ∗(s), y∗(s)) . This completes our proof of the global conditions. We may compare this result with that of [64], where the costate for the 96 5.6. Final Considerations time reparametrization variable appears in the necessary conditions. The list of inequalities in the Hamiltonian condition recalls the necessary conditions in [55] and [43]; all of them are in the global context only, and our stratified necessary conditions are new. 5.6 Final Considerations We may also state a companion result for the autonomous case. Corollary 5.3. In addition to the hypotheses of Theorem 5.4, assume that F (t, x) = F (x) and G(t, x) = G(x) only, and hence each satisfies (PLC) rather than (JPLC). Then the necessary conditions of Theorem 5.4 still hold, but q ≡ 0 and (E) may be replaced by the Extended Euler inclusion (EE): ṗ(t) ∈ co{ω : (ω, p(t)) ∈ NLgphF+GK (x∗(t), ẋ∗(t))} a.e. t ∈ [0, T ] Proof. We note how (E) greatly simplifies in this case, and forces q to be constant via (5.30). It is a consequence of (W ) and analysis of the possible values taken by θ̇∗(s) that determine this constant value to be 0. Remark: In this setup, we have assumed that a local minimum exists in the impulsive problem where the minimizing process produces a finite stretched-time interval. While we have bounded the values of the measure in some sense using K, trajectories are permitted unlimited use of the jump dynamics in minimizing the cost function. This freedom does not appear practical in applications. For example, in a system where we seek to min- imize the norm of the final state (with no restriction on where that state must be), where G represents exponential decay to the origin and F = 0, there can be no feasible minimizer, as use of the impulsive G dynamics can always lower the cost function in an instant of time. With no bound on the total variation of the measure, any feasible trajectory (that is, one that actually ends at some x(T )) will be further from the origin than one which “jumps” for a greater amount, i.e., with a longer stretched time. 97 5.6. Final Considerations In terms of an impulsive model of a system with “fast” and “slow” dy- namics, this would seem to break our model, where we are using the “in- finitely fast” dynamics over an “infinitely long” period of time. Introducing a maximum size for the stretched-time interval is thus a natural concept; we will study this and related measure constraints in the next chapter. 
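The non-existence phenomenon described in this remark is easy to make concrete. The sketch below is an added illustration under stated assumptions (scalar state, F = 0, impulsive dynamics dx = −x dμ, so that an atom of total variation w acts through the jump subsystem y′ = −y over a stretched interval of length w and scales the state by e^{−w}); the cost |x(T)| can be pushed arbitrarily close to zero but is never attained, while a budget on the stretched-time interval restores existence.

```python
import numpy as np

# Added sketch of the remark above, under illustrative assumptions:
# F = 0 and impulsive dynamics dx = -x dmu, so an impulse of total
# variation w multiplies the state by exp(-w).
x0, T = 1.0, 1.0

def terminal_cost(w):
    # |x(T)| after spending total impulse mass w; with F = 0 the
    # placement of the impulse within [0, T] is irrelevant.
    return abs(x0) * np.exp(-w)

print([terminal_cost(w) for w in (1.0, 10.0, 100.0)])
# The values decrease toward 0, but no feasible trajectory attains 0:
# without a bound on the total variation of mu the infimum is not a
# minimum.  An impulse budget T + w <= S_max restores existence:
S_max = 3.0
print("optimal cost under the budget:", terminal_cost(S_max - T))
```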
As a final note regarding our auxiliary free end-time result in Section 5.4, we speculate that the extension of the free end-time work in [64] to Clarke's stratified framework may be possible in the case where the differential inclusion map is only integrable in the time variable (this is Theorem 8.4.1 of [64], whose proof is considerably more involved than the joint Lipschitz case we've extended). This strengthening would not have benefited our approach for the impulsive system, however, as the time reparametrization in Phase 1 of our main proof leads to an autonomous free end-time problem.

Our baseline result in terms of necessary conditions for impulsive systems of this form (with no state constraints) enjoys a more general differential inclusion than the state-of-the-art analogue in [30], as well as the other predecessors already mentioned. This work may serve as a template for extensions of the many results in optimal control to the measure differential inclusion framework. Broadly speaking, stratified necessary conditions are useful in the following two ways:

1. Differential inclusion maps which are not Lipschitz but are "nearly" Lipschitz, in the sense that they are pseudo-Lipschitz for a sequence of radius functions growing to infinity, still yield some global necessary conditions in the manner of Lipschitz maps. As Clarke notes in [17], the relation of these radius functions to their respective Lipschitz moduli need not be linear, and moreover this may all occur in the absence of convexity.

2. When global conditions are not available, necessary conditions for optimality can be developed in a special local sense. Consider, for example, a differential inclusion map whose values are sets strictly separated into two or more pieces. Such maps are not Lipschitz, but may be pseudo-Lipschitz if those pieces are sufficiently regular. A radius function about a reference trajectory permits focus on the single piece where the reference trajectory takes its velocity. We will exploit this fact in Chapter 6.

Chapter 6

Applications via Measure Constraints

Theorems 5.4 and 5.5 permit significant freedom in the choice of measure; apart from the standard exclusion of non-regular measures, the cone K is the only major restriction. In most applications involving impulses, however, some limits on the impulsive behaviour arise naturally in modeling.

The most common modeling restriction in impulsive and hybrid systems, when expressed in terms of a measure-driven framework, is that the impulsive dynamics be purely impulsive: there are no large velocity choices near the "infinite" velocity spikes of the impulse, so the state is either jumping or flowing, but never approximating a jump in a continuous manner. This is unlike the systems modeled in space navigation [34] or in impact and friction between rigid bodies, already set in terms of measure differential inclusions [60, 61], where such intermediate velocities make sense. Rather, this is the effect produced by a system such as (3.1) in Chapter 3, used to model a wide variety of phenomena, including:

1. Resource control [18] and integrated pest management strategies [25, 63].

2. Chaotic system stabilization/synchronization, for example chaotic circuits used in cryptography [70, 71].

3. Switched systems, where discrete transitions occur between continuous operating modes [69].

4. Hybrid systems, where impulses may be forced or optional, according to the state [26, 27].

5. Multiprocesses, treated as concatenated subprocesses with boundary conditions at discrete transition points [18, 19].

Each of the last three cases is a framework used in modeling, and there is overlap in what they are able to model. Indeed, the multiprocess framework of Clarke and Vinter, using bounded, state-Lipschitz, convex differential inclusions, covers a broad set of models in which a finite number of discrete transitions occur (their timing need not be fixed) [19]. In this and the hybrid and switched system frameworks, a significant issue is the accumulation of discrete transitions when there are infinitely many of them. As mentioned in Chapter 3, Goebel and Teel have proposed a solution concept for hybrid systems that can account for this "Zeno behaviour" [53], but our measure-driven approach handles such accumulation quite naturally [73].

The existing literature in scientific applications, regarding, for example, pest management and chaotic circuit synchronization, focuses on the existence of equilibria and possible stabilization strategies, with some use of ad hoc methods to draw conclusions about impulsive optimal control [5, 8, 29, 35, 65]. In the following section, we propose a general measure-driven framework that is appropriate for these problems, with a view towards optimal control strategies. This will be achieved by a careful analysis of what we call "measure constraints" and their effect on our impulsive trajectories. Constraining the total variation of the measure has been considered in earlier treatments [46].

The conversion of a given jump map is not a straightforward process; there may be multiple ways to interpret the state discontinuities, and several modeling choices can produce identical trajectories. In many models, however, the conversion would make explicit the structure of the "fast" dynamics, which may already have some physical basis. Our necessary conditions then provide insight into both the "slow" and "fast" components of an optimal control strategy. The basic idea, that systems in a measure-driven framework may be analyzed in terms of a stretched-time system, is used once again. We note that our stretched system will have an intrinsic lack of convexity even in simple models, and we appeal again to the relatively recent results in optimal control of non-convex systems featuring pseudo-Lipschitz regularity of the data.

6.1 Measure Constraints

We consider measure constraints for impulses in three broad categories:

1. Imposing an overall budget on the fast dynamics over the course of the planning horizon.

2. Restricting or forcing impulsive behaviour at specific times.

3. Employing our notion of state-dependent impulses from Chapter 4.

These three categories are discussed in their own sections. A key issue is that placing constraints on the measure, apart from the overall "impulse budget" listed first, imposes constraints on the control set in the stretched-time problem by taking appropriate subsets of B_1^m ∩ K. As long as these are imposed and translated in the correct manner, the resulting trajectories in the corresponding free end-time control problem will still obey standard hypotheses, just with a shrunken control set; pseudo-Lipschitz regularity and the explicit use of radius functions will be essential to the subsequent analysis.
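To make the preceding point concrete, the following schematic sketch (an added illustration; the helper beta_admissible, the tolerance, and the choice of K as the nonnegative orthant are assumptions for this example) shows how two of the constraint categories shrink the stretched-time control set B_1^m ∩ K.

```python
import numpy as np

# Added sketch of measure constraints as restrictions on the
# stretched-time control set B_1^m ∩ K; here K is illustratively taken
# to be the nonnegative orthant.
def beta_admissible(beta, theta, constraint, T_imp=(), tol=1e-9):
    beta = np.asarray(beta, dtype=float)
    norm1 = np.abs(beta).sum()
    in_ball_and_cone = (norm1 <= 1.0 + tol) and bool(np.all(beta >= -tol))
    if constraint == "unrestricted":       # the standard MDI of Chapter 5
        return in_ball_and_cone
    if constraint == "restricted_times":   # pure jumps only, and only on T_imp
        pure_jump = abs(norm1 - 1.0) <= tol
        no_jump = norm1 <= tol
        at_allowed_time = any(abs(theta - t) <= tol for t in T_imp)
        return in_ball_and_cone and (no_jump or (pure_jump and at_allowed_time))
    raise ValueError("unknown constraint type")

# Away from the allowed impulse times, only beta = 0 survives the
# restriction |beta|_1 in {0, 1} * chi_T(theta).
print(beta_admissible([0.3, 0.2], theta=0.5, constraint="unrestricted"))              # True
print(beta_admissible([0.3, 0.2], theta=0.5, constraint="restricted_times"))          # False
print(beta_admissible([0.6, 0.4], theta=0.5, constraint="restricted_times", T_imp=(0.5,)))  # True
```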
Our ultimate goal in Section 6.1.3 is to demonstrate that a wide variety of jump-map systems appearing in appli- cations can be translated to our measure-driven framework, where results such as those of the preceding chapters (and their attendant subcases and extensions) are applicable. 6.1.1 Impulse Budget We may think of an “impulse budget” as an overall budget of the fast dy- namics that is available to be used. With no limit to the overall variation of the measures, instants may be weighted arbitrarily heavily by the measure with potentially negligible changes in state. 102 6.1. Measure Constraints In our typical measure differential inclusion from the previous chapters, dx ∈ F (t, x)dt + G(t, x)dµ(t) a.e. t ∈ [0, T ], we consider the constraint T + ∫ [0,T ] dµ̄ ≤ Smax (6.1) as a simple requirement which achieves this goal. We recall from our de- scription of the solution concept that the integral on the left corresponds to the maximum value attained by the function η̄, the time reparametrization multifunction which is the “inverse” of θ. Indeed, the value of this integral is precisely the interval length in the stretched-time system under our usual time reparametrization. This explains the choice of notation for the bound: we could describe the constraint above by saying that feasible measures must produce a stretched-time system on an interval [0, S] with T ≤ S ≤ Smax. Implementation of this type of constraint in the framework of Theo- rem 5.4 is thus straightforward: we modify the state endpoint set C defined in (5.25) in the auxiliary free-end-time problem such that the interval end- point (a choice variable in that problem) must lie in [T, Smax]. The major effect is on the constant component r of the costate: now we obtain sign information based on the position of the optimal interval endpoint, S∗, in [T, Smax]. Specifically, r ≤ 0 if S∗ = T , r ≥ 0 if S∗ = Smax, and r = 0 if S∗ lies in (T, Smax). Corollary 6.4. The conclusions of Theorem 5.4 still hold when measures for all solutions are subject to the budget constraint (6.1), with the additional information that h ≤ 0 if S∗ = T , h ≥ 0 if S∗ = Smax, and h = 0 if S∗ ∈ (T, Smax), where S∗ is the interval endpoint for the optimal solution (x∗, X∗) under the Phase 1 time reparametrization. This type of constraint appears in the oldest impulsive problem models, such as fuel use in space navigation [34], where impulse timing may or may not have been flexible, but where an overall budget is a core assumption. 103 6.1. Measure Constraints 6.1.2 Time-Dependent Measure Constraints In this section we focus on the introduction of strictly impulsive phenom- ena (i.e., where only the discrete part of the measure is nonzero). We will discuss measure constraints associated with timing, and combine the dif- ferent flavours of constraints in a new theorem for necessary conditions in Section 6.1.3. Fixed impulse schedule Suppose that the measure for all trajectories is fixed a priori to one that is nonzero only in its discrete component and only at a finite number of times. Even if multiple graph completions are available, this still simplifies our analysis significantly, as the time reparametrization is common to all trajectories and their impulse subsystems. Translation of all feasible trajec- tories results in a trivial free end-time problem where all intervals are of the same length; in other words, the resulting time-stretched problem is on a fixed interval. We consider the MDI dx ∈ F (t, x)dt + G(t, x)dρ(t) a.e. 
t ∈ [0, T ], (6.2) where ρ is a fixed measure with ρac = ρsc = 0 and taking values in a closed, convex cone K, and with discrete support on the finite set τ = {τ1, τ2, · · · , τN}. Any solution (x,X) to this MDI will correspond to a solu- tion (θ, y) of the stretched time system: [ θ̇(s) ẏ(s) ] ∈  {[ 0 G(τi, y(s))β ]∣∣∣∣∣β ∈ B (s− η(τ−i ), τi) } if θ = τi ∈ τ and s ∈ η(τ−i ) + [0, ρ̄({τi})), or{[ 1 F (θ(s), y(s)) ]} otherwise, where we refer to B as a fixed impulse schedule associated to the measure ρ 104 6.1. Measure Constraints and its impulse time set τ . For each τi ∈ τ , we define B as a selection: B(σ, τi) ⊆ { β ∈ BdimK1 ∣∣∣ |β|1 = 1} ∩K ∀σ ∈ [0, ρ̄({τi})]. This definition builds in the flexibility to fix B as single-valued for some or all of its outputs, which represents further fixed structure in any graph completions for solutions to (6.2). Alternatively, we could use a set equality rather than inclusion in the selection above for B; this leaves the full range of graph completions for ρ available, even though they must occur at the fixed times in τ . We remark that a forced impulse may be chosen to have no effect on the system dynamics if the zero matrix is available in G(τi, y), which leads to greater flexibility in modeling. Restricted impulse times Rather than forcing impulses to occur, as in the previous section, we now consider problems where impulse behaviour is optional, but is restricted to strictly impulsive dynamics; again, only the atomic part of the measure will be permitted to be nonzero. We also allow for the impulsive action to be restricted to certain times. We consider the MDI dx ∈ F (t, x)dt + G(t, x)dν(t) a.e. t ∈ [0, T ], (6.3) where the measure ν takes values in closed convex cone Kν , and has νac = νsc = 0, and supp(ν) ⊆ T , where T is a given closed subset of [0, T ] which restricts the impulse times of ν. The set T need not be a finite set; just as in the standard, free-measure, case, solutions are only valid if the time- reparametrization η arising from ν is bounded. Any solution (x,X) to (6.3) will correspond to a solution (θ, y) with velocities in the stretched-time sys- tem given by:{[ (1− |β|1) F (θ, y) (1− |β|1) +G(θ, y)β ]∣∣∣∣∣β ∈ BdimK1 ∩K, |β|1 ∈ {0, 1}χT (θ) } . 105 6.1. Measure Constraints (Here, χA is the characteristic function for set A that takes the value 1 on the set A and is zero otherwise.) Precedence for multiple measures We next combine the impulse concepts above with our standard unrestricted measure. In doing so, we may choose to impose further structure on the graph completions arising from the multiple measures involved where their supports overlap. We consider the MDI dx ∈ F (t, x)dt + G(t, x)dµ(t) + H(t, x)dν(t) a.e. t ∈ [0, T ], (6.4) where the multifunctions G and H are each matrix-valued, and their re- spective measures are to be regular and signed, with µ taking values in the m-dimensional cone K and ν taking values in the r-dimensional cone J . Without any further assumptions, this is clearly equivalent to our standard MDI, but with the impulsive dynamics expressed in two pieces: for each (t, x), G(t, x) and H(t, x) could be combined to produce a block matrix paired with an (m+ r)-dimensional measure consisting of the elements of µ and ν and expressed in the form: dx ∈ F (t, x)dt + [G(t, x)|H(t, x)] d [ µ ν ] (t) a.e. t ∈ [0, T ]. (6.5) Indeed, we use this standard form so that we may access our standard solu- tion concept. 
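The equivalence between (6.4) and the block form (6.5) is purely an exercise in stacking, as the following small numerical sketch illustrates; the random matrices and increments are assumptions for this example only, and no cone structure is needed for the identity itself.

```python
import numpy as np

# Added numerical sketch of the rewriting (6.5): the two impulsive
# terms G dmu + H dnu can be folded into the block matrix [G | H]
# acting on the stacked increment (dmu, dnu).
rng = np.random.default_rng(1)
n, m, r = 3, 2, 2
G = rng.standard_normal((n, m))
H = rng.standard_normal((n, r))
dmu = rng.standard_normal(m)
dnu = rng.standard_normal(r)

separate = G @ dmu + H @ dnu
stacked = np.hstack([G, H]) @ np.concatenate([dmu, dnu])
print(np.allclose(separate, stacked))  # True
```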
In modeling, we may prefer certain impulsive actions to occur first during a jump instant, such as a budget reset at the beginning of the year before any new expenditure. To accommodate this structure, we introduce a notion of precedence for the measures involved that restricts the available solutions. Definition (Measure precedence). For any solution (x,X) of the MDI (6.4), the constraint “µ has precedence over ν” requires that, for any t such that µ({t}) 6= 0 and ν({t}) 6= 0, the subsystem Xt = (w(·),Υ(·), v(·)) 106 6.1. Measure Constraints satisfies Υ(s) = ψ(t−) + ∫ [η(t−),s] v(σ) dσ ẇ(s) ∈ [G(s, w(s))|H(s, w(s))] v(s), a.e. s ∈ η̄(t) where v(s) ∈ [ K 0 ] for s ∈ [η(t−), η(t−) + µ̄({t})) and v(s) ∈ [ 0 J ] for s ∈ [η(t−) + µ̄({t}), η(t)]. In effect, this precedence partitions the graph completion’s stretched- time interval η̄(t) and the dynamics with precedence operate “first” during the instant. We may define a notion of precedence for more than two mea- sures in a similar manner (below, we will have three), and note that the property is transitive. 6.1.3 Necessary Conditions for Systems Including Impulse-Only Dynamics In this section, we add the behaviour described in Section 6.1.2 to our stan- dard MDI, creating a system that can accommodate a more general type of model. We first recall our definition for the distance M between solutions (x1, X1) and (x2, X2): M ((x1, X1), (x2, X2)) = ∣∣∣∣yext1 − yext2 ∣∣∣∣W 1,1 + |S1 − S2| = |x1(0−)− x2(0−)| + ∫ max{max θ−11 (T ),max θ−12 (T )} 0 ∣∣ẏext1 (s)− ẏext2 (s)∣∣ ds + ∣∣max θ−11 (T )−max θ−12 (T )∣∣ , where yi is the time-stretched trajectory for xi using reparametrization θi, Si is the right endpoint of the stretched time interval (i.e., θi(Si) = T ), and yexti refers to the extension (if necessary) of yi to the interval [0, Sj ] by setting yexti (s) = y ext i (Si) for s ∈ [Si, Sj ], if Si < Sj . We thus consider (x∗, X∗) to be a local minimum in the impulsive problem if there exists some 107 6.1. Measure Constraints δ > 0 such that (x∗, X∗) gives the lowest cost function value over all (x,X) such that M ((x,X), (x∗, X∗)) ≤ δ Given initial data of a set C, a closed set T ⊆ [0, T ], a function `, multi- functions F , Gµ, Gν , and Gρ (the latter three matrix-set-valued), closed, convex cones Kµ and Kν and a measure ρ with fixed impulse schedule B in the style of Section 6.1.2 (whereby ρac = ρsc = 0), we consider the following optimization problem in impulsive control: (IP)  Minimize ` (x(0), x(T )) over (x,X) with x : [0, T ]→ Rn satisfying, a.e. t ∈ [0, T ], dx ∈ F (t, x)dt + Gµ(t, x)dµ(t) + Gν(t, x)dν(t) + Gρ(t, x)dρ(t) subject to: µ takes values in Kµ, ν takes values in Kν , νac = νsc = 0, supp(ν) ⊆ T , ρ has precedence over ν, ν has precedence over µ, (x(0), x(T )) ∈ C. All measures in the problem are assumed to be regular, signed and poten- tially vector-valued. We approach (IP) with the understanding that if any of Kµ, Kν , and/or B are/is empty, then the corresponding measure is iden- tically zero. In other words, the various combinations of styles of measures are all under consideration in the theorem below. We consider (x∗, X∗) to be a local minimum in (IP) if there exists some δ > 0 such that (x∗, X∗) gives the lowest value for ` over all (x,X) such that M ((x,X), (x∗, X∗)) ≤ δ. With this enhanced version of the setup in Chapter 5, we are able to extend Theorem 5.4. 
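Computationally, the precedence convention in (IP) amounts to a fixed ordering of the pieces of the stretched interval η̄(t). The helper below is an added illustrative sketch (its name and signature are assumptions), splitting [η(t−), η(t)] into consecutive subintervals of lengths ρ̄({t}), ν̄({t}) and μ̄({t}), in that order.

```python
# Added sketch of the precedence used in (IP): when rho, nu and mu all
# place atoms at the same instant t, the stretched interval eta_bar(t)
# is traversed with the rho dynamics first, then nu, then mu.
def precedence_partition(eta_t_minus, rho_tv, nu_tv, mu_tv):
    s0 = eta_t_minus
    s1 = s0 + rho_tv            # rho has precedence over nu
    s2 = s1 + nu_tv             # nu has precedence over mu
    s3 = s2 + mu_tv             # s3 = eta(t)
    return {"rho": (s0, s1), "nu": (s1, s2), "mu": (s2, s3)}

# Atom total variations 0.5, 0.25 and 0.25 at an instant with eta(t-) = 2:
print(precedence_partition(2.0, 0.5, 0.25, 0.25))
# {'rho': (2.0, 2.5), 'nu': (2.5, 2.75), 'mu': (2.75, 3.0)}
```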
Our strategy is the same as that of Chapter 5: we apply the time reparametrization to all trajectories local to the optimal one and describe them as free interval-endpoint trajectories that share a common differential inclusion map sufficiently regular to qualify for Theorem 5.7, and thus produce necessary conditions from Theorem 5.3 (due to Clarke [17]). 108 6.1. Measure Constraints We finish by a translation back through the two transformations to obtain our stated conclusions. Theorem 6.8. Assume F is nonempty, closed-valued, locally bounded and locally Lipschitz with Lipschitz modulus function kF . Assume Gµ, Gν and Gρ are closed-valued and each obeys a joint pseudo-Lipschitz condition and essential infimum growth restriction for the same radius RG and with Lipschitz modulus kG, with RG bounded below by the constant R0 > 0 and both kG and kF bounded above by the constant k0 > 0. Assume that either 1. the radius RG is more than double the local bound for F , or 2. the combinations F + Gr, for each r ∈ Kµ, uniformly obey the joint pseudo-Lipschitz condition and the essential infimum growth restric- tion of radius Rµ, and with Lipschitz modulus of kµ, with kµ also bounded above by the constant k0 > 0. Suppose that (x∗, X∗) gives a local minimum in (IP), where X∗ is a graph completion for x∗ with associated measures µ∗, ν∗ and ρ. Then there exist a radius function R0 < 1, a number λ0 ∈ {0, 1} and impulsive arcs p : [0, T ]×Rn → Rn and q : [0, T ]×Rn → R with graph completions P and Q, respectively, both associated to µ∗, ν∗ and ρ, which satisfy: • Transversality condition (T ) ( p(0−),−p(T )) ∈ λ0∂L` (x∗(0−), x∗(T )) + NLC (x∗(0−), x∗(T )) • Euler inclusion (E), for almost every t ∈ [0, T ]: (−q̇(t), ṗ(t)) ∈ co { (ω1, ω2) : (ω1, ω2, p(t)) ∈ NLgph(F+GµKµ) (t, x∗(t), ẋ∗(t)) } • Radius R0 Weierstrass condition (WR0): 〈p(t), v〉 ≤ 〈p(t), ẋ∗(t)〉 , ∀v ∈ (F +Gµκ(t)) ∩ B [ẋ∗(t);R0(t)] (6.6) 109 6.1. Measure Constraints where the set-valued function κ(t) = {γ s.t. |γ| = |µ̇∗ac(t)|}; µ̇∗ac(t) is the time derivative of the absolutely continuous part of µ∗ at t. • The graph completions X∗, P and Q consist of subsystems for each t where at least one of µ∗({t}), ν∗({t}), ρ({t}) is nonzero. These subsystems involve arcs yt, pt and qt, satisfying, at impulse instant t, an Euler inclusion, (Et), and Weierstrass condition of radius R defined in (6.17), (W tR), as follows (with all normal cones evaluated at the optimal path base point (t, yt(s), ẏt(s))),: – for almost every s ∈ [η(t−), η(t−) + ρ̄({t})), (−q̇t(s), ṗt(s)) ∈ co { (ω, v) : (ω, v, pt(s)) ∈ NLgph(Gρ(·,·)bρ∗(s)) } , (6.7) 〈pt(s), v〉 ≤ 〈pt(s), yt(s)〉 , ∀v ∈ [Gρ(θ∗(s), y∗(s))B (s− η(θ∗(s)−), θ∗(s))] ∩ B [ẏt(s);R(s)] ; (6.8) – for almost every s ∈ [η(t−) + ρ̄({t}), η(t−) + ρ̄({t}) + ν̄∗({t})], (−q̇t(s), ṗt(s)) ∈ co { (ω, v) : (ω, v, pt(s)) ∈ NLgph(Gν(·,·)bν∗(s)) } , (6.9) 〈pt(s), v〉 ≤ 〈pt(s), yt(s)〉 , ∀v ∈ [Gν(θ∗(s), y∗(s))B] ∩ B [ẏt(s);R(s)] , B = { β ∈ Kν ∩ BdimKν1 such that |β|1 = 1 } ; (6.10) 110 6.1. Measure Constraints – and, for almost every s ∈ [η(t−) + ρ̄({t}) + ν̄∗({t}), η(t)] (−q̇t(s), ṗt(s)) ∈ co { (ω, v) : (ω, v, pt(s)) ∈ NLgph(Gµ(·,·)bµ∗ (s)) } , (6.11) 〈pt(s), v〉 ≤ 〈pt(s), yt(s)〉 , ∀v ∈ [ Gµ(θ∗(s), y∗(s)) ( Kµ ∩ BdimKµ1 )] ∩ B [ẏt(s);R(s)] . 
(6.12) where η(t) = t+ dimKµ∑ j=1 ∫ t 0 µ̄∗j(dτ) + dimKν∑ j=1 ∫ t 0 ν̄∗j(dτ) + dimB∑ j=1 ∫ t 0 ρ̄j(dτ) represents a time reparametrization (µ∗j being the j-th component of measure µ∗) and b ρ ∗, bν∗ and b µ ∗ satisfy∫ η(t−)+ρ̄({t}) η(t−) bρ∗(s)ds = ρ({t}), ∫ η(t−)+ρ̄({t})+ν̄∗({t}) η(t−)+ρ̄({t}) bν∗(s)ds = ν∗({t}), ∫ η(t) η(t−)+ρ̄({t})+ν̄∗({t}) bµ∗ (s)ds = µ∗({t}). • Nontriviality (N): (λ0, q(t), p(t), qt, pt) 6= 0 ∀t ∈ [0, T ], where we interpret the subsystem trajectories qt or pt to be zero if they are zero over [η(t−), η(t)]. Proof. Our first step is apply our usual time-reparametrization to the impul- sive trajectories local to (x∗, X∗) and establish the stretched-time differential inclusion. We develop the differential inclusion map, which we again call F , in stages, noting that it will depend on the stretched-time parameter s due to the fixed impulses of ρ. 111 6.1. Measure Constraints We appeal to the work in Section 6.1.2 to define Fρ(s, θ, y) =    0Gρ(θ, y)β (β, 0, 0)  ∣∣∣∣∣∣∣β ∈ B (s− η(θ−), θ)  if s ∈ [η(θ−), η(θ−) + ρ̄({θ})), ∅ otherwise. (6.13) Here, as in the definitions below, the third component (in this case, of the form (β, 0, 0)) has dimension m = dimB + dimKν + dimKµ. According to Section 6.1.2, we may define Fν(θ, y) =    0Gν (θ, y)β (0, β, 0)  ∣∣∣∣∣∣∣β ∈ Kν ∩ BdimKν1 , |β|1 = 1  if θ ∈ T , ∅ otherwise, (6.14) to describe the impulses restricted to t ∈ T . Finally, we have our unrestricted measure term, the only one permitted to have non-impulsive dynamics, which must remain in combination with the continuous dynamics represented by F : Fµ(θ, y) =   1− |β|1F (θ, y) (1− |β|1) + Gµ (θ, y)β (0, 0, β)  ∣∣∣∣∣∣∣β ∈ Kµ ∩ BdimKµ1  (6.15) We may then define F piecewise using (6.13), (6.14) and (6.15): F(s, θ, y) =  Fρ(s, θ, y) if s ∈ [η(θ−), η(θ−) + ρ̄({θ})), Fν(θ, y) if s ∈ [η(θ−) + ρ̄({θ}), η(θ)− µ̄({θ})], Fµ(θ, y) otherwise, (6.16) where we have accounted for the precedence of the measures. 112 6.1. Measure Constraints This new map is more complicated than (3.5), the stretched-time inclu- sion map used in Chapter 5, in two significant ways. Firstly, it incorporates the three measures in order, allowing only one to be active at a time. Sec- ondly, it has the third coordinate, which tracks the β from each case (they may differ in dimension), a quantity that was not tracked in Chapter 5 but here will permit us to separate the dynamics associated with the different measures during our analysis; this component in each case must be a dis- tance of at least 1 in the m-dimensional 1-norm from such components in the other two cases. The piecewise definition for F may thus be seen as a union of three disjoint graphs, a fact we will exploit. We proceed by establishing (JPLC) and (ESSINF) for F . We revisit Lemmas 5.5 and 5.6 of Chapter 5 in defining the radius function R̃(s) =  RG(θ∗(s))(α−1α ) 1+2Fmax(θ∗(s)) , if θ̇∗(s) ≤ RG(θ∗(s)) 2αFmax(θ∗(s)) , Rµ(θ∗(s))θ̇∗(s) Rµ(θ∗(s))+1+ √ 1+|ẋ∗(θ∗(s))|2 , otherwise, for any α > 1. We define our new R for use in the theorem by R(s) = min { 1√ m+ 1 , R̃(s) } , (6.17) and pair it with the function κ(s) = { kF (θ∗(s)) + kG(θ∗(s)), if θ̇∗(s) ≤ RG(θ∗(s))2αFmax(θ∗(s)) , kµ(θ∗(s)), otherwise, If we are now given (s1, θ1, y1) and (s2, θ2, y2) within ε of (s, θ∗(s), y∗(s)), where ẏ∗(s) = f∗(s)θ̇∗(s) + g∗(s)b∗(s), then any  zv b  ∈ F(s1, θ1, y1) may be identified as belonging to one of (6.13), (6.14) or (6.15) based on b; 113 6.1. 
Measure Constraints if it has the form (0, β, 0), for example, then we deduce that zv b  ∈ Fν(θ1, y1). (We note that β = 0 indicates inclusion in Fµ(θ1, y1).) In particular, if we pick an arbitrary z1v1 b1  ∈ F(s1, θ1, y1) ∩ B [(s, θ∗(s), y∗(s));R(s)] , then the maximum value of 1√ m+1 for R ensures that (z1, v1, b1) is in the same piece (6.13), (6.14) or (6.15) as b∗(s). In the case of either (6.13) or (6.14), we note that θ̇∗(s) must be zero, and since R(s) < RG(s) for sure, we are assured of (JPLC). In the other case, (6.15), we may repeat the arguments of Lemmas 5.5 and 5.6 to conclude (JPLC) in this case as well. The (ESSINF) condition for F then follows directly. With sufficient regularity now established, we use F as the differential inclusion map for a free end-time problem where the converted optimal trajectory, defined on [0, S∗], is still optimal (the cost function is reused). The free end-time problem will use the endpoint set C defined in (5.25): C = { {0} × [ {0} c0 ] × {0} × [T,∞)× [ {T} cT ] × Rm ∣∣∣∣∣ (c0, cT ) ∈ C } . We apply Theorem 5.7 to obtain arcs h, q ∈W 1,1 ([0, S∗];R) , p ∈W 1,1 ([0, S∗];Rn) , u ∈W 1,1 ([0, S∗];Rm) and a number λ0 ∈ {0, 1} such that the following properties hold: • Nontriviality (N free): (λ0, q(s), p(s), u(s)) 6= 0 ∀s ∈ [0, S∗] 114 6.1. Measure Constraints • Transversality (T free):  −h(0) −q(0) p(0) −u(0)  ,  h(S∗) q(S∗) −p(S∗) u(S∗)   ∈ ∂Lλ0` (y∗(0), y∗(S∗)) + NLC   0 0 y∗(0) 0  ,  S∗ T y∗(S∗) w∗(S∗)   • Euler inclusion (Efree) for almost every s ∈ [0, S∗]:  −ḣ(s) −q̇(s) ṗ(s) −u̇(s)  ∈ co   α1 α2 α3 α4  :   α1 α2 α3 α4 −q(s) p(s) −u(s)   ∈ NLgphF   s θ∗(s) y∗(s) w∗(s) θ̇∗(s) ẏ∗(s) ẇ∗(s)    • Radius R Weierstrass condition (W freeR ): ∀v ∈ F (s, θ∗(s), y∗(s)) ∩ B [ẋ∗(t);R(s)], 〈 −q(s)p(s) −u(s)  , v〉 ≤ 〈  −q(s)p(s) −u(s)  ,  θ̇∗(s)ẏ∗(s) ẇ∗(s) 〉 a.e. s ∈ [0, S∗] Our first step in translating these conditions back to the problem (IP) is to deconstruct (Efree) via analysis of proximal normals. We let α = (α1, α2, α3, α4,−q(s), p(s),−u(s)) ∈ NLgphF ( s, θ∗(s), y∗(s), w∗(s), θ̇∗(s), ẏ∗(s), ẇ∗(s) ) . 115 6.1. Measure Constraints By definition, there exists a sequence αi = (α1i, α2i, α3i, α4i,−qi, pi,−ui) converging to α and satisfying (α1i, α2i, α3i, α4i,−qi, pi,−ui) ∈ NPgphF (si, θi, yi, wi, zi, vi, βi) for some sequence of vectors γi = (si, θi, yi, wi, zi, vi, βi) with limit at the base point of the cone for α, which we will denote by γ∗. For each i, we look at the proximal normals using γi as a base point and then examine limits of such normals in determining the limit normal cone at γ∗. Inclusion in the proximal normal cone is equivalent to the existence of a quadratic function Qi = Qi(σ, θ, y, w, z, v, β) = Qi(γ) with a minimum of 0 at γi such that Qi(γ) ≥ 〈αi, (γ − γi)〉 under the constraint that (z, v, β) ∈ F(s, θ, y). We conclude that the func- tion W 0i (γ) = Qi(γ)− 〈αi, γ〉 , constrained by (z, v, β) ∈ F(s, θ, y), has a minimum of 0 at γi. We sub- stitute z = 1 − |β|1, since this holds whenever the constraint is satisfied. 
Our next step is determined by the value of ẇ∗(s); this particular vector determines which of Fρ(s, θ∗(s), y∗(s)), Fν(θ∗(s), y∗(s)), or Fµ(θ∗(s), y∗(s)) contains (θ̇∗(s), ẏ∗(s), ẇ∗(s)); ẇ∗(s) is of the form (b ρ ∗(s), bν∗(s), b µ ∗ (s)), but for any s, at least two of the components must be zero. Since Fρ, Fν , and Fµ are identifiable by this property, the sequence (zi, vi, βi) must be con- tained in this same “piece” of F for sufficiently large i (we may assume all i by taking an appropriate subsequence). We will calculate explicitly in the case where Fµ is in use; the other two are similar. Assuming (θ̇∗(s), ẏ∗(s), ẇ∗(s)) ∈ Fµ(θ∗(s), y∗(s)), we have (zi, vi, βi) ∈ Fµ(θi, yi). Then βi = (0, 0, bi) for some bi ∈ RdimKµ , and we may define the function Wi (σ, θ, y, w, v) = W 0i (σ, θ, y, w, 1− |βi|, v, βi) +δgphF (·,·)(1−|bi|1)+Gµ(·,·)bi(θ, y, v), 116 6.1. Measure Constraints which has an unconstrained minimum at (si, θi, yi, wi, vi). Hence (0, 0, 0, 0, 0) must be an element of the subgradient ∂Wi(si, θi, yi, wi, vi), while that sub- gradient is in turn a subset of −α1i −α2i −α3i −α4i −pi  +   0 c2 c3 0 c5  ∣∣∣∣∣∣∣∣∣∣∣∣  c2c3 c5  ∈ NPgphF (·,·)(1−|bi|1)+Gµ(·,·)bi  θiyi vi   by the Sum Rule. We deduce that α1i = 0, α4i = 0 and (α2i, α3i, pi) ∈ NPgphF (·,·)(1−|bi|1)+Gµ(·,·)bi(θi, yi, vi). Passing to the limit, we obtain a time-stretched version of the Euler condi- tion: (−q̇(s), ṗ(s)) ∈ co (α1, α2) ∣∣∣∣∣∣∣  α1α2 p(s)  ∈ NL gphF (·,·)θ̇∗(s)+Gµ(·,·)bµ∗ (s)   θ∗(s)y∗(s) ẏ∗(s)    , (6.18) as well as the information that ṙ(s) = 0 and u̇(s) = 0 for the s under consideration. The same method as in the proof of Theorem 5.4 may be applied to (6.18) in deriving the Euler inclusions involving F + GµKµ and Gµ. We apply a similar calculation involving a function like Wi to each of the cases where Fν and Fρ are in use to obtain (−q̇(s), ṗ(s)) ∈ co { (α1, α2) ∣∣∣(α1, α2, p(s)) ∈ NLgphGν(·,·)bν∗(s) (θ∗(s), y∗(s), ẏ∗(s))} (6.19) 117 6.1. Measure Constraints when (θ̇∗(s), ẏ∗(s), ẇ∗(s)) ∈ Fν(θ∗(s), y∗(s)), and (−q̇(s), ṗ(s)) ∈ co { (α1, α2) ∣∣∣(α1, α2, p(s)) ∈ NLgphGρ(·,·)bρ∗(s) (θ∗(s), y∗(s), ẏ∗(s))} (6.20) when (θ̇∗(s), ẏ∗(s), ẇ∗(s)) ∈ Fρ(s, θ∗(s), y∗(s)). We note that ṙ(s) = 0 and u̇(s) = 0 for each s in these cases as well to conclude that ṙ(s) = 0 and u̇(s) = 0 for almost every s ∈ [0, S∗]. In examining the transversality condition, (T free), we discover that u(S∗) = 0, hence u ≡ 0, and we may simplify this condition to pick out q and p: (−q(0), p(0), q(S∗),−p(S∗)) ∈ ∂Lλ0` (y∗(0), y∗(S∗)) + NLC (0, y∗(0), T, y∗(S∗)) . Finally, we examine the Weierstrass condition, (W freeR ). We define R0(θ∗(s))θ̇∗(s) = R(s) (6.21) and distinguish between the cases of the condition listed in the theorem statement based on, for each s, the value of β∗(s), where the optimal tra- jectory’s velocity is given by v∗(s) =  θ̇∗(s)ẏ∗(s) ẇ∗(s)  =  1− |β∗(s)|1f∗(s)(1− |β∗(s)|1) + g∗(s)β∗(s) β∗(s)  , where we define the term f∗(s)(1− |β∗(s)|1) to be zero if v∗(s) ∈ Fν(θ∗(s), y∗(s)) or v∗(s) ∈ Fρ(s, θ∗(s), y∗(s)). If v∗(s) ∈ Fρ(s, θ∗(s), y∗(s)), then the separation due to the third coordinate 118 6.1. Measure Constraints (distance from β∗(s)) of at most R(s) implies that F   sθ∗(s) y∗(s)   ∩ B   θ̇∗(s)ẏ∗(s) ẇ∗(s)  ;R(s)  ⊆ Fρ(θ∗(s), y∗(s)), whereby we conclude that zv b  ∈ F   sθ∗(s) y∗(s)   ∩ B   θ̇∗(s)ẏ∗(s) ẇ∗(s)  ;R(s)  implies v ∈ Gρ(θ∗(s), y∗(s))B (s− η(θ−), θ) . 
(6.22) Next, we apply (W freeR ), and recalling u ≡ 0, to obtain the inequality: −zq(s) + 〈v, p(s)〉 ≤ −θ̇∗(s)q(s) + 〈ẏ∗(s), p(s)〉 for all (z, v, b) as in (6.22); the inclusion (6.22) also implies that z = 0, which establishes (6.8). The approach for v∗(s) ∈ Fν(θ∗(s), y∗(s)) to obtain (6.10) is similar, and the case v∗(s) ∈ Fµ(θ∗(s), y∗(s)) is again similar in the subsystem (where z = 0) for (6.12) and follows the proof of Theorem 5.4 for the t-time statement (6.6). We remark that rearranging the measure precedence in (PI) for Theo- rem 6.8 will produce the same conclusions, re-ordered as appropriate. We also note the omission of the global conclusions like those in Theorem 5.4. The “impulse-only” dynamics result from using edges of the unit 1-ball, which renders the differential inclusion map F non-convex and at best jointly pseudo-Lipschitz in its variables; the localization of velocities provided by the radius function is crucial to our extension of regularity and translation of necessary conditions in the proof. We acknowledge that some simplification occurs in the autonomous case if there is no fixed-impulse schedule: 119 6.1. Measure Constraints Corollary 6.5. In addition to the hypotheses of Theorem 6.8, assume that F (t, x) = F (x), Gν(t, x) = Gν(x) and Gµ(t, x) = Gµ(x) only, and hence each satisfies (PLC) rather than (JPLC). Suppose further that the fixed im- pulse measure, ρ, is zero. Then the necessary conditions of Theorem 6.8 still hold, but q ≡ 0 and (E) may be replaced on both time scales by the Extended Euler inclusion (EE) describing only the derivative of p. We note further that an overall budget as described in Section 6.1.1 for the impulse dynamics is compatible with this extended framework. Corollary 6.6. The conclusions of Theorem 6.8 still hold when measures for all solutions are subject to the budget constraint (6.1), assuming that the budget is large enough to accommodate any fixed impulses. We have the additional conclusion that r ≤ 0 if S∗ = T , r ≥ 0 if S∗ = Smax, and r = 0 if S∗ ∈ (T, Smax), where S∗ is the interval endpoint for the optimal solution (x∗, X∗) under the Phase 1 time reparametrization. As a final note, one future investigation could attempt the use of more general “radii” than simple hyperspheres; it does not appear to be necessary that the radius in the “G-dimension” should be kept below 1√ m+1 , a condi- tion that is used only to localize in the “β-dimension” in F . While requiring a more general definition of radius function (adding to the complexity of the framework above), this would permit more range in the stratified condition of Theorem 6.8. 6.1.4 Case Study: Pest Control Using Natural Predators We now demonstrate an application of this framework to an existing problem in biological systems theory: the interaction of pests with pesticides and nat- ural predators. While the use of pesticides involves significant modeling in its own right, mixed methods, where a natural predator is introduced to con- trol the pest population in addition to a pesticide regime, are now preferred in many agricultural settings due to the environmental hazards of pesticides (despite their efficacy). Such a mixed strategy is called “Integrated Pest Management”, and has been demonstrated to be more cost-efficient than pesticide-only strategies for certain systems [63]. 120 6.1. Measure Constraints A variety of impulsive models for pest control have been proposed, in- cluding plant-pest-predator interaction, two predators and one prey, etc. 
All implement known models for population dynamics (Lotka-Volterra, Holling Type II, IV, and others), and all use an explicit jump map to model the impulse, which is a population shock due to pesticide, an artificial boost of the predator population, or harvesting that reduces all populations proportionally (for example, when the plants are the homes of the fauna in question).

Fixed impulse model

We first recreate a fixed impulse model, in the style of [14], where an impulse schedule of predator introduction is developed and compared to a previously attempted continuous-time dynamic programming model intended to mimic impulsive behaviour. The model given in [14], where the decision variable is represented by the sequence {u(0), u(20), . . . , u(180)}, is

Minimize x1(200) + ∑_{k=0}^{N−1} (x1(20k) + x2(20k)) over x : [0, 200] → R²
subject to:
  ẋ1 = x1(a − γx1 − αx2),
  ẋ2 = x2(−b + βx1),
  x1(τ_k^+) = x1(τ_k),  x2(τ_k^+) = x2(τ_k) + u(τ_k),  τ_k = 20k,  k = 0, 1, . . . , N − 1,
  x1(0) = 100,  x2(0) = 0,  x1(200) ≤ 20.    (6.23)

The case study involves soy plants attacked by caterpillars (Anticarsia gemmatalis), with population density represented by the variable x1, and a population of predators that may be artificially introduced, including wasps and spiders, with density represented by x2. Based on empirical observations, typical values of the parameters for the interaction of these species are a = 0.16, b = 0.19, α = 0.02, γ = 0.001 and β = 0.0029. There is a linear penalty incurred in terms of the cost of soy plant damage based on the caterpillar density, but here the soy plant population is assumed large enough that it is not in danger of elimination, and we ignore it in the model. One unit of predator density increase incurs a unit cost, and the rate of expense per unit of pest density is also a unit cost; these are scaled for the purposes of simulation, and we retain them here for direct comparison with [14]. Both forms of cost increase are calculated only when t is a multiple of 20, representing 20 days in the full 200-day time interval. This system has a stable equilibrium at x1 = 65.5172, x2 = 4.7241, though the level of pests deemed acceptable is x1 ≤ 20, and the pest level must be beneath this threshold by the end of the 200 days.

We first translate the above model to a problem of the form (IP). We consider the following pest management impulsive problem:

(PMIP)  Minimize x3(200) over (x, X) with x : [0, 200] → R³ satisfying
        dx ∈ F(x)dt + G(x)dρ(t)  a.e. t ∈ [0, 200],
        subject to: (x(0), x(T)) ∈ C,

where

  T = {0, 20, 40, . . . , 180, 200},
  F(x) = { (x1(a − γx1 − αx2), x2(−b + βx1), 0) },
  G = { (0, u, u + x1) : u ≥ 0 },
  ρ({τ}) = 1,  B(σ, τ) = {1}  ∀σ ∈ [0, 1], ∀τ ∈ T,
  C = {100} × {0} × {0} × [0, 20] × [0, ∞) × [0, ∞).

We note the following choices:

• We introduce the cost as a third state variable, a standard trick employed to adapt endpoint-cost results to more general cost functions.

• The cost due to pests is evaluated only at the impulse times, proportional to the number of pests at each such instant. This is measured at t = 200 days as well.

• Predators may be introduced every 20 days, but in the work below we assume that no new predators will be introduced at t = 200; doing so would clearly be suboptimal, since it incurs cost while the slow time scale is at an end, so there is no time to affect the pest population.
• We have restricted the number of pests at t = 200 to be 20 at most.

Remark: Our framework offers multiple ways to reproduce the model of [14]; for example, we could fix Gν = {(0, 1, 1)} and introduce a measure ν restricted to the impulse times (thereby taking the role of u above), and pair that with fixed impulses of Gρ = {(0, 0, x1)}. We will see in the Weierstrass condition below that it will be convenient to keep the flexibility in G rather than in the measure, and we have the further simplification in (PMIP) that the measure is scalar-valued.

The regularity requirements of Theorem 6.8 are all satisfied here. We now posit the existence of a (local) minimizing trajectory for (PMIP), x∗, which uses the constant ck at impulse k (we write c∗ if the k is implicit) and examine the necessary conditions for its optimality. We note that there are many trajectories with equivalent cost at the impulses where the c chosen is not constant over the stretched impulse interval; we are using x∗ as a representative that is easy to analyze.

We obtain a constant λ0 ∈ {0, 1} and arcs q and p. The transversality condition implies that p1(200) ≤ 0, p2(200) = 0 and p3(200) = −λ0. The Euler inclusion in t simplifies because F is single-valued and that single value represents an autonomous, differentiable function. We conclude that q ≡ 0 and that

  ṗ1 = −p1(a − 2γx1 − αx2) − βp2x2,
  ṗ2 = αx1p1 + bp2,
  ṗ3 = 0.

We have a similar story in s, where

  (ṗτ,1(s), ṗτ,2(s), ṗτ,3(s)) = (−pτ,3, 0, 0),  s ∈ [η(τ−), η(τ)],

so ṗ3 ≡ 0, and the interval endpoint condition forces p3 ≡ −λ0. We thus have the complete Hamiltonian system in t:

  ẋ1 = x1(a − γx1 − αx2),
  ẋ2 = x2(−b + βx1),
  ẋ3 = 0,
  ṗ1 = −p1(a − 2γx1 − αx2) − βp2x2,
  ṗ2 = αx1p1 + bp2,

and in s:

  (ẏ1(s), ẏ2(s), ẏ3(s), ṗt,1(s), ṗt,2(s)) ∈ { (0, c, c + y1(s), λ0, 0) : c ≥ 0 }.

Next, we examine the Weierstrass condition during the impulses. For v near ẏt(s),

  〈pt(s), v〉 ≤ 〈pt(s), ẏt(s)〉,
  c pt,2(s) + (c + x1(t−)) pt,3(s) ≤ c∗ pt,2(s) + (c∗ + x1(t−)) pt,3(s),
  c (pt,2(s) − λ0) ≤ c∗ (pt,2(s) − λ0),
  0 ≤ (c∗ − c)(pt,2(s) − λ0).

If c∗ = 0, we learn only that pt,2(s) ≤ λ0, but based on the sample in [14], we expect this would be suboptimal except at the final impulse at t = 200, where c∗ = 0 is clearly optimal, as the predators have no time to reduce the pest population. In fact, since p2(200) = 0, either c∗ = 0 at the final impulse or λ0 = 0, or both. We study the abnormal case where λ0 = 0 below. In any case, we assume that c∗ > 0 for every impulse before t = 200, and this implies that pt,2(s) = λ0 for every s (we recall that ṗt,2 ≡ 0). This imposes a significant constraint on the system, as the entry and exit point for p2 at every impulse (except the final one) must be λ0.

Now we consider the abnormal case, where λ0 = 0. This case includes any x∗ with a nonzero final impulse, which is clearly not optimal. This implies p3 ≡ 0 and p2 = 0 at every impulse instant, and there is no impulsive effect on p1. The p-system is linear, with time-varying coefficients, so if p1(200) = 0 then p ≡ 0 (since p2(200) = 0 occurs at the impulse instant 200), which violates our nontriviality condition. Hence we must have p1(200) < 0, which is possible only when x1(200) = 20. Our empirical results indicate that this endpoint condition for p1 prevents the connection of p2(180) = 0 and p2(200) = 0, so we reject this case.
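Before turning to the numerical results, it may help to see the bookkeeping of (PMIP) in executable form. The sketch below integrates the slow-time dynamics between release instants and applies the impulsive updates to the predator population and the cost state; it corresponds to the forward (state) pass of the shooting procedure described after Figure 6.1. It is only a minimal illustration under stated assumptions: the use of SciPy's solve_ivp, the helper names drift and simulate, the solver tolerances, and the constant trial schedule of 30 predators per release are choices made here, not constructions from the thesis.

```python
# Minimal forward simulation of (PMIP): pest-predator flow with predator releases
# every 20 days and the impulsive cost increment u + x1(t-) at each release instant.
# Parameter values are the empirical ones quoted in the text; everything else
# (solver, tolerances, trial schedule) is an assumption of this sketch.
import numpy as np
from scipy.integrate import solve_ivp

a, b, alpha, gamma, beta = 0.16, 0.19, 0.02, 0.001, 0.0029
# Interior equilibrium of the flow: x1 = b/beta ~ 65.52, x2 = (a - gamma*x1)/alpha ~ 4.72.

def drift(t, x):
    """Continuous (slow time scale) dynamics F(x) for pests x1 and predators x2."""
    x1, x2 = x
    return [x1 * (a - gamma * x1 - alpha * x2), x2 * (-b + beta * x1)]

def simulate(u_seq):
    """Run the impulsive system on [0, 200] with releases u_seq at t = 0, 20, ..., 180;
    a final impulse at t = 200 with zero release charges the terminal pest level."""
    x, cost = np.array([100.0, 0.0]), 0.0          # x1(0) = 100, x2(0) = 0
    for k, u in enumerate(u_seq):
        cost += u + x[0]                           # impulsive cost increment u + x1(t-)
        x[1] += u                                  # impulsive predator release
        sol = solve_ivp(drift, (20.0 * k, 20.0 * (k + 1)), x, rtol=1e-8, atol=1e-8)
        x = sol.y[:, -1].copy()
    cost += x[0]                                   # impulse at t = 200, no new predators
    return x, cost

x_final, J = simulate([30.0] * 10)                 # arbitrary trial schedule
print(f"x1(200) = {x_final[0]:.2f}, x2(200) = {x_final[1]:.2f}, cost = {J:.1f}")
```

The same forward pass, paired with a backward integration of the costate system from the transversality data at t = 200, is what each trial of the shooting step below evaluates.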
[Figure 6.1: Approximate optimal trajectories in the pest control system — four panels over t ∈ [0, 200]: pest density, predator density, and the costates p1 and p2.]

In the normal case, λ0 = 1, we implement a combined “shooting” and “relaxation” technique as follows:

Shooting:

1. For a given set of impulses, run the system forward from t = 0 in the x (state) variables using their initial conditions.

2. Using the conditions for the p (costate) variables at t = 200, run the system backwards in time with the calculated x values from the forward system (the state system is independent of the costate variables).

3. Determine the error by adding up the squares of the differences of the p2 values from 1 at the impulse times.

This is a “shooting” method for a system of equations with known values at different times; in our case, we know (for a given set of impulses) the initial value of the state x, the final value of the costate variable p1, and the value of the costate variable p2 at every impulse time. The goal is to run the system forward with guessed initial conditions, then backward using the known final data.

Relaxation:

1. We start with an initial impulse sequence.

2. For each impulse (there are ten), we use the Shooting method on the same impulse sequence where that one impulse is increased by 1 or decreased by 1. The result is a set of 21 system trials — one neutral, ten perturbed up, ten perturbed down — where the accumulated error in p2 is known for every perturbed impulse sequence and for the initial (neutral) impulse sequence.

3. We choose the impulse perturbation that produces the greatest drop in overall error, and iterate with this modification as the new “neutral” impulse sequence to be perturbed.

The result of the successive iterations is that the system “relaxes” towards an impulse sequence with minimal error in the p2 trajectory, corresponding to an optimal x (state) trajectory.

We obtained an optimal cost very similar to that of [14], using a predator impulse sequence of 39, 31, 26, 28, 28, 27, 29, 29, 28, 21, with the resulting state and costate variables plotted in Figure 6.1. We do not prove it here, but the empirical data also suggest that this is a unique solution of the above Hamiltonian system, so this is in fact the global minimizer. It is worth noting that this shooting and relaxation could have been applied to the minimization problem directly (relaxing to the minimal cost rather than to the correct costate arc); however, the costate system provides us with more guidance about an appropriate initial guess for the impulse sequence: we found a specific set of targets for the costate variable p2, but have no a priori goal for the objective function. Put another way, we know whether a perturbed arc is closer to being a desired costate arc based on how well it matches the targets at the impulse times, but in simply relaxing and looking for overall cost decrease, we do not have a way of quantifying how close we might be to a minimum.

6.2 State-Dependent Measure Constraints

One feature of hybrid systems is the notion of state-dependent impulses, where impulses may not be the direct result of a control choice, but are instead consequent to the system’s arrival at certain states.
A typical example would be impact dynamics, studied extensively by Stewart [60, 61]: some control over the continuous dynamics of an object may be permitted, but when it hits a surface an impulse is applied where, in the simplest model, the normal velocity is “instantly” reversed while the tangential velocity is preserved. As noted in [61], measures may be used in modeling such impact behaviour. However, some other models with “forced” or “optional” impulses may be described by explicit jump maps [53], and we seek to codify these in our measure-driven framework. To this end, we will define the impulsive measure (possibly paired with a graph completion) as in the closed-loop stabilization result of Chapter 4, as this was designed to produce jump arcs according to state.

This translation is most effective when the impulsive action may be described as a continuous change of state that is very fast. In the case of a bouncing ball in the impact problem, for example, the modeling via impulses is of use in contrasting the time scales of the falling versus bouncing ball, but even during the bounce, where the velocity sustains an “instant” change, we expect continuous physics at play on the fast time scale. Indeed, a model for impact based on elasticity and other material properties may be built into that term and included as part of the system description, whereas a jump-map type of description might omit this substructure. Conversely, such a translation may not always be of great modeling value: if there is no sensible substructure on the fast time scale and the state changes are purely discrete artifacts, as in the case of switched systems, the dynamics associated with the measure may be inappropriate, seem too artificial, or be impossible to achieve (these are concerns expressed in the introduction to [72]). However, these systems may still benefit from our theoretical results if they can be expressed in measure-driven terms.

In terms of our optimal control considerations, the stretched-time system will now involve state-dependent values for our selection from the cone (which was the full B_1^m ∩ K in the stretched-time system in the unconstrained case). This presents no obstacle in our method. We consider the following optimal control problem with state-dependent impulses:

(ISP)  Minimize ℓ(x(0), x(T)) over (x, X) with x : [0, T] → R^n satisfying, a.e. t ∈ [0, T],
         dx ∈ F(t, x)dt + Gµ(t, x)dµ(x; t) + Gρ(t, x)dρ(x; t)
       subject to: µ takes values in Kµ(x),
                   ρ takes values in Kρ(x),  ρac = ρsc = 0,
                   (x(0), x(T)) ∈ C.

All measures in the problem are assumed to be regular, signed and potentially vector-valued. We approach (ISP) with the understanding that where Kµ and Kρ consist only of the zero vector, the corresponding measure is zero; these are states where impulsive behaviour is explicitly prohibited. We will insist that

  Kµ(x) = K1 if x ∈ J1, {0} otherwise,    Kρ(x) = K2 if x ∈ J2, {0} otherwise,    (6.24)

where K1 and K2 are each a fixed, closed, convex cone, and J1 and J2 are fixed jump-permitting sets. We may define

  Fρ(θ, y) = { (0, Gρ(θ, y)β, (β, 0)) : β ∈ Kρ(y) ∩ B_1^{dim Kρ(y)}, |β|_1 = 1 },    (6.25)

  Fµ(θ, y) = { (1 − |β|_1, F(θ, y)(1 − |β|_1) + Gµ(θ, y)β, (0, β)) : β ∈ Kµ(y) ∩ B_1^{dim Kµ(y)} },    (6.26)

as components of the stretched-time system:

  F(θ, y) = Fρ(θ, y) ∪ Fµ(θ, y),    (6.27)

where we note the absence of the precedence introduced in Section 6.1.2.
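As a concrete reading of (6.24), the following event-driven sketch treats the bouncing ball discussed above: the impulsive action — here the simple jump-map model in which the normal velocity is reversed and scaled by a restitution factor — is available only on the impact set, which plays the role of the jump-permitting set J2, while away from that set only the continuous flow acts. The gravity and restitution values, the SciPy event mechanism, and the function names are assumptions of this illustration rather than constructions from the thesis.

```python
# Event-driven sketch of a state-dependent impulse (bouncing ball), assuming SciPy.
# The impulse (velocity reversal scaled by a restitution factor e) is permitted only
# when the trajectory reaches the impact set {h = 0, v < 0}; elsewhere the measure
# is forced to be zero and only the continuous flow acts.
import numpy as np
from scipy.integrate import solve_ivp

g, e = 9.81, 0.9                      # gravity and (assumed) coefficient of restitution

def flow(t, z):
    h, v = z
    return [v, -g]                    # continuous dynamics away from the jump set

def hit_ground(t, z):
    return z[0]                       # event function: zero on the impact surface
hit_ground.terminal = True
hit_ground.direction = -1             # trigger only on downward crossings (v < 0)

def simulate(z0=(1.0, 0.0), T=5.0):
    t0, z, pieces = 0.0, np.array(z0, dtype=float), []
    while t0 < T:
        sol = solve_ivp(flow, (t0, T), z, events=hit_ground, max_step=0.01)
        pieces.append((sol.t, sol.y))
        if sol.status != 1:           # no impact event before T: we are done
            break
        t0, z = sol.t[-1], sol.y[:, -1].copy()
        z[1] = -e * z[1]              # state-dependent impulse: reverse normal velocity
    return pieces
```

In the measure-driven version of the same example, the reversal would instead be carried by fast continuous dynamics on the stretched time scale, as discussed above; the sketch only illustrates where the state-dependent cones of (6.24) permit impulsive action.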
The third coordinate allows us, as before, to “separate” the free µ dynamics from the impulse-only ρ dynamics. We may state the following theorem:

Theorem 6.9. Under the regularity hypotheses of Theorem 6.8 for F, Gρ and Gµ, and with cones of the form (6.24), analogous localized necessary conditions hold in the problem (ISP), with the impulse Euler inclusions and Weierstrass conditions tracking the optimal trajectory on (6.25) or (6.26), as appropriate. The Extended Euler inclusions hold in the autonomous case, as in Corollary 6.5, and the addition of an impulse budget produces the same effect described in Corollary 6.6.

Proof. The proof follows the methods described earlier in this chapter, again making great use of the localization of velocity sets in the pseudo-Lipschitz condition. The only significant item to check is that the state-dependence of the cones is simple enough that our transfer of (PLC) to the stretched-time system is not disrupted.

Following the optimal F-trajectory, y∗(s), and its nearby velocities according to the radius function (6.17), we again obtain separation of the two “pieces”, Fρ and Fµ, of F. As a result, if ẏ∗(s) is selected from Fρ, velocities at any y ∉ J2 that is close to y∗(s) have no effect in our pseudo-Lipschitz inclusion, nor in the Weierstrass inequality (those being the two major appearances of the stratification). The situation where ẏ∗(s) is selected from Fµ is slightly more delicate, but is still covered: any y ∉ J1 that is close to y∗(s) obeys

  F(θ, y) = (1, F(θ, y), 0)  ∀ y ∉ J1 ∪ J2,

but the analysis is no different than in Lemmas 5.5 and 5.6 due to our carefully constructed radius function.

6.3 Final Considerations

The results presented in this chapter add a considerable level of complexity to the fundamental work in Chapter 5. This is a consequence of the wide range of phenomena we seek to model with our measure-driven framework. This convergence and extension of our techniques from previous chapters brings such systems back to their roots in scientific and engineering models after a long theoretical development in the general free-measure case. We anticipate great specific value of these results in application to discrete-continuous systems, many of which lack such a comprehensive framework.

With the discussion of state-dependent impulsive systems, we have returned thematically to the bouncing ball of the Introduction, and concluded our presentation of new results in measure-driven impulsive control. We proceed to summarize these results, and suggest possible extensions, in the final chapter.

Chapter 7

Conclusion and Open Problems

We begin our final chapter with a list of the significant contributions of this dissertation to the field of impulsive and nonlinear control theory.

1. A refinement and extension of a state-of-the-art solution concept for a very general class of measure-driven systems. This includes:

(a) Introducing a type of “state-dependent measure” in measure-driven dynamics for use in both closed-loop feedback and in modeling impulse dynamics that do not act like a control (e.g. impact and friction of free bodies with fixed boundaries, where bouncing dynamics are influenced only by the state at impact, not by the user).

(b) Introducing “measure constraints” in order to adapt existing impulsive models to our framework, thus making progress in addressing the long-standing concerns of applicability in the “free measure” case.
2. Using the “state-dependent measure” to prove a pair of constructive stabilization results in the drift-free, control-affine measure differential equation setting, adapting recent results in Lyapunov theory to the impulsive case. In particular, we establish for the first time the existence of a control Lyapunov function with baseline regularity (semiconcavity) for measure-driven impulsive systems.

3. Stratified necessary conditions in optimal control of impulsive systems in a nonconvex MDI setup, with signed, vector-valued measures and pseudo-Lipschitz state regularity of the impulsive differential inclusion map. This is the first work to bring the tools for the nonconvex case to measure-driven impulsive systems, producing the first stratified result for impulsive systems as well as weaker regularity hypotheses than its various predecessors in the global case.

4. The optimal control section includes an auxiliary result, Theorem 5.7, which extends necessary conditions for optimal control where interval endpoints are a choice variable (“free end-time problems”) to the case of pseudo-Lipschitz regularity and in terms of stratified conditions.

5. Stratified necessary conditions in optimal control of impulsive systems involving our measure constraints. The localizing nature of the pseudo-Lipschitz regularity is used to great advantage in handling these decidedly nonconvex systems. Such a general result was not previously available despite the modeling advantages when measures are restricted. A demonstration is provided where an existing pest control model is analyzed under this new framework.

6. Performing all of the above work via careful analysis of the time reparametrization and direct transformation of all feasible trajectories, without an appeal to perturbation and/or approximation by more regular trajectories (such as the trajectories with absolutely continuous measures) and taking limits of such objects. This latter approach is far more common in the literature, so we see our methods themselves as a significant contribution.

The common threads running through these results should by now be clear: tools from nonsmooth analysis to handle missing convexity, the minimum regularity of pseudo-Lipschitz functions that permits the decoupling of time scales in the carefully defined solution concept, and meaningful extensions of that solution concept to better accommodate modeling.

An exciting feature of the above results and techniques is that they set the stage for further exploration. Future projects could include:

• Proving a closed-loop analogue of Corollary 4.1: existence of a state-feedback for the auxiliary system is equivalent to existence in the given system.

• Extension of our stabilization result to allow a drift term.

• Exploring a further weakening of hypotheses for the necessary conditions of Chapter 5; for example, is it possible to work with the tempered growth condition in place of the essential infimum condition, or to remove the bound above zero for the radius functions?

• Applying our necessary conditions ideas to our stabilizing feedback: is there any objective function (some sort of minimum time problem, for example) for which this feedback is optimal?

• Introducing phase or mixed constraints, especially in applications where the measure is also constrained.

• Determining necessary conditions with measure constraints where the notion of “precedence” is modified or removed.
• Extending the measure-driven necessary conditions to a free-interval impulsive control problem. • Recasting the measure-constraints (both time- and state-based) as a single problem where the cone values for the measure can vary signifi- cantly in time and/or space (i.e., K = K(t, x) for more general K(t, x) than in Chapter 6). • Development of numerical methods to take advantage of the theoretical results in closed-loop stabilization and necessary conditions (including measure constraints) in optimal control. • Application of this technology to impulsive systems in science and engi- neering that have survived on ad-hoc techniques in discrete-continuous systems without having access to a sound, fundamental theory of non- linear optimal control via measure-driven systems. 133 Bibliography [1] N. U. Ahmed, Necessary conditions of optimality for impulsive sys- tems on Banach spaces, Nonlinear Anal. 51 (2002), no. 3, 409–424. MR MR1942754 (2004g:49044) [2] , Measure solutions for evolution equations with discontinuous vector fields, Nonlinear Funct. Anal. Appl. 9 (2004), no. 3, 467–484. MR MR2101193 (2005f:34168) [3] , Optimal relaxed controls for systems governed by impulsive dif- ferential inclusions, Nonlinear Funct. Anal. Appl. 10 (2005), no. 3, 427–460. MR MR2194608 (2006h:34027) [4] , Optimality conditions with state constraints for semilinear systems determined by operator valued measures, Nonlinear Anal. 70 (2009), no. 10, 3522–3537. MR MR2502761 [5] N. C. Apreutesei, An optimal control problem for a prey-predator system with a general functional response, Appl. Math. Lett. 22 (2009), no. 7, 1062–1065. MR MR2523000 [6] A. Arutyunov, V. Jaćimović, and F. Perĕıra, Second order necessary conditions for optimal impulsive control problems, J. Dynam. Control Systems 9 (2003), no. 1, 131–153. MR MR1956448 (2003k:49046) [7] A. Arutyunov, D. Karamzin, and F. Perĕıra, A nondegenerate maxi- mum principle for the impulse control problem with state constraints, SIAM J. Control Optim. 43 (2005), no. 5, 1812–1843 (electronic). MR MR2137503 (2006c:49031) 134 Chapter 7. Bibliography [8] Hunki Baek, Qualitative analysis of a Lotka-Volterra type impulsive predator-prey system with seasonal effects, Honam Math. J. 30 (2008), no. 3, 521–533. MR MR2455237 (2009g:34010) [9] Drumi Băınov and Pavel Simeonov, Impulsive differential equations: periodic solutions and applications, Pitman Monographs and Surveys in Pure and Applied Mathematics, vol. 66, Longman Scientific & Tech- nical, Harlow, 1993. MR MR1266625 (95b:34012) [10] A. Bressan, Jr. and F. Rampazzo, On differential systems with vector- valued impulsive controls, Boll. Un. Mat. Ital. B (7) 2 (1988), no. 3, 641–656. MR MR963323 (89i:93078) [11] , Impulsive control systems with commutative vector fields, J. Optim. Theory Appl. 71 (1991), no. 1, 67–83. MR MR1131450 (92k:49034) [12] , Impulsive control systems without commutativity assumptions, J. Optim. Theory Appl. 81 (1994), no. 3, 435–457. MR MR1281732 (95c:49058) [13] P. Cannarsa and C. Sinestrari, Semiconcave functions, hamilton-jacobi equations, and optimal control, Birkhäuser, Boston, 2004. [14] R. T. N. Cardoso and R. H. C. Takahashi, Solving impulsive control problems by discrete-time dynamic optimization methods, TEMA Tend. Mat. Apl. Comput. 9 (2008), no. 1, 21–30. MR MR2469737 [15] F. H. Clarke, Yu. S. Ledyaev, R. J. Stern, and P. R. Wolenski, Nons- mooth analysis and control theory, Graduate Texts in Mathematics, vol. 178, Springer-Verlag, New York, 1998. 
MR MR1488695 (99a:49001) [16] Francis Clarke, Lyapunov functions and feedback in nonlinear con- trol, Lecture Notes in Control and Inform. Sci. 301 (2004), 267–282, Springer, Berlin. [17] , Necessary conditions in dynamic optimization, Mem. Amer. Math. Soc. 173 (2005), no. 816, x+113. MR MR2117692 (2006i:49027) 135 Chapter 7. Bibliography [18] Frank H. Clarke and Richard B. Vinter, Applications of optimal mul- tiprocesses, SIAM J. Control Optim. 27 (1989), no. 5, 1048–1071. MR MR1009337 (90e:93066) [19] , Optimal multiprocesses, SIAM J. Control Optim. 27 (1989), no. 5, 1072–1091. MR MR1009338 (90j:49011) [20] Earl A. Coddington and Norman Levinson, Theory of ordinary differ- ential equations, McGraw-Hill, New York, 1955. [21] Warren J. Code and Philip D. Loewen, Optimal control of non-convex measure differential inclusions, (2009), submitted. [22] Warren J. Code and Geraldo N. Silva, Stabilization of certain control- affine measure-driven impulsive control systems, International Journal of Mathematics and Statistics 5 (2009), no. A09, Special Issue Gener- alized Differentiation, Variational Analysis and Mathematical Control Theory – a 60th Birthday Tribute to Boris S. Mordukhovich. [23] , Closed loop stability of measure-driven impulsive control sys- tems, Journal of Dynamical and Control Systems (2010), to appear. [24] Gianni Dal Maso and Franco Rampazzo, On systems of ordinary differ- ential equations with measures as controls, Differential Integral Equa- tions 4 (1991), no. 4, 739–765. MR MR1108058 (92g:49037) [25] Paul Georgescu and Gheorghe Moroşanu, Pest regulation by means of impulsive controls, Appl. Math. Comput. 190 (2007), no. 1, 790–803. MR MR2338755 [26] Rafal Goebel, Ricardo G. Sanfelice, and Andrew R. Teel, Hybrid dy- namical systems: robust stability and control for systems that combine continuous-time and discrete-time dynamics, IEEE Control Syst. Mag. 29 (2009), no. 2, 28–93. MR MR2499435 [27] Wassim M. Haddad, VijaySekhar Chellaboina, and Sergey G. Nersesov, Impulsive and hybrid dynamical systems, Princeton Series in Applied 136 Chapter 7. Bibliography Mathematics, Princeton University Press, Princeton, NJ, 2006, Stabil- ity, dissipativity, and control. MR MR2245760 (2007c:93002) [28] B. Jakubczyk and W. Respondek, Geometry of feedback and optimal control, Marcel Dekker, New York, 1998. [29] Guirong Jiang and Qishao Lu, Impulsive state feedback control of a predator-prey model, J. Comput. Appl. Math. 200 (2007), no. 1, 193– 207. MR MR2276825 (2007i:34012) [30] D. Yu. Karamzin, Necessary conditions of the minimum in an impulse optimal control problem, J. Math. Sci. 139 (2006), no. 6, 7087–7150. [31] P. Kokotović and M. Arcak, Constructive nonlinear control: a historical perspective, Automatica 37 (2001), 637–662. [32] J. Kurzweil, On the inversion of liapunov’s second theorem on stability of motion, Translations of AMS 24 (1963), 19–77, originally appeared in Czechoslovak Mathematical Journal, 81 (1956), 217-259. [33] V. Lakshmikantham, D. D. Băınov, and P. S. Simeonov, Theory of impulsive differential equations, Series in Modern Applied Mathematics, vol. 6, World Scientific Publishing Co. Inc., Teaneck, NJ, 1989. MR MR1082551 (91m:34013) [34] D. F. Lawden, Optimal trajectories for space navigation, Butterworth, London, 1963. [35] Bing Liu, Yujuan Zhang, and Lansun Chen, The dynamical behaviors of a Lotka-Volterra predator-prey model concerning integrated pest man- agement, Nonlinear Anal. Real World Appl. 6 (2005), no. 2, 227–243. 
MR MR2111652 (2005k:34026) [36] Philip D. Loewen, Optimal control via nonsmooth analysis, CRM Pro- ceedings & Lecture Notes, vol. 2, American Mathematical Society, Prov- idence, RI, 1993. MR MR1232864 (94h:49003) 137 Chapter 7. Bibliography [37] A. Lyapunov, The general problem of the stability of motion, Internat. J. Control 55 (1992), no. 3, 521–790, Translated by A. T. Fuller from douard Davaux’s French translation (1907) of the 1892 Russian original. [38] Boris M. Miller and Joseph Bentsman, Optimal control problems in hybrid systems with active singularities, Nonlinear Anal. 65 (2006), no. 5, 999–1017. MR MR2232490 (2007c:49025) [39] Boris M. Miller and Evgeny Ya. Rubinovich, Impulsive control in con- tinuous and discrete-continuous systems, Kluwer Academic/Plenum Publishers, New York, 2003. MR MR2024011 (2004m:49006) [40] Boris S. Mordukhovich, Variational analysis and generalized differentia- tion. I, Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], vol. 330, Springer-Verlag, Berlin, 2006, Basic theory. MR MR2191744 (2007b:49003a) [41] , Variational analysis and generalized differentiation. II, Grundlehren der MathematischenWissenschaften [Fundamental Princi- ples of Mathematical Sciences], vol. 331, Springer-Verlag, Berlin, 2006, Applications. MR MR2191745 (2007b:49003b) [42] F. L. Perĕıra, G. N. Silva, and V. Olivĕıra, Invariance for impul- sive control systems, Avtomat. i Telemekh. (2008), no. 5, 57–71. MR MR2437452 (2009j:60129) [43] Fernando L. Perĕıra and Geraldo N. Silva, Necessary conditions of op- timality for vector-valued impulsive control problems, Systems Control Lett. 40 (2000), no. 3, 205–215. MR MR1827552 (2002f:49046) [44] , Stability for impulsive control systems, Dyn. Syst. 17 (2002), no. 4, 421–434, Special issue: Non-smooth dynamical systems, theory and applications. MR MR1975122 (2004j:93089) [45] , Lyapunov stability for impulsive control systems, Differential Equations 40 (2004), no. 8, 1122–1130. 138 Chapter 7. Bibliography [46] Franco Rampazzo, Optimal impulsive controls with a constraint on the total variation, New trends in systems theory (Genoa, 1990), Progr. Systems Control Theory, vol. 7, Birkhäuser Boston, Boston, MA, 1991, pp. 606–613. MR MR1125153 [47] Ludovic Rifford, Existence of Lipschitz and semiconcave control- Lyapunov functions, SIAM J. Control Optim. 39 (2000), no. 4, 1043– 1064 (electronic). MR MR1814266 (2002g:93065) [48] , Semiconcave control-Lyapunov functions and stabilizing feed- backs, SIAM J. Control Optim. 41 (2002), no. 3, 659–681 (electronic). MR MR1939865 (2004b:93128) [49] , Stratified semiconcave control-Lyapunov functions and the sta- bilization problem, Ann. Inst. H. Poincaré Anal. Non Linéaire 22 (2005), no. 3, 343–384. MR MR2136728 (2006a:93086) [50] , On the existence of local smooth repulsive stabilizing feedbacks in dimension three, J. Differential Equations 226 (2006), no. 2, 429– 500. MR MR2237687 (2007g:93064) [51] Raymond W. Rishel, An extended Pontryagin principle for control sys- tems whose control laws contain measures, J. Soc. Indust. Appl. Math. Ser. A Control 3 (1965), 191–205. MR MR0187980 (32 #5425) [52] R. Tyrrell Rockafellar and Roger J.-B. Wets, Variational analysis, Springer, 1998. [53] Ricardo G. Sanfelice, Rafal Goebel, and Andrew R. Teel, Generalized solutions to hybrid dynamical systems, ESAIM Control Optim. Calc. Var. 14 (2008), no. 4, 699–724. MR MR2451791 (2009j:93064) [54] G. N. Silva and R. B. Vinter, Measure driven differential inclusions, J. Math. 
Anal. Appl. 202 (1996), no. 3, 727–746. MR MR1408351 (98a:49035) [55] , Necessary conditions for optimal impulsive control problems, SIAM J. Control Optim. (1997). 139 Chapter 7. Bibliography [56] E. Sontag, A lyapunov-like characterization of asymptotic controllabil- ity, SIAM J. Control Optim. 21 (1983), 462–471. [57] , A ‘universal’ construction of artstein’s theorem on nonlinear stabilization, Systems Control Lett. 13 (1989), 117–123. [58] , Mathematical control systems, 2nd ed., Springer, New York, 1998. [59] Eduardo D. Sontag, A “universal” construction of Artstein’s theorem on nonlinear stabilization, Systems Control Lett. 13 (1989), no. 2, 117– 123. MR MR1014237 (90g:93069) [60] David E. Stewart, Rigid body dynamics and measure differential in- clusions, Foundations of computational mathematics (Rio de Janeiro, 1997), Springer, Berlin, 1997, pp. 405–413. MR MR1661997 (99j:70011) [61] , Rigid-body dynamics with friction and impact, SIAM Rev. 42 (2000), no. 1, 3–39 (electronic). MR MR1738097 (2001c:70017) [62] Hector J. Sussman, On the gap between deterministic and stochastic ordinary differential equations, The Annals of Probability 6 (1978), no. 1, 19–41. [63] Sanyi Tang and Robert A. Cheke, State-dependent impulsive models of integrated pest management (IPM) strategies and their dynamic con- sequences, J. Math. Biol. 50 (2005), no. 3, 257–292. MR MR2135823 (2005k:92075) [64] R. Vinter, Optimal control, Birkhäuser, Boston, 2000. [65] Weiming Wang, Xiaoqin Wang, and Yezhi Lin, Complicated dynam- ics of a predator-prey system with Watt-type functional response and impulsive control strategy, Chaos Solitons Fractals 37 (2008), no. 5, 1427–1441. MR MR2412346 (2009d:34119) [66] J. Warga, Optimal control of differential and functional equations, Aca- demic Press, New York, 1972. MR MR0372708 (51 #8915) 140 Chapter 7. Bibliography [67] Peter R. Wolenski and Stanislav Žabić, A differential solution concept for impulsive systems, Dyn. Contin. Discrete Impuls. Syst. Ser. A Math. Anal. 13B (2006), no. suppl., 199–210. MR MR2268791 (2007i:34017) [68] , A sampling method and approximation results for impulsive systems, SIAM Journal on Control and Optimization 46 (2007), no. 3, 983–998. [69] Guangming Xie and Long Wang, Necessary and sufficient conditions for controllability and observability of switched impulsive control sys- tems, IEEE Trans. Automat. Control 49 (2004), no. 6, 960–966. MR MR2064371 (2005a:93025) [70] Tao Yang, Impulsive control theory, Lecture Notes in Control and Information Sciences, vol. 272, Springer-Verlag, Berlin, 2001. MR MR1850661 (2002g:93001) [71] Tao Yang and Leon O. Chua, Impulsive stabilization for control and synchronization of chaotic systems: theory and application to secure communication, IEEE Trans. Circuits Systems I Fund. Theory Appl. 44 (1997), no. 10, 976–988, Special issue on chaos synchronization, control, and applications. MR MR1488197 [72] Kerim Yunt, Impulsive optimal control of finite-dimensional lagrangian systems, Ph.D. thesis, ETH Zurich, 2008, Doctoral Thesis. [73] Stanislav Žabić, Impulsive systems, Ph.D. thesis, Louisiana State Uni- versity, Baton Rouge, Louisiana, USA, 2005. [74] S. T. Zavalishchin and A. N. Sesekin, Dynamic impulse systems, Math- ematics and its Applications, vol. 394, Kluwer Academic Publishers Group, Dordrecht, 1997, Theory and applications. MR MR1441079 (99h:34018) [75] Rong Zhang, Zhenyuan Xu, Simon X. Yang, and Xueming He, Gen- eralized synchronization via impulsive control, Chaos Solitons Fractals 38 (2008), no. 
1, 97–105. MR MR2417647 (2009e:34156)
