Solution methods for differential systems subject to algebraic inequality constraints Spiteri, Raymond J. 1997

Solution Methods for Differential Systems subject to Algebraic Inequality Constraints

by Raymond J. Spiteri
B.Sc. (Hons.), University of Western Ontario, 1990

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Doctor of Philosophy in The Faculty of Graduate Studies, Department of Mathematics and Institute of Applied Mathematics.

We accept this thesis as conforming to the required standard.

The University of British Columbia
September 1997
© Raymond J. Spiteri, 1997

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Mathematics
The University of British Columbia
Vancouver, Canada

Abstract

The method of programmed constraints has recently been proposed as an executable specification language for robot programming. The mathematical structures behind such problems are viability problems for control systems described by ordinary differential equations subject to user-defined inequality constraints. Although these types of problems are common in applications, practical algorithms and software are generally lacking. This thesis describes a new method for the numerical solution of such viability problems. The algorithm presented is composed of three parts: delay-free discretization, local planning, and local control. Delay-free discretizations are shown to be consistent discretizations of control systems described by ordinary differential equations with discontinuous inputs.
The generalization of delay-free discretizations to higher order in the context of implicit-explicit Runge-Kutta methods represents a potentially powerful new class of time integrators for ordinary differential equations that contain terms requiring different discretizations. The local planning aspect is a computationally inexpensive way to increase the robustness in finding a solution to the viability problems of interest, making it a refinement of a strategy based on viability alone. The local control is based on the minimization of an artificial potential function in the form of a logarithmic barrier. Theoretical examples are given of situations where the choice of such a control can be interpreted to yield heavy solutions. Simulations of two robotic systems are then used to validate the particular strategy investigated. Some complementarity is shown between the programmed-constraint approach to robot programming and optimal control. Moreover, we demonstrate the relative efficiency of our algorithm compared to optimal control in the case of programming a mobile robot: our method is able to find a solution on the order of one hundred times faster than a typical optimal control solver. Some simulations on the control of a simple robot arm are also presented.

Contents

Abstract
Contents
List of Tables
List of Figures
Acknowledgements

1 Introduction
  1.1 Differential Systems subject to Algebraic Inequality Constraints
  1.2 Problem Formulation
  1.3 Applications
    1.3.1 Mobile Robot
    1.3.2 Simple Robot Arm
    1.3.3 Moving Meshes
  1.4 Overview

2 Background
  2.1 Approaches to Robot Programming
  2.2 Constraint Programming in Robotics
  2.3 Constraint Programming as a Viability Problem
  2.4 Numerical Methods

3 A DAI Solver
  3.1 Overview
  3.2 Time Discretization
  3.3 Local Planning
  3.4 An Algorithm for Local Control
    3.4.1 Buffer Zone and Sensitivity Parameter Maintenance
  3.5 Summary

4 Theoretical Considerations
  4.1 Convexity of Barrier
  4.2 Simple Examples
  4.3 Interpretation of Local Control

5 Delay-Free Discretizations
  5.1 Motivation
  5.2 Implicit-Explicit Runge-Kutta Methods
  5.3 Construction of IMEX RK Methods
    5.3.1 Composite Euler (1,1,1)
    5.3.2 Forward-Backward Euler (1,2,1)
    5.3.3 Implicit-Explicit Midpoint (1,2,2)
    5.3.4 A third-order combination (2,3,3)
    5.3.5 L-stable, two-stage, second-order DIRK (2,3,2)
    5.3.6 L-stable, two-stage, second-order DIRK (2,2,2)
    5.3.7 L-stable, three-stage, third-order DIRK (3,4,3)
    5.3.8 A four-stage, third-order combination (4,4,3)
  5.4 Advantages of IMEX RK Methods

6 Numerical Experiments
  6.1 Mobile Robot
    6.1.1 Simulation
    6.1.2 Very General Programs
    6.1.3 More Refined Motions
  6.2 Interaction with Approaches from Optimal Control
  6.3 Robot Arm

7 Conclusions
  7.1 Summary
  7.2 Directions for Future Research

Bibliography

Appendices
  Appendix A Minimization Algorithm

List of Tables

3.1 Hierarchy of Approximate Safety
6.1 Simulation results: Case 6.1
6.2 Simulation results: Case 6.2
6.3 Simulation results: Case 6.3
6.4 Simulation results: Spotlight Problem
6.5 Optimal-control solutions using initial guesses obtained from shooting and DAI methods
6.6 Simulation results: Robot Arm

List of Figures

1.1 Dynamites
1.2 Schematic of the mobile robot
1.3 Schematic of the robot arm
2.1 Traditional robot programming solution method
2.2 A candidate for a potential function
2.3 The eternal question: To B or not to B
3.1 Schematic summary of prediction strategy
3.2 A sketch of φ_i in the case of one constraint
4.1 Barrier function for two-dimensional circular spotlight
4.2 Pseudo-Active-Set Behaviour of a Nonholonomic System
4.3 The barrier trajectory
5.1 Time-step stability restrictions for various values of α and β, for IMEX schemes satisfying (5.7)
5.2 Time-step stability restrictions for various values of α and β, for IMEX schemes satisfying (5.6)
6.1 Robot position and wave position with no damping
6.2 Robot position and wave position with damping
6.3 Undamped motion of robot (Case 6.2)
6.4 Damped motion of robot (Case 6.2)
6.5 Damped motion of robot (Case 6.3)
6.6 Undamped motion of robot (Case 6.3)
6.7 Solution exhibiting heavy properties
6.8 Solution not exhibiting heavy properties
6.9 Initial Guess for One-Obstacle Problem
6.10 Solution for One-Obstacle Problem with TOL = 10^-3
6.11 Solution for One-Obstacle Problem with TOL = 10^-6
6.12 DAI Solution for One-Obstacle Problem
6.13 Worst-case scenario of mobile robot approaching hidden obstacle
6.14 DAI Solution for Problem with Hidden Obstacle
6.15 Location of Multiple Obstacles
6.16 Initial Guess for Multi-Obstacle Problem
6.17 DAI and Optimal Trajectories for Multi-Obstacle Problem
6.18 Motion of Robot Arm: τ = 0.005
6.19 Motion of Robot Arm: τ = 0.5

Acknowledgements

Although a thesis is a work attributed to one person, mine is the end result of the supportive efforts of many. I express my thanks to my Doktorvater, Dr. Uri Ascher, for his generosity, patience, and insight, and Dr. Dinesh Pai for initiating the line of research that was presented in this thesis. I thank Dr. Philip Loewen, Dr. Jim Varah, and Dr. Brian Wetton for their encouragement, guidance, and support. I also thank Dr. Stephen Campbell, Dr. Richard Froese, and Dr. Wayne Nagata for their contributions to the final form of this thesis. For the many fruitful discussions I have had with them over the years, and most of all for their friendship, I thank Gilles Cazelais, Jorg Hilsebecher, Sammy Mercieca, Peter Newbury, Volker Schulz, and Harold Ship.

My roots spring from beneath a small rock called Gozo. It was there that an old teacher, Ganni Grech, took an interest in a boy newly returned from abroad and showed him that he could become a mathematician. I thank my family, who prayed for this work to come about: my parents Kelinu and Mananni, my grandparents, and all my aunts, uncles, and cousins. I thank Gorg Grech, who to this day remains an inspiration to me. I thank Dr. Paddy Nerenberg, who inspired me to a life of research, and without whom I would have found myself lost. I thank Pejxa and Keicha, who give much love and ask for little in return. And Lisa Kalynchuk I thank more than anyone, for her faith in me, for her love, and for the inspiration she gives me every day.

RAYMOND J. SPITERI
The University of British Columbia
September 1997

Chapter 1

Introduction

1.1 Differential Systems subject to Algebraic Inequality Constraints

The viability problem of a control system subject to a nonempty set of (user-defined) inequality constraints is referred to in this thesis as a Differential system with Algebraic Inequalities (DAI). This acronym is motivated by the more popular class of problems known as Differential-Algebraic Equations (DAEs). Much is known about the DAE case, where the algebraic inequalities are replaced by algebraic equations [24, 3, 2, 4, 5, 6].
The contribution of numerical analysts in this case has been the development of new discretization schemes, as well as their implementation in robust, general-purpose software for both initial-value [50, 93] and boundary-value [8] DAEs. In contrast, much less attention has been given to the DAI case; some numerical work has been done in the context of algebraic inequality invariants [46, 103], time-dependent contact and friction modelling [84], and methods for differential inclusions (see [39] for a survey).

A natural application for DAIs arises from the programmed-constraint approach to robot programming [125, 86, 87, 61]. Such methods are intermediate-level languages, residing between low-level, trajectory-based approaches and high-level motion planning. Their goal is to offer a declarative, yet computationally tractable framework for the solution of such problems. The focus of this thesis will be on developing methods to produce their numerical solution. Our strategy follows the principle that the control for a system should be chosen so that the boundaries of the viability region are avoided whenever possible. We put forth such a selection as a refinement to choosing a control based on viability alone.

1.2 Problem Formulation

We now give some mathematical structure to the DAI problem. Before taking into account any state constraints, the problem being considered is that of a control system

    q' = f(t, q(t)) + B(t, q(t)) u(t),    (1.1a)
    u(t) ∈ U,                             (1.1b)

where the state q and the control u belong to finite-dimensional vector spaces X and Z of dimension n and n_cntrl, respectively. We restrict our interest to trajectories on finite time intervals of the form [0, t_f]. In general, when the right-hand side of the inclusion (1.1b) is a function of the state q, U is a set-valued map from X to Z representing an a priori feedback that describes any state-dependent constraints on the controls.
However, for the applications we are concerned with in this thesis, U is assumed to be a constant set (or a subset thereof), consisting of relations representing, for example, physical limitations (saturation) of electromechanical actuators:

    |u_j| ≤ u_max,j,    j = 1, 2, ..., n_cntrl.

Now let K(t) be a closed subset of X. We say that K(t) defines the viability set for the control system (1.1a)-(1.1b). This set is similar to the feasibility set in mathematical programming [41]. The main difference is that feasibility sets are not usually concerned with the flow of an ordinary differential equation (ODE). It is through the definition and manipulation of the viability set K(t) that the robot program is specified [87]. We note that this formulation represents a unified task-description and control approach to robot programming. For consistency of the viability problem, we assume that the system starts from a viable point q_0 ∈ K(0). We assume that K(t) is of the form

    K(t) = { q ∈ R^n | c_i(t, q) ≥ 0, i = 1, 2, ..., n_cnstr }.    (1.1c)

The problem (1.1) is a viability problem. Viability theory (see, for example, [13, 14, 32]) attempts to give conditions relating K(t), U, and the dynamics in (1.1a), under which a trajectory of (1.1a), (1.1b) satisfies (1.1c) for all t ≥ 0. A trajectory of (1.1) is an absolutely continuous function q(·) such that (1.1a)-(1.1b) are satisfied almost everywhere [32]. A classical solution of (1.1a) exists if the right-hand side is continuous with respect to q and Borel measurable with respect to the control argument u. However, when a feedback law u = u(q) is considered, the concept of solution must be generalized. A classical solution exists in this case if u is a continuous function of q; however, continuous feedback laws often do not satisfy problem specifications [25, 108].
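As a small aside not taken from the thesis, a membership test for a viability set of the form (1.1c) is immediate once the constraint functions c_i are in hand; the two constraints below are illustrative stand-ins, not the thesis's examples:

```python
import numpy as np

def in_viability_set(t, q, constraints):
    """K(t) membership per (1.1c): q is viable at time t iff every
    constraint function satisfies c_i(t, q) >= 0."""
    return all(c(t, q) >= 0.0 for c in constraints)

# Illustrative constraints: stay inside the unit disk, and stay to the
# right of a vertical line that drifts left as t increases.
constraints = [
    lambda t, q: 1.0 - q[0]**2 - q[1]**2,   # unit disk
    lambda t, q: q[0] + 0.1 * t,            # moving half-plane
]
print(in_viability_set(0.0, np.array([0.5, 0.0]), constraints))   # → True
print(in_viability_set(0.0, np.array([-0.2, 0.0]), constraints))  # → False
```

Downstream machinery (planning, control selection) ultimately needs only this kind of sign information about the c_i along candidate trajectories.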
The question of how to define a solution to a differential equation with discontinuous right-hand side has been addressed often in the literature (see, for example, [33] and the references therein). The concept that we adopt here is consistent with discrete sampling and is therefore the most natural in terms of its fidelity to application. Consider a partition of [0, t_f]:

    { 0 = t_0 < t_1 < ... < t_N = t_f }.

In many applications, the partition is uniform, with spacing Δt = t_i − t_{i−1} representing the sampling interval. The trajectory associated with the feedback u(q) defined on each subinterval is the solution with constant control value u_i = u(q(t_{i−1})). This interpretation of a solution is common in differential-game theory, where it is called a step-by-step strategy by Subbotin [111] or a K-strategy by Isaacs [60]. The forward Euler discretization of such a strategy can lead to another interpretation of a solution to (1.1a) given a feedback control u(q); however, this distinction is not important for the purposes of this thesis. Thus, we define a solution to (1.1) as a piecewise-constant control u(t) such that q(t) ∈ K(t) for all t ∈ [0, t_f]. The aim of the algorithm is to construct a solution (when one exists) to the viability problem (1.1) in an efficient and robust manner. In Chapter 4, we examine the properties of our strategy in the continuous-time limit Δt → 0+. Solutions of (1.1) are generally nonunique; that is, there is usually a bundle of trajectories q(t) that satisfy (1.1). We denote this set of all possible trajectories by Q_K, explicitly noting its dependence on the viability set.

1.3 Applications

In this section, we describe three applications that can be formulated as DAIs. The first two applications deal with the control of a mobile robot and a simple robot arm, respectively.
These are considered to be standard problems in robot control and can be viewed as very natural settings for DAI approaches. The last application deals with a reformulation of the well-known moving-mesh problem (see, for example, [56, 92, 57] and the references therein). This is perhaps a less obvious application for the DAI formulation. However, it serves to illustrate the potential for DAIs to provide novel ways to view otherwise unrelated problems.

1.3.1 Mobile Robot

We have performed numerical experiments based on a model of the Dynamite multiple mobile robot testbed [15]. This is a fleet of radio-controlled model cars that share an off-board vision system. Commands are transmitted to the cars at a rate of 50 Hz by means of a remote computer. The computer vision system is able to track the position and orientation of the robots at a rate of 60 Hz with an accuracy in position of a few millimetres. A more detailed description of the Dynamite system may be found in [15, 78]. In Figure 1.1, we show a picture of two Dynamites in their natural habitat.

Figure 1.1: Dynamites.

Let the robot be located with its centroid at (x, y), oriented at an angle θ measured clockwise from the positive x-axis, and be moving with speed v. The controls are the normalized settings for the acceleration a and the steering ψ, i.e. ψ is the angle that the wheels make with respect to the car's body. We write the equations of motion of the robot as a set of ordinary differential equations (ODEs)

    x' = v cos θ,           (1.2a)
    y' = v sin θ,           (1.2b)
    θ' = β v tan(ψ) / L,    (1.2c)
    v' = κ a − γ v,         (1.2d)

where L is the length of the car and the quantities β, γ, κ are parameters representing steering gain, damping factor, and acceleration gain, respectively. A more precise treatment of the dynamics would warrant terms to describe, for example, the differential equation for ψ and the rotational dynamics of the car. For the purposes of numerical simulation, the model may be simplified further by means of a few theoretical idealizations.
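To make the car model concrete, here is a small sketch (not from the thesis) of (1.2) driven by the step-by-step strategy of Section 1.2: each control pair is held constant over a sampling interval, with a forward Euler step inside. The parameter values, including the car length L, are illustrative guesses, not the Dynamite testbed's:

```python
import numpy as np

def car_rhs(q, a, psi, beta=1.0, gamma=0.2, kappa=1.0, L=0.25):
    """Right-hand side of the car model (1.2); q = (x, y, theta, v).
    Parameter values are placeholders for illustration only."""
    x, y, theta, v = q
    return np.array([v * np.cos(theta),               # x'
                     v * np.sin(theta),               # y'
                     beta * v * np.tan(psi) / L,      # theta'
                     kappa * a - gamma * v])          # v'

def drive(q0, controls, dt):
    """Step-by-step (zero-order-hold) strategy: each (a, psi) pair is
    held constant over one sampling interval of length dt."""
    q = np.asarray(q0, dtype=float)
    for a, psi in controls:
        q = q + dt * car_rhs(q, a, psi)
    return q

# Accelerate from rest with the wheels straight; motion stays on the x-axis
q = drive([0.0, 0.0, 0.0, 0.0], [(1.0, 0.0)] * 50, dt=0.02)
```

With ψ = 0 the heading never changes, so the damped speed dynamics (1.2d) simply saturate toward κ/γ while x grows.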
These will be discussed further in Chapter 6, where results from numerical simulations are presented.

The viability set K(t) is described by constraints that keep the trajectory of the robot on the testbed, which is a table with opposite corners at (x_min, y_min) and (x_max, y_max) (see Figure 1.2). These are expressed as

    c_table = ( x − x_min, x_max − x, y − y_min, y_max − y )^T.    (1.3)

A speed limit of v_max is also imposed on the car's motion to reduce the effects of slippage. Thus, its motion may be more reliably modelled and controlled. This is formulated as

    c_speed = ( v_max − v, v_max + v )^T.    (1.4)

Figure 1.2: Schematic of the mobile robot (not to scale).

So far, we have not given the robot a task. All we have specified is that the trajectory should remain on the driving surface and obey a speed limit. The robot's task is programmed by means of a driving constraint. We will consider two forms for the driving constraint. One form is that of a plane wave that traverses the table surface at speed v_w and inclination φ ∈ (0, π/2) with respect to the x-axis. In this way, the robot can be guided to different locations on the table. In terms of the programmed-constraint framework, this is expressed as

    c_wave = ( y cos φ − (x − v_w t) sin φ ).    (1.5)

We also make use of a driving constraint that takes the form of a spotlight. The spotlight is completely determined by the location of its centre (x_c(t), y_c(t)) and its radius r(t). We take the form of this to be

    c_spot = ( r²(t) − (x(t) − x_c(t))² − (y(t) − y_c(t))² ).    (1.6)

The programmer is free (within reasonable limits) to then parameterize these quantities according to the task requirements.
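Stacking constraints of the kind in (1.3)-(1.5) gives a direct test for membership in K(t). The sketch below is illustrative; the testbed geometry, speed limit, and wave parameters are placeholder values, not those of the actual Dynamite table:

```python
import numpy as np

# Placeholder testbed geometry and limits (illustrative only)
XMIN, XMAX, YMIN, YMAX, VMAX = 0.0, 2.4, 0.0, 1.2, 1.0

def c_table(q):
    # (1.3): four one-sided bounds keeping (x, y) on the table
    x, y = q[0], q[1]
    return np.array([x - XMIN, XMAX - x, y - YMIN, YMAX - y])

def c_speed(q):
    # (1.4): |v| <= VMAX written as two one-sided constraints
    return np.array([VMAX - q[3], VMAX + q[3]])

def c_wave(t, q, vw=0.1, phi=np.pi / 4):
    # (1.5): remain on the forward side of a travelling plane wave
    return np.array([q[1] * np.cos(phi) - (q[0] - vw * t) * np.sin(phi)])

def viable(t, q):
    """q = (x, y, theta, v) is in K(t) iff all stacked constraints hold."""
    c = np.concatenate([c_table(q), c_speed(q), c_wave(t, q)])
    return bool(np.all(c >= 0.0))

print(viable(0.0, np.array([0.5, 1.0, 0.0, 0.5])))   # → True
print(viable(0.0, np.array([3.0, 1.0, 0.0, 0.5])))   # → False (off the table)
```

Incremental programming, in this picture, is nothing more than appending further component functions to the stacked constraint vector.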
Similarly, circular obstacles of radius r_obs located at (x_obs, y_obs) can be described by

    c_obs = ( (x(t) − x_obs)² + (y(t) − y_obs)² − r_obs² ).    (1.7)

This process of simply adding more constraints to refine the behaviour of the robot is known as incremental programming, and it is a useful advantage of the programmed-constraint approach to robot programming. Further details and illustrations are given in Chapter 2.

1.3.2 Simple Robot Arm

As another example, we have investigated the control of a two-link robot arm. This is modelled as a planar double pendulum, consisting of two rigid rods with masses m_1, m_2 and lengths l_1, l_2. One end of the first rod (the shoulder) is fixed at the origin, whereas the other end is connected to the second rod (at the elbow). Rotations are allowed in the xy-plane. The free end of the second rod (the hand) can be required to follow a prescribed path, such as for tracking objects on an assembly line. Let the geometry of the system be described as shown in Figure 1.3.

Figure 1.3: Schematic of the robot arm.

We make use of the following variables for notational convenience: κ_i := cos θ_i, s_i := sin θ_i, κ_ij := cos(θ_i + θ_j), and s_ij := sin(θ_i + θ_j), for i, j = 1, 2. The coordinates of the elbow are thus (x_1, y_1) := (l_1 κ_1, l_1 s_1), and those of the hand are (x_2, y_2) := (l_1 κ_1 + l_2 κ_12, l_1 s_1 + l_2 s_12).

We now denote θ_i' by ω_i for i = 1, 2 and let q = (θ_1, θ_2, ω_1, ω_2)^T. The equations governing the motion of the two-link robot arm can be written in the form

    M(q) q' = T(q),    (1.8a)

where

    T = ( ω_1, ω_2, T_1, T_2 )^T,    (1.8b)

with

    T_1 = −(1/2) m_1 g l_1 κ_1 − m_2 g ( l_1 κ_1 + (1/2) l_2 κ_12 ) + (1/2) m_2 l_1 l_2 s_2 ( 2 ω_1 ω_2 + ω_2² ),
    T_2 = −(1/2) m_2 g l_2 κ_12 − (1/2) m_2 l_1 l_2 s_2 ω_1²,    (1.8c)

and

    M = [ I  0  ]
        [ 0  M̄ ],    (1.8d)

where all blocks of M are of size 2 × 2 and

    M̄ = [ (1/3) m_1 l_1² + m_2 ( l_1² + (1/3) l_2² + l_1 l_2 κ_2 )    m_2 ( (1/3) l_2² + (1/2) l_1 l_2 κ_2 ) ]
        [ m_2 ( (1/3) l_2² + (1/2) l_1 l_2 κ_2 )                      (1/3) m_2 l_2²                       ].    (1.8e)

The control variables u = (u_1, u_2)^T are introduced in the following way. We retain the notation of (1.8a) and the definitions in (1.8b), (1.8d) and (1.8e), while making the following modification to the definition of T in (1.8c):

    ( T_1 )  ←  ( T_1 )  +  [ 1  −1 ] ( u_1 )
    ( T_2 )     ( T_2 )     [ 0   1 ] ( u_2 ).    (1.9)

The following inequality constraint is used to represent a possible path, in the form of a parabola, which the hand would be required to follow:

    τ − | y_2 − x_2² + β | ≥ 0,    (1.10)

where β is chosen so that the initial conditions (6.10) are satisfied. Here τ represents a tolerance on the amount of vertical deviation from the parabolic path permitted in the computed solution. We note that the limit τ = 0 represents a holonomic constraint.

1.3.3 Moving Meshes

Suppose a system of time-dependent partial differential equations (PDEs) describes an abrupt phenomenon, such as a shock wave, that is moving in time. A natural idea then is to use a method of lines, where the PDEs are discretized in space to produce a set of coupled ODEs in time for the solution at discrete mesh points. Now, suppose further, for reasons of accuracy and efficiency, that the spatial mesh points are allowed to move as functions of time, chasing the abrupt phenomenon. This gives a moving-mesh method. Many such schemes have been proposed by various researchers [20, 40, 56, 57, 79, 92, 119, 123] with the idea of moving the mesh so as to reliably portray solution fronts. However, it has been reported that in some cases such methods can be little better than the simpler and less expensive fixed-mesh schemes [26]. The mesh equations themselves are nonlinear and hence add to the difficulty of solving even linear problems. The vast majority of the approaches reported involve to some extent the enforcement of an equidistribution principle. That is, the mesh points are chosen so that the value of a monitor function, such as the arc length of the solution, is distributed more or less equally between the mesh points. Thus, the problem being solved is a DAE.
The principal motivation for a DAI reformulation of the moving-mesh problem stems from the observation that often, in practice, meshes that are only 'close' to optimal perform as well as 'optimal' ones, and at a fraction of the computational expense. Treating the mesh-point velocities as the controls, it is possible to implement inequality constraints representing allowable deviations from equidistribution. In this way, the emphasis on adherence to strict equidistribution is lessened. From a philosophical standpoint, it seems appropriate to give more importance to the discretization of the differential operator than to the artificial constraint of equidistribution. This is in fact the opposite of what is naturally done with a DAE formulation, and it could explain the sometimes less-than-satisfying results using equidistributing moving meshes reported in the literature.

1.4 Overview

Numerical procedures for the efficient solution of the general DAI problem (1.1) are lacking. Consequently, much of the potential of the method of programmed constraints has yet to be fully tapped as a suitable framework for robot programming. The central contribution of this thesis is the development and investigation of an efficient numerical method for the DAI problem (1.1) based on a local control procedure. In this way, we exploit an advantage in computational efficiency over traditional global approaches. The applications studied also help to illustrate the effectiveness of programmed constraints. We propose a new family of discretization schemes that have great potential for applicability beyond the systems of ODEs typical in (1.1).

Accordingly, Chapter 2 presents some background material in order to establish important theoretical concepts, as well as to provide some context for the problem and relevant solution methods for it.
This includes a brief description of DAIs as constraint-programming techniques, their use as applied to robotics, and their role with respect to more traditional approaches. Relevant concepts from viability theory are also presented.

Chapter 3 describes in detail a method for the numerical solution of a general DAI. The algorithm consists of three main parts: the appropriate (delay-free) discretization of the differential part of the problem (1.1a), a strategy for local planning (or anticipation), and a selection mechanism to provide a local control. The description of the last part includes the construction and maintenance of an artificial barrier potential function, which provides a means for the system to be 'steered' away from the heightened risk of infeasibility near the boundary and towards the relative safety of the interior of the viability set.

Chapter 4 presents theoretical results for the proposed algorithm. The chapter begins with proofs of some convexity properties of the artificial potential function. Convexity is a very useful property because it aids the efficiency and reliability of the minimization procedure. Different examples are presented which illustrate the effect of the proposed control procedure. This is followed by a characterization of the control obtained from the artificial barrier potential.

Chapter 5 offers a detailed look at delay-free discretizations. Their generalization to implicit-explicit Runge-Kutta methods represents a new and potentially powerful technique for the numerical solution of ODEs that have terms of different nature, such as those obtained from semi-discretization of convection-diffusion equations. Details of all such schemes created thus far are also given, and their stability regions for a representative test equation are illustrated. This chapter forms part of the paper [10].

Chapter 6 contains the results from the use of the algorithm in solving various DAIs.
The first part of the chapter deals wi th the problem of programming a mobile robot. Simulat ions show our solution method to be computat ional ly efficient on a number of challenging problems. The complementari ty of this approach to op t ima l control is discussed. It is shown how a D A I solution can be a good start ing point for a solut ion by means of opt imal control , and how it can be used as a pract ica l method for implement ing an already available opt imal control solut ion. A n i l lustrat ion of the benefits of a local versus global approach is given. F ina l ly , some simulations on another robot ic system, that of a simple robot a rm, are presented. Overa l l , the results make several noteworthy contr ibutions to field of D A I s and their numerical solut ion. Chapter 7 discusses our conclusions and possible future directions for this research. 14 C h a p t e r 2 B a c k g r o u n d The teleoperators of today are the autonomous robots of tomorrow. - Murray, Li , and Sastry [81] Over the past few decades, the fields of robotics and robot programming have been burgeoning areas of research. Many researchers, in both academia and industry, have studied the problem of point-to-point robot motion in a variety of forms, leading to a huge literature on the subject. Consequently, it is appropriate to outline the place of the programmed-constraint methodology relative to mainstream approaches for such robot programming (or robot navigation) problems. This pro-vides an overview of the problem that we wish to solve and some basis for our particular numerical approach. In this thesis, we adopt a very generic view of what constitutes a robot pro-gram or task. A robot task is considered to be a description of a goal which'the robot is to accomplish. This definition purposely leaves a lot of room for interpretation. A program definition can be as simple and general as "Move to one end of the room." However, an effectively endless amount of complexity can be added. 
For example, 15 an equally val id robot program is "Move to one end of the room, picking up object 0 along the way, stopping exactly at point X, and optimizing over time and energy for the entire trip." Clearly, different methods or conjunctions of different methods may be more appropriate than others for a given task.. 2.1 Approaches to Robot Programming Tradi t ional ly , the typical robot programming problem involves point-to-point mo-t ion in the configuration space of the robot. For example, the robot a rm of Sec-t ion 1.3.2 might be required to move f rom its present configuration (^ 1,^ 2) to the vert ical posit ion (f-,0). The solution of such a problem is usually the product of three layers of t reatment, as i l lustrated in Figure 2.1. CRobot P rog ramm Prob lem * ~ Pa th P lann ing (geometric) I Trajectory P lann ing (add time) r Cont ro l Scheme (track trajectory) F igure 2.1: Tradi t ional robot progamming solution method F i r s t , the path-planning problem is solved: Given only geometric data such as 16 in i t ia l and desired f inal configurations, robot size, and obstacle shapes and locat ions, construct a collision-free curve Unking in i t ia l and f inal states of the robot. Th is beginning phase of the solution process is well researched. In [102], a solution based on cell decomposition is given to the purely geometric problem of f inding a coll ision-free path (given perfect information) in an environment cluttered wi th polynomia l ly described obstacles. A near-opt imal solution is developed in [27], based on a similar idea, called a roadmap. Having solved the path-planning problem, the t radi t ional robot programmer now progresses to the trajectory-planning stage: Given a collision-free pa th , f ind a t ime parameter izat ion of i t , now accounting for constraints imposed by the robot 's dynamics. U p to this point in the solution process, the dynamics of the robot have been completely ignored. 
Typical ly , conventional path planners are holonomic and impl ic i t ly assume that arbi t rary mot ion is possible in the free parts of configurat ion space. However, if a system contains nonholonomic constra ints 1 , many of the so-lut ions produced by these path planners cannot be directly applied. If they could, the famil iar problem of paral lel parking would be t r iv ia l . The mobile robot system (1.2) is an example of a nonholonomic system [81]. Nonholonomic systems are char-acterized by the fact that they are local ly uncontrollable (ie. the l inearized system is uncontrol lable) yet globally controllable [122]. Thus , depending on what level of constraints are taken into account, it is possible for the solution process to be stuck wi th a t ra jectory that cannot be followed by the system. Tha t is, the paths generated may violate the often subtle nonholonomic constraints. It is also possible to construct a finite set of stable control strategies over 1 A n equality constraint is called nonholonomic if it is satisfied by the generalized velocity of a (mechanical) system but it cannot be expressed as an equivalent condition on the generalized coordinates. 17 local t ime horizons and chain them together to produce a desired global mot ion. Such an approach would yield a simple nonholonomic mot ion planner that could also handle hidden or moving obstacles. For example, in control of aircraft, a set of stable maneuvers might include bank 10 degrees right or increase altitude by 5000 metres, etc. In such a manner, a tree Unking al l such combinations of maneuvers can be generated. The main job of the planner would then be to perform a search of the tree to f ind a suitable trajectory. Because the complexity of the tree generally grows exponential ly in size, a mechanism for pruning the less-fruitful or even dead-end branches must also be in place. The search is often performed using techniques common in the field of art i f ical intelligence [31]. 
However, we note that such an approach may be neither computationally efficient nor easy to program: Even after pruning, there may still be a large number of possibilities to be examined, especially, for example, if moving obstacles in the environment force the time horizon to be very short.

The time-parameterized curve is called a reference trajectory. There has been a great deal written about the construction of reference trajectories, usually under the names of motion planning or nonholonomic motion planning. See [115, 38, 76, 81] for various examples and [69] for a survey.

Surviving the first two stages allows the programmer to graduate to the final stage of the solution process, solving the tracking problem: Determine a control that will allow the physical plant to follow the reference trajectory as closely as possible. This is most commonly referred to as solving the inverse dynamics problem. Thirty- and forty-year-old technologies in the form of PD and PID controllers [89, 121] and heuristics such as the Ziegler-Nichols rules [89] are often pressed into action to realize this final stage of the solution process. Happily, such methods are persistently robust and of great practical use, despite their age.

Also worthy of mention at this point are artificial potential field methods. These consist of constructing a real-valued map φ on the robot's free configuration space (which can be associated with the viability set K(t)) with the following properties:

1. there exists a minimum of φ at the goal configuration,

2. φ is uniformly maximal on the boundary of K(t), denoted ∂K(t).

An example of an acceptable potential function is depicted in Figure 2.2. Once a suitable φ is constructed, it determines a feedback law in which −∇φ (subject perhaps to some added dissipation) is used as the input to the robot's actuators.

[Figure 2.2: A candidate for a potential function.]
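The feedback law just described can be illustrated with a small numerical sketch (our own toy, not taken from this thesis): a point robot following −∇φ for a potential that combines a quadratic attractor at a goal with a logarithmic barrier around one circular obstacle. The goal, obstacle, barrier weight, and step size are all illustrative assumptions.

```python
import math

# Potential-field feedback sketch: the robot descends
# phi(p) = 0.5*||p - GOAL||^2 - K_REP*log(d(p) - R),
# where d(p) is the distance to the obstacle centre.
GOAL = (2.0, 0.0)
OBST, R = (1.0, 0.6), 0.3        # obstacle centre and radius (illustrative)
K_REP = 0.05                     # small barrier weight (illustrative)

def grad_phi(x, y):
    gx, gy = x - GOAL[0], y - GOAL[1]          # gradient of the attractor
    dx, dy = x - OBST[0], y - OBST[1]
    d = math.hypot(dx, dy)
    if d > R:                                  # gradient of -K_REP*log(d - R)
        s = -K_REP / ((d - R) * d)
        gx += s * dx
        gy += s * dy
    return gx, gy

def descend(x, y, h=0.01, steps=5000):
    for _ in range(steps):
        gx, gy = grad_phi(x, y)
        x, y = x - h * gx, y - h * gy          # forward Euler on p' = -grad(phi)
    return x, y

xf, yf = descend(0.0, 0.0)
# the robot settles near the goal; the barrier shifts the equilibrium slightly
```

Note that the equilibrium is not exactly at the goal: the barrier's residual gradient displaces it, a first hint of the interaction problems discussed next.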
The concept of potential function was pioneered by Khatib in the 1980's [64] in the context of obstacle avoidance, and it was further developed by Hogan [53] and Newman and Hogan [82] in the context of force control. There are also other independent developers of this methodology from outside of North America [80, 90].

We have seen how the typical robot programming problem is split into three phases that are solved separately. The first two are planning layers and are usually associated with high-level, off-line control. In contrast, the final phase, which does the actual controlling, is a low-level, on-line process. Potential field methods offer a unified approach to task description and control, because a suitable φ represents path planning, trajectory planning, and a feedback control all at once. For example, the classic PD controller can be viewed as the gradient of a Hooke's law type of potential [66]. Moreover, such computations can all be carried out in real time. Thus, it is not surprising that these methods have a successful history in practice [64, 53, 82].

The Achilles' heel of potential field methods, however, is the presence of spurious minima in the potential function. These become especially apparent in cluttered environments. Unfortunately, the construction of complex potential functions through the combination of simpler ones does not necessarily lead to easily predictable (or desirable) behaviour. This leads to disappointing results, especially in the transient behaviour of systems subject to these potentials [114, 22, 64].

In response to this, Rimon and Koditschek [68, 96] developed the concept of navigation function in the early 1990's. They augment the required properties of potential functions to include functions that

1. have a unique minimum at the goal configuration,

2. are at least twice differentiable,
3. have a nonsingular Hessian at all its critical points.

The navigation function automatically yields a bounded-torque feedback controller, guaranteeing collision-free motion and convergence to the destination for almost all initial configurations [96]. In principle, such functions can be shown to exist for any robot-obstacle combination. However, except in certain specialized situations [96], constructive techniques are still lacking. Moreover, certain assumptions in the theory, such as stationary obstacles, fixed destination, perfect information, ideal sensors, and ideal bounded-torque actuators, must ultimately be relaxed in a practical implementation.

2.2 Constraint Programming in Robotics

Although the traditional method is most commonly used for solving the robot navigation problem, it is not necessarily the most natural characterization. There can be a degree of arbitrariness involved in the programming process, often leading to program inflexibility, the severity of which is accentuated in cluttered and dynamic environments with changing task requirements. For example, suppose a robot is programmed to move from point A to point C, via point B. See Figure 2.3. It is generally not easy to modify this program to instruct the robot to proceed directly to point C before completing its path to B.

[Figure 2.3: The eternal question: To B or not to B.]

On the other hand, motion-planning approaches provide a very high level of programming. Of particular note is the optimal-control solution, where the robot motion optimizes a specified functional, such as the time taken or the energy expended during the motion. However, these approaches often incur great computational expense and are therefore of limited practical use in some situations, particularly where not all information is available at the outset. This is of definite concern for systems that are autonomous and reactive because much of the 'programming' must be done during the execution.
For example, control of a space-shuttle arm [101] is perhaps more suitable for effective programming using optimal control because the environment is amenable to allowing off-line computation, and the program will not be subject to unexpected changes². This is in contrast to a robot working in a hospital setting, where the long-term motion of patients, staff, carts, etc., cannot be known.

The method of programmed constraints has recently been advocated as an executable specification language for robot programming. It can be seen as an intermediate-level language, residing between low-level, trajectory-based approaches and high-level motion planning. The Least Constraint approach [87], which incorporates a weaker specification of control actions, was proposed by Pai and first appeared in the late 1980's. In this methodology, large time-varying sets of nonzero measure (which we identify as viability sets) are the means by which desired goals are specified, in direct contrast to explicit trajectory specification. The underlying philosophy is that all locations within the viability set are equally acceptable, and it is often unnecessary and unnatural to specify more for a task to be completed.

The viability sets are described by the (nonempty) intersection of inequality constraints that are satisfied at run time to produce the control. A solution to such a program necessarily produces behaviour that satisfies all the constraints. For example, consider the problem of a point robot x(t) with dynamics ẍ = u. In a potential field approach, the control is expressed as u = −∇φ − c(ẋ), where φ is an artificial potential field to be specified, and c(ẋ) is a dissipation term. Suppose that the robot is constrained to the region 0 < x < 1. A suitable (barrier) potential function to do this is φ1(x) = −log x − log(1 − x).

²As the amount of space junk increases, however, this assertion may have to be modified.
Now suppose that the desired destination point is x_f. A suitable potential function to produce this result is φ2 = (x − x_f)²/2. Simply adding φ1 and φ2 to make a new potential function φ clearly does not yield the desired result x(t) → x_f if x_f is not near 1/2! However, subjecting the dynamics to the intersection of constraints of the form x(t) > w0(t), x(t) < w1(t) will always produce the desired effect, provided that the motion is possible for the choice of w0(t), w1(t) satisfying w0(0) = 0, w1(0) = 1, and w0(T) = x_f − ε, w1(T) = x_f + ε for suitable T and ε.

This makes the Least Constraint approach very similar to constraint programming languages (CPLs) [72]. The distinguishing feature of CPLs is their declarative nature, as opposed to the algorithmic nature of imperative computer languages like Pascal or FORTRAN. The constraint programmer simply states the relations between objects, and it is up to the constraint-satisfaction system to find a solution. In this way, the algorithm that is presented in Chapter 3 can also be viewed as a particular case of such a system.

A CPL allows the programmer to focus on the pure programming aspect of the problem, and not on the algorithmic details of how the different subtasks are executed. This makes it more tractable, for example, to teach mobile robots how to play soccer (see [78] for a discussion of other issues involved in this). The programmer can concentrate on more abstract goals such as playing defense, while being much less preoccupied with specific details of robot motion. It is also a powerful technique for nontraditional control problems such as minimally invasive surgery performed by teleoperated robots [61, 81]. Besides also being a unified approach to programming, the advantage of CPLs that is particularly attractive to robot programming is the ease with which complex programs can be built up and modified.
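The spurious-minimum phenomenon in the φ1 + φ2 example above is easy to confirm numerically. The brute-force check below is our own computation, with x_f = 0.9 as an illustrative destination; it locates the minimizer of φ1 + φ2 on (0, 1).

```python
import math

# phi1(x) = -log x - log(1 - x): barrier keeping 0 < x < 1
# phi2(x) = 0.5*(x - xf)^2:      attractor at the destination xf
def phi(x, xf):
    return -math.log(x) - math.log(1.0 - x) + 0.5 * (x - xf) ** 2

def argmin_on_grid(xf, n=100000):
    # fine grid search on (0, 1); adequate for a 1-D illustration
    best_x = best_v = None
    for i in range(1, n):
        x = i / n
        v = phi(x, xf)
        if best_v is None or v < best_v:
            best_x, best_v = x, v
    return best_x

x_star = argmin_on_grid(0.9)
# x_star lands near 0.54, nowhere near the destination 0.9
```

The sum is strictly convex here, so this is the unique minimum: the barrier, not the attractor, dictates where the robot comes to rest.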
For example, consider the problem of controlling a walking machine [87, 86]. For this problem, programmers would prefer an environment in which assertions about the behaviour are specified incrementally because a significant part of the solution process is in fact discovering the task requirements themselves. For example, an initial attempt at programming a robot to walk might include specifying a region in which the leading foot is supposed to land in order to maintain stability, and requiring that this foot be clear of the ground during its motion. It might then be discovered that one leg crosses behind the other and the robot trips. The programmer would like to modify the existing program by simply adding new assertions. This is not possible in current robot programming languages.

Finally, the use of constraints for robot programming also offers some advantages that can be complementary to the more traditional approaches. For example, the simplicity and intuitive geometric appeal of explicit path specification can be retained by means of a fictitious moving circle (or 'spotlight', as in (1.6)) whose center follows a predetermined path. The radius of the spotlight is a measure of the tolerance within which the path should be followed. There are several advantages to this approach. First, it is easier for the programmer to specify a geometric entity such as the path of the spotlight and allow the plant itself to (implicitly) solve the inverse dynamics problem of staying within it. Moreover, it clearly makes more sense to program in terms of tolerances from an implementational viewpoint. Second, flexibility in avoiding obstacles or changing tasks is not sacrificed because the spotlight path can be changed at any time without adversely affecting the solution procedure³. In other words, new information from a dynamic environment can be processed accordingly.
Third, this method also dispenses with the problem of being forced to choose a functional to optimize so as to apply an optimal-control approach. Often in robotics, as in everyday life, there is no clear choice of performance measure for a given task. For example, when placing an object on a table, there is (usually) no need to select a location that is, say, maximally away from the edges. Any location that is 'sufficiently' far from the edges is equally acceptable. Thus, the solution to the place-object-on-table task is nonunique. Optimal-control solvers generally have a great deal of difficulty dealing with nonuniqueness of solutions, and an ill-posed performance measure can be a major cause. Other practical difficulties include determination of an initial trajectory and control profile which is 'close enough' to the exact solution and from which a nonlinear equation solver can converge. This also leads to the question of detecting convergence to undesired minima.

There are processes for which optimal control is a natural (and even easily computable) control strategy. For example, it is often desirable to use time-optimal control on manufacturing systems, especially those involving repetitive tasks. Also, the benefits of the additional effort exerted in optimizing the control are obvious. In this case, we offer the DAI framework for programming in more of a complementary sense, rather than strictly as an alternative to optimal control.

In Chapter 3, we describe an algorithm that is designed to solve the DAIs that arise from the programmed-constraint approach to robot programming. The central idea is based on the Least Constraint framework [87].

³Of course, unexpected events such as obstacles which stray into the spotlight area are handled automatically, to within the limitations of the system.
The solution method combines the integration of the equations of motion with the timely updating of a default control, where an attempt is made to keep the solution trajectory away from all constraint boundaries. This is achieved in the following way: A local planning mechanism is used to delineate buffer zones around the boundary of the viability set. The choice of control is then based on the minimization of an artificial potential function in which the constraint boundaries act as barriers. The artificial potential function is a non-negative, continuous, and differentiable function whose value tends to +∞ as the system approaches the boundary of the viability set, and with its influence limited to a given extent within the interior. These are all properties shared with the potential functions originally proposed by Khatib [64]. However, there is a difference in the form of the artificial potential function near the boundary of the viability set: We make use of barriers with logarithmic singularities, whereas inverse-square barriers are used in [64]. Some reasons for our choice are given in Chapter 3.

2.3 Constraint Programming as a Viability Problem

In Chapter 1, the mathematical structure of the DAI problem is given as a viability problem (1.1). Viability theory, or controlled invariance [32], is a useful mathematical framework for the description of evolving macrosystems arising in areas such as biology, economics, and the cognitive sciences, as well as differential games and control theory (see, for example, [13, 14, 111, 32] and the references therein). We concentrate in particular on the main results from viability theory as applied to control systems [13, 14, 32]. Viability theory has recently emerged as a popular application for control systems, although it has not always enjoyed this role (see, for example, [12] and the references therein). We begin with a very brief description of viability theory.
Viability theory deals mainly with three central concepts [13]:

• A nondeterministic evolution that provides many opportunities to explore the environment. This corresponds in part to the assumption of a viability region with nonempty interior.

• Viability constraints that must be respected by the system at all times for a solution to be meaningful. This is also in fact the essence behind the definition of a DAI problem!

• An inertia principle, which states that the controls of the system are changed only when viability is at stake. We assume the existence of an a priori control strategy, u_nat(t), which is to be used by the system when it is 'safe' within the interior of the viability set. Despite the notation, this control can be open or closed loop. Examples of each are given in Chapter 3. In this thesis, an inertia principle would more generally refer to an adherence to the control u_nat(t).

We now provide a few more details for each of these items:

• By nondeterminism, it is implied that, at each instant, there exist many available viable evolutions, which may depend on the current state, past evolution, etc. In this sense, the dynamics are not considered to be deterministic. Such a lack of determinism may also stem from disturbances or perturbations in the system, including those due to modelling errors, measurement errors, or incomplete information.

• There are many applications in which the evolution of a dynamical system is not meaningful unless certain constraints, called viability constraints, are obeyed. This is a common feature in DAEs [24, 3, 2, 50, 4], most notably multibody systems [120, 51] where, for example, distances between joints must be constant, and in symplectic integrators [49, 107], which preserve differential 2-forms of a flow.
For example, if a robot-programming task involves the motion of a robot in an arena, trajectories that would take the robot outside of the confines of the arena have no physical significance or practical utility. The goal is therefore to construct a viable solution, that is, one which satisfies the constraints at all times.

Viability theorems yield selection procedures for viable evolutions [13, 14]. That is, they characterize the relations between dynamics and constraints in order to guarantee the existence of a viable solution starting from any initial state. These theorems can also yield feedbacks that can be used to maintain viability, or even improve the state according to some performance measure (or preference relation). However, this theory is not always of a constructive nature and usually does not offer a strategy for when the system is still well inside K(t): the analysis focuses on what to do when the system reaches the boundary, ∂K(t). This is of particular concern for nonholonomic systems, which have fewer local degrees of freedom than global ones and hence may not be able to act on such a strategy. The algorithm presented in Chapter 3 is offered as a method to construct a solution to a viability problem arising from the programmed-constraint approach to robot programming.

Similar to the setting of a DAI program, viability theory does not require optimization of an intertemporal objective function. This, of course, is in contrast to optimal-control theory. The choice of control is not predetermined at some initial time. Rather, the theory allows for the possibility to adapt to a changing environment as defined by the viability constraints. Also in contrast with the assumptions of optimal-control theory but similar to the DAI philosophy, viability theory does not require any knowledge of the future⁴. We argue that some 'planning' is very useful in solving DAIs.
However, it is implemented as a limited extrapolation into the future, based only on present (and possibly local past) information. So our view of the future is myopic at best, and may also be viewed as anticipation.

It is natural to argue that selection through viability alone may be too general. This brings up the aspects of complexity and robustness in choosing a viable trajectory. For example, it is possible that the state of a system becomes more robust as it moves into the interior of the viability set. In this situation, only those trajectories that move away from the boundary remain viable after a given length of time. For a practical, efficient DAI solver, both these considerations must be weighed in the choice of a selection procedure. Clearly, a balance must be struck between the computational effort spent on determining a control and the improvement of the overall 'safety' of the system.

We have been advocating what amounts to a generalized inertia principle, which states that the controls are kept on a certain strategy, u_nat(t), as long as the viability of the system is not at stake. However, it is clear that despite this, the state of the system may evolve so that it reaches the viability boundary with a nonzero outward velocity. Such an event corresponds to a period of crisis: The system must now update the schedule given by u_nat(t) with a new control that will steer it toward the interior of the viability region. Of course, another way to resolve such a situation is to relax the constraints. Then, the system will lie in the viability region defined by the new constraints. For example, if a table top is described by a set of constraints that leaves a border inside the edges, then these constraints can be relaxed to the true physical edges.

⁴Although trajectories are certainly implicitly constrained by the choices of the past (as we are).
When neither course of action is possible, by definition, the trajectory leaves the viability set and terminates.

Many possibilities present themselves when choosing a control at a crisis point [13, 14, 32]. For example, traditional strategies from viability theory include the choice of a control that provides a viable solution with minimal velocity. These are called heavy viable evolutions, or slow solutions [13, 14]. When the controls are velocities, heavy solutions have minimal acceleration or, in other words, maximal inertia. Alternatively, the algorithm presented in Chapter 3 chooses a control based on the minimization of an artificial potential function when viability is deemed to be at stake. We will see in Chapter 4 that this choice corresponds to a type of heavy solution in certain situations.

The main result from viability theory of relevance to problem (1.1) is the following: Denote the right-hand side of the appropriate differential inclusion associated with (1.1) by F(q). Now assume that F(q) is a Peano map⁵. If a (constant) closed set K is a so-called viability domain, then for any initial state q0 ∈ K, there exists a viable solution to (1.1) on [0, T]. A subset K ⊂ Dom(F) is a viability domain if and only if

    for all q ∈ K,  F(q) ∩ T_K(q) ≠ ∅,

where T_K(q) is the contingent cone [13, 12, 32] to K at q. In less technical terms and in the context of (1.1), this means that for every state q, there exists a feasible control u that can provide an inward velocity, i.e., a velocity directed towards the interior of K, whenever the system reaches the boundary. Thus, if such a selection is always possible, the system will remain viable. Every DAI solver is ultimately concerned with constructing a feasible control u that makes K a viability domain.

⁵A Peano map is a nontrivial, upper semi-continuous, set-valued map with nonempty, compact, convex values and linear growth.
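The inward-velocity characterization above can be made concrete with a toy check (our own construction, not from this thesis): for the single-integrator system q' = u with admissible controls |u| ≤ 1 and K the closed unit disk described by c(q) = 1 − ||q||² ≥ 0, we verify at sampled boundary points that some admissible control makes d/dt c(q) = ∇c(q)·u positive.

```python
import math

# c(q) = 1 - q1^2 - q2^2 describes the closed unit disk K = {c >= 0}
def grad_c(q):
    return (-2.0 * q[0], -2.0 * q[1])

def has_inward_control(q, n_samples=64):
    gx, gy = grad_c(q)
    for k in range(n_samples):
        th = 2.0 * math.pi * k / n_samples
        ux, uy = math.cos(th), math.sin(th)   # sample extreme controls |u| = 1
        if gx * ux + gy * uy > 0.0:           # d/dt c(q) = grad(c) . u > 0: inward
            return True
    return False

# spot-check several boundary points of the unit disk
ok = all(has_inward_control((math.cos(t), math.sin(t))) for t in range(7))
```

Here the control u = −q always works, so the disk is a viability domain for this system; a DAI solver must in effect perform such a selection at every boundary encounter.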
Viability theory is mainly concerned with the properties of the set K, and the results are often not of a constructive nature⁶. Because we are dealing with a time-varying set K(t), the results must be applied to the graph of K(t),

    Graph(K) = {(t, q) | q ∈ K(t)}.

Some theoretical results for this case are presented in [70]. These ideas can also be related to the concept of a stable bridge from differential game theory. In this case, the two opposing players are endowed with controls over a dynamical system. One player would like the system to be in some target set at a fixed later time. It is possible to build a universal feedback to guarantee this based on an extremal strategy in the presence of a known stable bridge [111]. This is very interesting as a way to get back into the viable set and merits future investigation.

2.4 Numerical Methods

Numerical treatment of ordinary differential equations with inequalities naturally divides into two types: those that deal with inequality invariants and those that deal with unilateral (one-sided) constraints. Consider a general ODE and some inequalities:

    ẏ = f(t, y(t)),        (2.1a)
    0 ≤ c(t, y(t)).        (2.1b)

We say that (2.1b) is an inequality invariant if the satisfaction of (2.1b) at some (initial) time guarantees that it is automatically satisfied by the exact solution y(t) of (2.1a) for all later times. Hence, it represents a set of conclusions about y(t). For example, physical properties such as chemical concentration are known to remain non-negative. Difficulties in practice may arise because the numerically computed solution will not generally satisfy (2.1b). On the other hand, a unilateral constraint represents conditions imposed on y(t), such as described in the examples of Chapter 1.

⁶With proximal aiming [32] being a notable exception.
Hence, a numerical method for solving this problem must in fact alter the trajectory, perhaps through the addition of reactive forces or Lagrange multipliers, in order that (2.1b) be satisfied.

Some work has been done for the case of inequality invariants, for example [46, 103]. It has been shown [103] that all the standard one-step numerical integrators (of which explicit Runge-Kutta methods are prototypical) maintain linear invariants having constant coefficients. This result does not extend to the class of standard implicit methods [103]. In both [46, 103], a minimum l2-norm projection is utilized to project the discrete solution back onto the manifold defined by the active constraints after each integration step. A constraint of the form c_i(t, y(t)) ≥ 0 is said to be active at time t* if c_i(t*, y(t*)) = 0. We note also that nonzero Lagrange multipliers at t* can only be associated with active constraints. In essence, the procedure in [46, 103] amounts to a selection of a Lagrange multiplier (similar in spirit to a heavy viable solution) in order to maintain the invariants.

A similar technique is used in [85] to satisfy unilateral constraints arising in rigid-body contact problems with Coulomb friction. There, the Lagrange multipliers are selected as the solution to a certain quadratic programming problem.

Thus far, all the solution methods described resemble active-set strategies. This results in the integration of a DAE of varying dimension, thereby making the problem much more difficult and computationally expensive, as different constraints drift in and out of the active set. Moreover, such techniques are not even always relevant in robotics problems, because physical limitations (often in the form of nonholonomic constraints) may not allow the system to 'slide' along the active constraint boundaries. In this case, the system has fewer local degrees of freedom than global ones.
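The essential step of the projection technique of [46, 103], reduced to a single linear constraint, can be sketched as follows (a toy of our own, not their algorithm): when a step leaves the discrete solution violating a·y ≥ 0, the minimum l2-norm correction moves y back along the constraint normal.

```python
# Post-step projection for one linear constraint a . y >= 0:
# if violated, subtract the smallest (in l2 norm) multiple of a that
# restores a . y = 0. This is the active-constraint case; an inactive
# constraint leaves the step untouched.

def project_active(y, a):
    r = sum(ai * yi for ai, yi in zip(a, y))
    if r >= 0.0:
        return list(y)                      # constraint inactive: no change
    s = r / sum(ai * ai for ai in a)        # minimum-norm correction along a
    return [yi - s * ai for yi, ai in zip(y, a)]

y = project_active([-0.3, 1.0], [1.0, 0.0])   # the invariant y[0] >= 0 failed
# y becomes [0.0, 1.0]: the smallest correction restoring the constraint
```

The scalar s plays the role of the selected Lagrange multiplier; with several active constraints the analogous correction requires solving a small least-squares or quadratic programming problem, as in [85].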
For example, a car approaching a wall (constraint) at an acute angle can come to rest against it, but it cannot slide parallel to it. Clearly, there is an essential need for some planning in an algorithm to solve such a problem.

We are particularly interested in the numerical solution of the control system (1.1a)-(1.1b), which is required to evolve in a viability region described by (1.1c). Such a control system can always be written as an initial-value problem for an ordinary differential inclusion [14, 32]

    ẏ ∈ F(t, y(t)),        (2.2)

for almost all t ∈ [t0, T] and with y(t0) = y0.

In general, solutions of (2.2) form a set of functions Y, representing a bundle of trajectories. Because of this nonuniqueness of solutions, many of the usual concepts of numerical analysis must be modified when applied in this context.

There are two approaches to approximately 'solve' (2.2). The first is more in tune with traditional numerical analysis: Use a finite-difference scheme, together with a suitable selection procedure, to produce a sequence of discrete values {y_i^N} for the solution on a suitable grid {t_0^N < t_1^N < ... < t_N^N = T}. In this case, the inclusion reduces to an ODE with a discontinuous right-hand side. Because we can place a mesh point at every control discontinuity, Runge-Kutta integration methods are favoured over linear multistep methods.

The second approach is rather intriguing, and its investigation is a possible candidate for future work. It consists of approximating the entire solution set Y of (2.2), and not simply one specific trajectory. One special case of note for this approach is the approximation of the reachable sets. This could provide useful insights into the programming process, especially in the design of the viability set. See [39] and the references within for more discussion of this approach.

For the general differential inclusion, it is not worthwhile to look for higher-order standard methods.
For example, consider the following ODE with discontinuous right-hand side describing forced vibrations with viscous and Coulomb damping [39, 112, 49, 120]:

    ÿ + a0 ẏ + sgn(ẏ) + y = f(t),

where f(t) is smooth. For this problem, it is shown in [39] that higher-order schemes exhibit the same difficulties as lower-order schemes in regions where the classical solution breaks down. However, the additional effort is worthwhile on subintervals where the right-hand side is single-valued and smooth. Fortunately, this is the case for the control system (1.1), because we are able to dictate when the control is potentially discontinuous and place a mesh point at the appropriate spot.

Selection strategies are used to improve the qualitative properties of the approximate solution when there is a bundle of solutions. For example, a solution can be chosen [39] with

• minimum norm, where f_i ∈ F is the selection on interval i, or

• minimal variation of the discrete solution between mesh points.

We note that the first example is a case of the heavy solution mentioned previously. The algorithm proposed in the next chapter can also be viewed as a nonstandard selection strategy with a corresponding nonstandard objective function. We will see that the selection is made so as to preserve the relative distance of the system to the constraint boundaries defined in the graph of K(t) from one time step to the next. It is a hybrid selection strategy, switching between an a priori schedule u_nat(t) and a control based on the minimization of an artificial barrier potential. The switching is controlled by a local prediction strategy. Formal higher-order convergence of the method would be a suitable avenue for further research from a purely theoretical view.

Chapter 3

A DAI Solver

We now describe in more detail the method used to produce a solution to the robot-programming problem defined by (1.1). An earlier version has appeared in [109]. Most illustrations will be given as applied to the Dynamite system (1.2).
Following an overview, we proceed with a detailed specification of the various components of the algorithm. A summary of this is given in Section 3.5.

3.1 Overview

We recall that the objective of a DAI solver is to facilitate the numerical integration of the viability problem (1.1). At each step, a local selection is made for the control u(t), based only on 'current' information (or estimates thereof) about the system and the viability set K(t). The default control u_nat is maintained as long as the viability of the solution is not in jeopardy. This strategy corresponds to a generalized inertia principle [13]. When the system is deemed too close to ∂K by a local prediction (or anticipation) strategy, a (new) control u*(t) is invoked. The anticipation process involves not only monitoring the constraints and their various rates of change, but also whether a sufficiently small deviation of the control from u_nat will allow the system to avoid each constraint boundary individually, though perhaps not simultaneously. This is the concept that we call weak approximate safety.

Desirable features of a solution method should include the following:

• Small computational expense in determining the control. This is required in two important situations. First, if the problem is being solved as a starter for an optimal-control solver, we would prefer that the DAI computation not be a significant additional expense. Small computational expense is also important in an implementation, because the commands to be issued by the controller must be determined in real time.

• Robustness. A suitable strategy should be capable of providing solutions in difficult or varying geometries, as well as exhibiting a degree of insensitivity to minor random errors in measurement, modelling, and finding the precise value of the locally optimal control.
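The overview above can be condensed into a schematic loop. Every ingredient below (the dynamics, the danger test, and both controls) is an illustrative stand-in of our own, not the thesis's planner or controller:

```python
# Hybrid strategy skeleton: keep the default control u_nat until a local
# prediction flags danger, then switch to a corrective control u_star.

def solve_dai(y0, f, u_nat, u_star, in_danger, h, n):
    y, t, trace = y0, 0.0, []
    for _ in range(n):
        u = u_star(t, y) if in_danger(t, y) else u_nat(t, y)
        y = y + h * f(t, y, u)            # forward Euler step, for clarity
        t += h
        trace.append(y)
    return trace

# toy instance: y' = u with the constraint y <= 0.8; u_nat drifts upward,
# and u_star pushes back once y enters a buffer zone near the boundary
trace = solve_dai(0.0, lambda t, y, u: u,
                  u_nat=lambda t, y: 1.0,
                  u_star=lambda t, y: -1.0,
                  in_danger=lambda t, y: y > 0.6,
                  h=0.01, n=1000)
# the trajectory never leaves the viability region y <= 0.8
```

The buffer threshold (0.6 here) is what the local planning component of the actual algorithm computes adaptively; a fixed threshold suffices for this one-constraint toy.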
Although we wish to avoid a global motion-planning problem, a completely local approach will be severely limited in the scope of problems that can be solved. In many situations, viable solutions are easily found with rudimentary planning schemes, but they are impossible to obtain without these schemes. Accordingly, we desire a prediction strategy that enlarges the K-reachable set over that which would be obtained without planning. The K-reachable or K-attainable set at time t ∈ [0, t_f] is defined as

R_K(q₀, t) = { q(t) ∈ ℝⁿ | q(·) ∈ Q_K on [0, t] with q(0) = q₀ }.

It is the usual reachable set, but with the added restriction of viability; hence its dependence on K.

Thus, as part of a planning procedure, an anticipatory or buffer zone near the boundary of the feasibility region must be created and maintained in order to give any algorithm the opportunity to choose a control (or sequence of controls, perhaps at consecutive time steps) to keep the system inside the viability region. The algorithm that we present attempts to solve the viability problem by avoiding the boundaries of the viability region whenever possible.

Although not strictly necessary, we assume that the user will have prescribed a default control profile u_nat, which should yield some kind of preferred or natural behaviour for the system. This is an opportunity for the user to exercise some influence on the solution produced. Let the controls in (1.2) be redefined to be

a = v̇ + γv,   α = (1/L₁) tan β.

Then, for example, u_nat = (a_nat, α_nat) = 0 for the mobile robot system corresponds to slowing down but continuing in the same direction when viability is not at stake. This often leads to the effect of the car being bumped along by the driving constraint. However, the choice a_nat = γv simulates the absence of damping in the system (effectively, γ = 0).
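The effect of the default control on the drive dynamics can be illustrated with a small simulation. This is a sketch under an assumption: the first-order model v̇ = a − γv used here is my reading of the redefined controls above, not the thesis's full Dynamite model (1.2), and all names are mine.

```python
# Illustrative sketch only: assume the redefined drive control enters as
#   v' = a - gamma * v,
# so that u_nat = (a_nat, alpha_nat) = 0 coasts with damping, while
# a_nat = gamma * v cancels the damping term (effectively gamma = 0).
def simulate_speed(a_of_v, v0, gamma, dt, steps):
    """Forward Euler on v' = a(v) - gamma*v with sample-and-hold control."""
    v = v0
    for _ in range(steps):
        v += dt * (a_of_v(v) - gamma * v)
    return v

gamma, v0 = 0.5, 2.0
v_coast = simulate_speed(lambda v: 0.0, v0, gamma, 0.01, 1000)        # u_nat = 0
v_undamped = simulate_speed(lambda v: gamma * v, v0, gamma, 0.01, 1000)
```

With u_nat = 0 the speed decays toward zero (the car is then "bumped along" by the driving constraint), while a_nat = γv holds the speed exactly constant.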
As a last resort, the user always has the option of simply choosing u_nat = 0 and allowing the local minimization process to determine a control from scratch. However, this may not lead to an intuitive or computationally efficient solution in all cases. This is a very application-dependent aspect. It is sometimes more instructive to view the solution process as a control strategy in itself, whereas at other times, it can be viewed as simply a correction (improvement or super-routine) to an 'almost' acceptable u_nat. The advantage to the local nature of the approach is that it can to some extent be immune to surprise events, including measurement or model errors and incomplete information.

We note that despite its name, the natural control u_nat is not exactly 'natural' in the sense of Koditschek [67]. What Koditschek refers to as natural motion is actually unforced motion of a damped mechanical system. Although a common point in the two descriptions would be the choice u_nat(t) = (a_nat, α_nat) = 0 in the mobile robot system (1.2), we take 'natural' to mean an a priori, often open-loop control provided by the programmer as the desired behaviour of the system when viability is not at stake.

The values for the controls are computed based on the approximate minimization of an artificial C¹ logarithmic barrier potential, where the singularities are aligned with ∂K(t). In this way, future states that continue to approach the boundary are penalized. The artificial potential function chosen attempts to maintain the same relative distance of the system to the boundaries between time steps.

The algorithm can be divided into three basic components: time discretization, local planning, and local control. Only a brief treatment of the time discretization is given in the following section; a more extensive treatment is deferred until Chapter 5.
However, a detailed treatment of local planning and local control is given in the following sections. A summary of the entire algorithm appears in Section 3.5.

3.2 Time Discretization

A suitable discretization scheme is chosen based on the differential problem (1.1a). Consider a typical one-step method to advance the system from (t_{n−1}, q_{n−1}) with control u_{n−1} to (tₙ, qₙ), where tₙ = t_{n−1} + Δt. Recalling that u_{n−1} is constant on the interval (t_{n−1}, tₙ), we have

qₙ = Φ(t_{n−1}, q_{n−1}; u_{n−1}; Δt). (3.1)

We note that an r-step multistep method would also involve the states q_{n−2}, q_{n−3}, …, q_{n−r} in (3.1). However, multistep methods are of little use in this context because they attempt to continuously extrapolate from previous states to the next state, whereas u is generally discontinuous between subintervals. This situation is mitigated by one-step methods provided that a mesh point can be placed at each jump in the value of the control, i.e., provided u is constant on each integration subinterval. Fortunately, this is the case for the DAI (1.1).

In solving the minimization problem (3.5), we view (3.1) as a function of the control. In such cases, we will employ the notation

qₙ = qₙ(u).

This turns out to be a useful way of characterizing future states. Choices for particular discretizations of (3.1), especially ones that are delay free, are discussed in Chapter 5.

3.3 Local Planning

The second component of the algorithm is local planning. The underlying motivation for local planning is based on the idea that adding a planning layer to the programmed-constraint approach would allow more problems to be successfully solved. Although this is intuitively appealing, it also has a basis in concepts from viability theory [13]. It is possible that the state of a system becomes more robust as it moves into the interior of the viability set.
So, only those trajectories which choose to move away from the boundary remain viable after a given length of time. In this way, movement away from the boundary corresponds to increasing the size of the K-reachable set. A similar view can be adopted of taking steps to change the evolution of the system before the boundary is reached. Typical results from viability theory or differential games do not offer a strategy for control until the boundary of the viability set is reached [13, 111, 32]. By then, depending on the form of the control system (1.1), the intersection of the reachable set and the viability set may be empty. This is especially true in the case of nonholonomic constraints, under which arbitrary motions are not possible locally. Due to the physical limitations of the system, it is important to monitor the relative movement of the boundaries and make a decision as to whether they can be easily avoided. If not, then the system should act immediately towards improving upon this.

We are using the concept of weak approximate safety to determine when a control step should be executed. We form a local approximation to the current behaviour of the constraints, and we perform a test on whether each constraint can be avoided individually by a control that is sufficiently close to u_nat. The size of this deviation is governed by a sensitivity factor U_frac, which represents the size of a relative deviation that is deemed to be 'small'. The quantity U_frac is defined to be a diagonal matrix with nonzero entries u_frac,j, j = 1, 2, …, n_cntrl.

When the condition of weak approximate safety holds, the system is deemed safe and the system can proceed with u_nat. Otherwise, a new control step is computed, based on the minimization of an artificial potential function. Having discussed the need for planning, we now discuss its implementation through the use of buffer zones.
Let the quantities c_safe,i > 0, i = 1, 2, …, n_cnstr, denote the inner edge of the buffer zone. The buffer zone itself at time t = t_{n−1} is then the set of all states q such that q ∈ R_K(q₀, t_{n−1}) and 0 < c(t_{n−1}, q) < c_safe. We generalize the concept of an active set to refer to the set of those constraints whose buffer zone is currently being penetrated by the system.

Our strategy is based on treating the constraints c as independent quantities and not inferring much about their dependence on q, t, or indirectly on u. That is, we treat the components of c simply as functions that we would like to remain positive over time. Hence, the quantities ċ and c̈ at a given time are useful in predicting how c may evolve locally. These quantities may be computed analytically or numerically by difference quotients. For example, for c₁ from (1.3) of the mobile robot system (1.2), we have

ċ₁ = ẋ = v cos θ (3.2a)

and

c̈₁ = v̇ cos θ − v sin θ θ̇ = a cos θ − v² sin θ α. (3.2b)

Wherever they occur, the controls are treated as constant over each time step, and so for the purposes of prediction, we take ȧ = α̇ = 0. This is consistent with the common practice of sample and hold.

The theoretical motivation behind our prediction strategy is summarized in Figure 3.1. Let the system be in a viable configuration q_{n−1} at time t = t_{n−1}. A strategy based on viability alone or no knowledge of the future would be to simply maintain the schedule u_nat(t) until the system reaches the boundary (situation (a)). Clearly, there will be limitations to the robustness of such a strategy. Our prediction strategy involves using time-derivative information of the constraints. If ċ ≥ 0 at (t_{n−1}, q_{n−1}), we continue normal integration with u(t) = u_nat(t), because the system is not moving locally towards the boundary of the feasibility region (situation (b)).
The system will be viable using u_nat(t) under a linear model for the evolution of the constraint equations. If for some i, ċᵢ < 0, we compute c̈ and form the discriminant Δ with components defined by

Δᵢ = 2cᵢc̈ᵢ − ċᵢ², i = 1, 2, …, n_cnstr. (3.3)

Figure 3.1: Schematic summary of prediction strategy (situations (a), (b), (c)).

If Δᵢ > 0, then the parabolas given by

cᵢ + ċᵢ(t − t_{n−1}) + ½c̈ᵢ(t − t_{n−1})², t ≥ t_{n−1},

have no real roots, and hence locally we deem that constraint violation will not occur, based on a second-order model for the evolution of the constraint equations (situation (c)). If Δᵢ < 0, this indicates that an impact with the boundary of cᵢ is likely to occur, provided ċᵢ, c̈ᵢ remain relatively constant and u(t) = u_nat(t). Of course, this is not necessarily a good long-time predictor, because ċᵢ, c̈ᵢ are not generally constant. However, these are reasonable local-time predictors and can be useful over the entire interval of integration provided they are updated with sufficient frequency.

However, there is a shortcoming because, in most common multibody and robotics systems, constraints occur in opposing pairs (so-called box constraints are typical examples). In the case of a shrinking feasibility region (which is usual in nontrivial problems), it is easily seen that the conditions ċ ≥ 0 or Δ > 0 can never be satisfied. We would therefore like a test of more practical utility, the motivation for which is described now, using the mobile robot example.

Example 3.1 Suppose the mobile robot is moving toward the edge of the table described by x = x_max (in the absence of friction), with θ = 0 and at some (small) velocity ε cm s⁻¹. We temporarily assume that there are no driving constraints. Then ċ₂ = −ε < 0 and c̈₂ = 0, so that Δ₂ = −ε² < 0. Accordingly, the simple test would predict that a control step should be taken. However, if a_max > 0, then in theory no planning is ever required (in the absence of discretization errors).
That is, it will be possible to stop the car before it goes over the edge in the very last step. Now suppose a plane wave is introduced with v_w > 0 and φ = π/2. This test would initiate a control at every step. So, we have a conflict: on the one hand, no planning is very inexpensive (the computational cost may be only a little more than that of a nonstiff integrator), but it suffers from not being robust in finding solutions; on the other hand, solving the minimization problem to be described in (3.5) at every step is too expensive computationally. △

We desire a method to act as a compromise by detecting situations when u_nat(t) needs no modification. We do this by establishing relative degrees of 'safety' for the system inside the viability set K(t). The method is based on a second-order approximation to the constraints and is intended as a means to establish how the system may respond to a control that 'slightly' differs from u_nat(t) by an amount Δu. The classifications are summarized in Table 3.1.

Strong Approximate Safety: there exists a vertex Δu such that either (b) or (c) in Figure 3.1 holds for all cᵢ simultaneously.
Weak Approximate Safety: there exists a vertex Δu such that either (b) or (c) in Figure 3.1 holds for all cᵢ individually.

Table 3.1: Hierarchy of Approximate Safety

We now discuss some implementational details, with illustrations from Example 3.1. We recompute ċ₂, c̈₂ with a control having components u_nat ± Δu, where Δu is a vector with components Δuⱼ = u_frac,j u_max,j and u_frac,j ∈ (0, 1], j = 1, 2, …, n_cntrl. The signs of the individual deviations Δuⱼ are as yet unknown, because it is not known a priori which will be the most advantageous combination. For example, in the preceding situation, the appropriate a_frac is negative. In the case that the user has specified a component that is too large, it is truncated at the upper bound. Such a strategy is not difficult to implement in practice.
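The basic second-order prediction test of Figure 3.1 and (3.3) can be sketched as follows; the function and variable names are mine, and the scalars stand for one constraint and its time derivatives.

```python
def predict_no_impact(c, cdot, cddot):
    """Local prediction for one constraint value c > 0.
    Situation (b): cdot >= 0, the system is not approaching this boundary.
    Situation (c): the parabola c + cdot*s + 0.5*cddot*s**2 (s >= 0) has no
    real root, i.e. Delta = 2*c*cddot - cdot**2 > 0.
    Returns True when no boundary impact is predicted under u_nat."""
    if cdot >= 0.0:
        return True
    return 2.0 * c * cddot - cdot ** 2 > 0.0

# Example 3.1: cdot_2 = -eps < 0 and cddot_2 = 0 give Delta_2 = -eps**2 < 0,
# so the simple test signals that a control step should be taken.
eps = 0.1
impact_predicted = not predict_no_impact(1.0, -eps, 0.0)
```

A constraint that is being approached but with sufficient deceleration (e.g. c = 1, ċ = −0.5, c̈ = 0.5, so Δ = 0.75 > 0) passes the test, matching situation (c).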
The expressions for ċ, c̈ can be reduced to

ċ = GΔu + r (3.4a)

c̈ = HΔu + s, (3.4b)

where G, H ∈ ℝ^{n_cnstr × n_cntrl} and r, s ∈ ℝ^{n_cnstr}. Using these forms, we determine whether a choice of Δu can be found that will make ċ ≥ 0, and if not, then Δ > 0. More precisely, we are searching the vertices of an n_cntrl-dimensional box with sides 2u_frac,j u_max,j, centred at u = u_nat, to see if the conditions ċ ≥ 0 or Δ > 0 hold. If such a vertex exists, then we define the system to be in a state of strong approximate safety. This indicates that, based on our second-order approximation of c(t), a (small) deviation in control will likely enable the system to avoid all constraints. Unfortunately, this condition cannot be satisfied in the typical case of opposing constraints and a shrinking feasible region. Moreover, testing for strong approximate safety can be computationally expensive: it is easily seen that the number of vertices increases exponentially with the number of controls. However, a practical test can be obtained by relaxing this requirement. If Δuⱼ appears with coefficients g_ij, h_ij in the expressions (3.4a) and (3.4b) respectively, then we recompute ċᵢ, Δᵢ using the nonzero values |g_ij Δuⱼ| and |h_ij Δuⱼ|. In this case, we are searching the vertices for ċᵢ ≥ 0 or Δᵢ > 0, noting that it is possible that a different vertex would be suitable for different constraints. If such vertices can be found, we define the system to be in a condition of weak approximate safety: a small control can be expected to avoid any one constraint, but not necessarily all constraints at once.

For example, recall equations (3.2a), (3.2b) for ċ₁, c̈₁. Let u_nat = 0. The estimate of 'appropriate' control with components of magnitude u_frac,j u_max,j then uses

ċ₁ = v cos θ and c̈₁ = a_frac a_max |cos θ| + α_frac α_max v² |sin θ|

in the calculation of Δ₁ = 2c₁c̈₁ − ċ₁².
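The relaxed per-constraint vertex test just described can be sketched as follows. The matrix and vector names follow (3.4a)–(3.4b); everything else (the function name, the toy data) is mine.

```python
def weak_approximate_safety(c, G, H, r, s, du):
    """Test each constraint individually: recompute cdot_i and cddot_i with
    the most favourable signs |g_ij * du_j| and |h_ij * du_j| (the relaxation
    of the strong test), then ask whether that constraint alone satisfies
    cdot_i >= 0 or Delta_i = 2*c_i*cddot_i - cdot_i**2 > 0."""
    for i in range(len(c)):
        cdot = r[i] + sum(abs(g * d) for g, d in zip(G[i], du))
        cddot = s[i] + sum(abs(h * d) for h, d in zip(H[i], du))
        if cdot >= 0.0 or 2.0 * c[i] * cddot - cdot ** 2 > 0.0:
            continue        # this constraint is individually avoidable
        return False        # some constraint cannot be avoided even alone
    return True

# Two opposing box constraints closing in (r = [-2, -2]); a small control
# authority du = [1] with strong curvature influence H makes each one
# individually avoidable, although not necessarily both at once.
safe = weak_approximate_safety(c=[1.0, 1.0], G=[[1.0], [-1.0]],
                               H=[[3.0], [3.0]], r=[-2.0, -2.0],
                               s=[0.0, 0.0], du=[1.0])
```

With du = [0] the same data fail the test (Δᵢ = −4 < 0 for both constraints), which is exactly the opposing-pair situation that defeats the strong test.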
If the condition of weak safety holds, then the integration continues with u = u_nat. Thus far we have found this test to be of practical utility. The quantity U_frac allows for errors in constraint measurements, the model, or predictions, and potential inconsistencies therein. It can be considered a sensitivity parameter, initially specified by the user, but then dynamically altered. A full description of how it is updated appears in Section 3.4.1.

3.4 An Algorithm for Local Control

The third component of this algorithm is local control. This component of the algorithm takes over from the inertial behaviour u_nat when the viability of the system is at stake. The control to save the system from imminent death is determined by the following strategy. Suppose that the system is in a viable state q_{n−1} at time t_{n−1}, and that the prediction strategy has deemed a control step necessary. We delineate the beginning of a buffer zone by assigning constants

c_safe = c(t_{n−1}, q_{n−1}).

As an alternative to the 'unacceptable' u_nat, we obtain a new value for the control u* as an (approximate) solution to the following constrained nonlinear programming problem:

u* ∈ argmin φ(u) (3.5a)

such that

cᵢ(qₙ(u)) > 0, (3.5b)

where

φ(u) = Σ_{i=1}^{n_cnstr} φᵢ(cᵢ(qₙ(u))) (3.5c)

and

φᵢ(cᵢ) = cᵢ/c_safe,i − 1 − log(cᵢ/c_safe,i), 0 < cᵢ < c_safe,i,
φᵢ(cᵢ) = 0, cᵢ ≥ c_safe,i, (3.5d)

and (with a slight abuse of notation) c(qₙ(u)) = c(tₙ, qₙ(u)) is used to emphasize that the constraints are being viewed as functions of the control. A sketch of the barrier function (3.5d) is given in Figure 3.2 in the case of one constraint.

Figure 3.2: A sketch of φᵢ in the case of one constraint.

We note that if u* satisfies (3.5), then

Σ_{i=1}^{n_cnstr} (∂/∂u) φᵢ(cᵢ(qₙ(u*))) = 0, that is, Σ_{i ∈ Iₙ(u*)} φᵢ′(cᵢ) ∇cᵢ(qₙ(u*)) = 0, (3.6)

where Iₙ(u*) = {i | cᵢ < c_safe,i, i = 1, 2, …, n_cnstr} is the set of indices for the 'active' constraints. We have chosen a barrier that not only disallows infeasible solutions, but also serves to keep the system away from the boundaries.
We have chosen to use logarithmic barrier functions; however, other choices such as inverse or inverse-square functions are possible, depending on the desired interaction of the system with the potential function, especially near the boundary [30, 44, 45, 64].

The particular choice of the barrier function (3.5d) was made as a deterrent to a 'bang-bang' choice for the control when moving the system away from the boundary. In this way, the previous stability of the system evolving with a natural value for the control is not excessively undermined. The advantage of logarithmic barriers lies in their relatively slow divergence near the singularity. The logarithmic terms are augmented by terms that make φ a C¹ function, further enhancing this effect. Generalizations to barrier functions of higher orders of continuity are available, but any added utility in practice over (3.5d) is not clear. Moreover, (3.5d) has some nice theoretical properties, such as convexity, and it can be shown to produce intuitively correct controls in the limit of continuous observations (see Chapter 4).

The problem (3.5) is solved by means of a penalty-function method, that is, through the solution of a sequence of unconstrained minimization problems where the objective function φ is augmented with a penalty term [43]. The strategy adopted for use in practice is taken from [42, 34, 43, 52, 94], although steps are taken so that an approximate solution to (3.5) is found quickly, sometimes at the expense of accuracy. Due to the local nature of the problem, however, an extremely accurate solution offers little advantage. A more detailed description of the minimization algorithm appears in Appendix A.

The presence of the logarithmic term in the definition of the barrier potential makes the choice of a control seem akin to interior-point solution methods from optimization.
One possible interpretation in this context is that the solution of (3.5) is a particular iteration of an interior-point method to minimize the objective function Σᵢ (cᵢ/c_safe,i − 1), where the sum is taken over all i such that cᵢ < c_safe,i, subject to c(qₙ(u)) > 0. The particular iteration is the one with barrier coefficient having the value unity. In other words, the objective function to be minimized is in fact the one-norm of the differences between cᵢ and c_safe,i for i such that cᵢ < c_safe,i. Again we note that, due to the local nature of the solution and the time-varying feasibility region, there is no advantage offered by a full solution of the classical interior-point algorithm, in which the barrier coefficient μ → 0⁺. The full computation in which μ is allowed to approach zero is too computationally intensive for the job of an on-line controller. Moreover, because K(t) is evolving in time, such a complete optimization procedure cannot be justified for a fixed t.

If a solution u* to (3.5) can be found, the system (1.1a) is advanced one time step, starting from (t_{n−1}, q_{n−1}). In the absence of discretization or modelling errors, the existence of a u* guarantees that the system will be viable at the next step. We then return to the 'natural' control u_nat (at least initially) as the control to attempt for the next time step.

3.4.1 Buffer Zone and Sensitivity Parameter Maintenance

As another measure taken to improve the overall efficiency, the boundary of the buffer zone c_safe and the components of U_frac are updated as the system evolves. The quantity U_frac can be considered a sensitivity parameter in the following sense: reducing the values for the components u_frac,j implies that the control step will be invoked more readily.
Hence, the algorithm does not rely as much on the accuracy of predictions or a relatively large value of the control to maintain viability. For this reason, we do not recommend the use of large values of U_frac in practice. The setting U_frac = 0 essentially invokes a control from (3.5) at every step. When the system is forced to choose uⱼ = u_max,j for some or all components j, the stability of finding a viable solution can be undermined. For example, it may be a sign that (perhaps due to a lack of adequate planning) the intersection of the K-reachable set and the viability region is becoming very small, possibly making it harder to find a viable solution.

Starting from an initial value U_frac provided by the user, prediction and integration are carried out as described in the previous modules. There are three situations in which we update the components of U_frac. These situations, and our respective strategies for their treatment, are the following:

• If there is an unexpected constraint violation, U_frac ← ½U_frac. An unexpected constraint violation occurs when the prediction method fails to warn the control system that the next step (taken with u = u_nat) would yield an infeasible state. This could be due to a problem with the model or the prediction strategy, or due to the prediction that an application of a control of the form u_nat ± U_frac u_max yields an impact in less than one time step from a safe state.

• The choice of a control yields |uⱼ| = u_max,j for some j = 1, 2, …, n_cntrl. For such j's, u_frac,j ← ½u_frac,j. This is in fact a step to err on the side of prudence. We wish to avoid using components of size u_max for the purposes of stability of finding a solution and to reduce computational expense, because in general, obtaining optimal solutions on the boundary of |u| ≤ u_max is the most expensive optimization calculation to determine u.
In a practical partial minimization strategy used to solve (3.5), components uⱼ of iterates whose magnitudes exceed u_max,j are truncated at u_max,j. This is taken to be the final answer instead of iterating toward formal convergence to this quantity. This has produced a dramatic speedup, because needless iterations of successive minimization problems are averted in which the penalty on the violation u_max,j − |uⱼ| > 0 is increased.

• If the choice of a control is u ≈ u_nat (to within a small percentage of u_max), then we set U_frac ← 2U_frac. Here we deem that the constraints have combined themselves in such a way that the barrier function is locally flat. Hence the system is (at least temporarily) in a state where natural motion is acceptable, so the sensitivity can be relaxed. Buffer zones can also be renormalized at this point in order to deter the system from 'bouncing off' constraint boundaries. No discontinuous interaction with the barrier is caused, because the barrier function and its derivative are continuous at the location of the system when the barrier is established.

Example 3.2 (Effect of Sensitivity Parameter) Consider the mobile robot with state (10 50 0 2)ᵀ at time t = 5. Let γ = 0 and u_nat = 0. Then ċ₁ = 2, ċ₂ = −2, ċ₃ = ċ₄ = 0. In particular,

Δ₂ := 2c₂c̈₂ − ċ₂² = 0 − (−2)² = −4 < 0.

Now with U_frac = 0.5 I, we have

ċ₂ = −v cos θ = −2, c̈₂ = |−a cos θ| + v²|sin θ α| = 75,

Δ₂ = 2(222)(75) − (−2)² > 0.

In this case, the integration would continue with the control u = u_nat. However, for a sufficiently small value of U_frac, the recomputed c̈₂ is small enough that Δ₂ < 0, in which case a buffer zone would be created and an appropriate u found from solving the minimization problem (3.5). △

3.5 Summary

The various components of the algorithm are combined as follows. Recall that the basic strategy is to proceed with default motion until a buffer zone has been established. The control function u(t) is taken to be piecewise constant on each interval [t_{n−1}, tₙ).
For definiteness but without loss of generality, we assume that the buffer zone is not activated. Starting from a viable state q_{n−1} at time t_{n−1}, and a sensitivity parameter U_frac (initialized by the user):

1. Predict the safety of proceeding with u_{n−1} = u_nat(t_{n−1}).

2. If weak approximate safety holds, then

(a) Attempt integration of (1.1a) over the time interval [t_{n−1}, tₙ) using the default control u_{n−1} = u_nat(t_{n−1}).

(b) If the resulting state qₙ is viable, then (step is successful)

i. Accept u_{n−1}, qₙ;
ii. Increment n;
iii. Break to Step 7;

else (unexpected constraint violation)

i. Reduce U_frac;
ii. Continue to Step 3.

3. If the buffer zone is not activated, then activate it by setting c_safe = c(t_{n−1}, q_{n−1}).

4. Solve the barrier minimization problem (3.5) to obtain a control u*(t_{n−1}).

5. Attempt integration of (1.1a) over the time interval [t_{n−1}, tₙ) using the updated control u_{n−1} = u*(t_{n−1}).

6. If qₙ is viable, then

(a) Accept u_{n−1}, qₙ;
(b) Increment n;
(c) For each j such that uⱼ = u_max,j, reduce u_frac,j;
(d) If u_{n−1} = u_nat(t_{n−1}), increase U_frac;
(e) Continue to Step 7;

else (failure) the system dies.

7. If u_{n−1} = u_nat(t_{n−1}), then deactivate the buffer zone.

Chapter 4

Theoretical Considerations

Trial and error is convincing, but mathematics is conclusive. - Gil Strang

We now present some theoretical results for the algorithm, focusing on the control obtained from the barrier function in certain identifiable situations. We show explicitly that the control produced maintains the relative position of the system within the viability set from one time step to the next. These results are given for model systems with velocity and acceleration controls in various viability regions, such as n-dimensional rectangular or spherical spotlights and shrinking regions. In this chapter, we sometimes suppress the additional dependence on qₙ in the notation of (3.5b) and write simply cᵢ = cᵢ(u).
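As a concrete reference for the analysis that follows, one C¹ barrier term of the form (3.5d) can be sketched numerically for scalar c; the function name is mine.

```python
import math

def barrier_term(c, c_safe=1.0):
    """One term of the barrier (3.5d): identically zero (with zero slope)
    outside the buffer zone c >= c_safe, and divergent as c -> 0+.
    With c_safe = 1 this is the reduced barrier phi(c) = c - 1 - log(c)."""
    x = c / c_safe
    return x - 1.0 - math.log(x) if 0.0 < x < 1.0 else 0.0
```

Note that both the value and the derivative of barrier_term vanish at c = c_safe, which is the C¹ property that allows buffer zones to be established or renormalized without a discontinuous interaction with the barrier.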
We note that qₙ(u) is linear in u if, for example, a forward Euler discretization is applied to (1.1a):

qₙ = q_{n−1} + Δt [ f(t_{n−1}, q_{n−1}) + B(t_{n−1}, q_{n−1}) u ],

or if a composite Euler discretization (cf. [109] and Chapter 5) is applied to a system such as (5.1) (cf. page 73) with f₁(t, q₂) being linear in q₂.

It is useful to recall a few definitions and standard results from convex analysis [113, 97].

Definition 4.0.1 (Convex Function) A function f : Dom(f) ⊆ ℝ^{n_cntrl} → ℝ is said to be convex if Dom(f) is convex and

f(λu + (1 − λ)v) ≤ λf(u) + (1 − λ)f(v)

for all u, v ∈ Dom(f) and λ ∈ [0, 1]. A function f is said to be strictly convex if the above inequality is strict for all u ≠ v and λ ∈ (0, 1).

Lemma 4.1 Let f : I → ℝ be differentiable, where I ⊆ Dom(f) is an interval in ℝ. If f′ is nondecreasing, then f is convex.

Lemma 4.2 (Finite Sum of Convex Functions) The sum of a finite number of convex functions is convex.

Theorem 4.1 (Uniqueness of Minimum) If f is strictly convex, then f has at most one global minimum point.

Definition 4.0.2 (Convex Program) The problem of minimizing f(u) subject to the linear equality constraints Au = b and the inequality constraints c(u) ≥ 0 is a convex program if f(u) and −c(u) are convex.

We will not need to incorporate the use of equality constraints in our analysis, because we are concerned with solutions of ODEs (cf. (1.2)) subject only to inequality constraints. However, we will make use of the following property of convex programs:

Theorem 4.2 (Local Constrained Minimizer of Convex Program) If u* is a local constrained minimizer of a convex programming problem, then it is also a global constrained minimizer.

This theorem is useful because it allows us to categorize situations where more than one control solves (3.5) as multiple, as opposed to spurious, which would leave the impression that the minimum is undesirable.
More details on this idea will be given in the theoretical analysis of the spherical spotlight problem.

4.1 Convexity of Barrier

In this section, we show that the barrier function (3.5d) is convex if the constraints are linear functions of the control u. For simplicity of notation, let c_safe = 1 and consider for now only scalar-valued u and c. The controls u belong to a set I, where I is of the form [−u_max, u_max]. However, the barrier function φ is only defined for u ∈ U ⊆ I, where U = {u ∈ I | c(u) > 0}. We note that U is convex if c is linear. We will call U the set of admissible controls, also denoted by Dom(φ). With these simplifications, φ takes the form

φ(c) = c − 1 − log c, 0 < c < 1,
φ(c) = 0, c ≥ 1. (4.1)

We will call this less complicated version of φ the reduced barrier function.

Theorem 4.3 (Convexity of Reduced Barrier Function) Let the control u be admissible. If the constraint c(u) is linear, then the reduced barrier function (4.1), φ(u) = φ(c(u)), is convex.

Proof: The proof consists of two parts. We begin by showing that φ(c) is convex. From (4.1),

φ′(c) = 1 − 1/c, 0 < c < 1,
φ′(c) = 0, c ≥ 1.

Clearly, φ′(c) is nondecreasing for all c > 0. Thus, by Lemma 4.1, φ(c) is convex. The proof is now made complete by noting that, for all 0 < λ < 1 and u₁, u₂ ∈ U (so that c(λu₁ + (1 − λ)u₂) > 0), we have

φ(c(λu₁ + (1 − λ)u₂)) = φ(λc(u₁) + (1 − λ)c(u₂)) ≤ λφ(c(u₁)) + (1 − λ)φ(c(u₂)).

Hence, φ(u) is convex. □

This result can be extended to the case of multivariate input. In the following, let u = (u₁, u₂, …, u_{n_cntrl}).

Lemma 4.3 (Convexity of Reduced Barrier with Multivariate Input) Let the control u be admissible and the constraint c(u) be linear. Then the reduced barrier function φ(u) is convex.

Proof: The proof is a straightforward extension of Theorem 4.3. □

Theorem 4.4 (Convexity of Barrier) If the constraints cᵢ(u) are linear in u for i = 1, 2, …, n_cnstr, then the barrier function (3.5d) is convex.

Proof: The proof follows immediately from Lemmas 4.2 and 4.3. □

4.2 Simple Examples

In order to better understand the behaviour of the algorithm proposed in Chapter 3, we examine what control is computed upon minimization of (3.5) for several theoretical examples. We begin with a few examples for systems with n degrees of freedom in situations involving typical programmed constraints, such as moving or shrinking rectangular 'spotlights'. We then proceed to give results for systems with more general dynamics and constraints, including that for motion within a spherical spotlight.

To simplify the exposition, we first remove the effect of the prediction strategy. Note that this is only of theoretical interest, because in order for this setting to work in practice, it would also require continuous observational data and control (and perhaps an idealized computer). Hence, its scope is limited in practice, but it is a useful exercise in aiding our understanding of the control selection.

Example 4.1 (Rectangular 'Spotlight' with Velocity Controls) Consider the control system

ẋ = u, (4.2a)

u ∈ U = {u | ‖u‖_∞ ≤ u_max}, (4.2b)

x(0) = x₀, (4.2c)

where x, u ∈ ℝⁿ. Let the viability set K(t) be defined by (1.1c), with constraint functions c(t) ∈ ℝ^{2n},

c(t, x(t)) = ( c⁻(t) ; c⁺(t) ) = ( r − (x(t) − x_c(t)) ; r + (x(t) − x_c(t)) ),

where c⁻, c⁺, r ∈ ℝⁿ. This is the n-dimensional analogue of a moving rectangular spotlight. The objective is to choose a control u such that each component xᵢ(t) of the system remains within a predetermined distance rᵢ of a moving point x_{c,i}(t). We know the exact solution of (4.2) on any interval [0, t] where u is constant: x(t) = x₀ + ut. As we did in Chapter 3, we also regard this solution as a function of the control, x(t; u) = x₀ + ut, and correspondingly, the constraints as c = c(t, x(t; u)).
We now explicitly compute the control obtained from solving (3.5) on such an interval and interpret it in the limit $t \to 0$. This is an idealized situation that represents the continuous limit of the observation rate of $x_c(t)$, and also a continuous control strategy u(t). We begin by delineating the buffer zone:

  c_{\rm safe} = \begin{pmatrix} r - (x_0 - x_c(0)) \\ r + (x_0 - x_c(0)) \end{pmatrix}.

The barrier function to be minimized is

  \phi = \sum_{i=1}^{n_{\rm cnstr}} \phi_i, \qquad \text{where} \qquad \phi_i = \frac{c_i}{c_{{\rm safe},i}} - 1 - \log \frac{c_i}{c_{{\rm safe},i}}.

We note that only one of $c_i^-$, $c_i^+$ can be active at any given time for fixed i. For simplicity but without loss of generality, we can assume the active constraints correspond to $c^-$. Now, from (3.6), the necessary (and, because $\phi$ is now in fact strictly convex, sufficient) condition for an optimum yields

  r - (x_0 - x_c(0)) = r - (x_0 + ut - x_c(t)).

Hence, we have

  u = \frac{x_c(t) - x_c(0)}{t}.

In the limit $t \to 0^+$, we can generalize this for all t to yield the control strategy

  u_i(t) = \begin{cases} \dot{x}_{c,i}(t), & |\dot{x}_{c,i}(t)| \le u_{\max}, \\ \mathrm{sgn}(\dot{x}_{c,i}(t))\, u_{\max}, & \text{otherwise}. \end{cases}

Thus, the control obtained from the minimization (3.5) matches the velocity of the system with that of the centre of the spotlight as closely as possible. It is optimal in the following sense: if $\| \dot{x}_c \|_\infty \le u_{\max}$, the strategy always yields a globally viable solution. We note that, because this is not a differential game where the motion of the n-dimensional spotlight may be governed by an evil adversary [60, 111, 73], a condition such as $\| \dot{x}_c \|_\infty \le u_{\max}$ can be enforced by the programmer, and hence represents a guide in designing the program. When $|x_i(t) - x_{c,i}(t)| = r_i$, it is possible to interpret this as a heavy solution because it is the control of smallest size that will preserve viability.
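A minimal simulation of the saturated velocity-matching strategy of Example 4.1 (one dimension; the spotlight trajectory below is hypothetical, chosen so that $\|\dot x_c\|_\infty < u_{\max}$):

```python
import math

u_max = 1.0
r = 0.5                                    # spotlight half-width
xc = lambda t: 0.8 * math.sin(t)           # spotlight centre; |xc'| <= 0.8 < u_max
xc_dot = lambda t: 0.8 * math.cos(t)

def control(t):
    # Saturated velocity matching: the continuous-limit strategy of Example 4.1.
    v = xc_dot(t)
    return v if abs(v) <= u_max else math.copysign(u_max, v)

# Forward-Euler simulation of x' = u, starting at the spotlight centre.
dt, x, t = 1e-3, 0.0, 0.0
max_dev = 0.0
for _ in range(20000):                     # integrate to t = 20
    x += dt * control(t)
    t += dt
    max_dev = max(max_dev, abs(x - xc(t)))

print(max_dev)    # stays far below r: the solution remains viable
```

Since the spotlight velocity never exceeds $u_{\max}$ here, the control is never saturated and the system tracks the centre to within the integration error.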
Example 4.2 (Rectangular 'Spotlight' with Acceleration Controls) In a similar fashion to the previous example, consider now

  \ddot{x} = u,   (4.4a)
  u \in U,   (4.4b)
  x(0) = x_0,   (4.4c)
  \dot{x}(0) = v_0.   (4.4d)

Let U and the viability set K(t) be as in Example 4.1. Here, the exact solution over a time interval [0, t] with constant u is $x(t; u) = x_0 + v_0 t + \frac{1}{2} u t^2$. Again, the barrier function $\phi$ is strictly convex in this situation, so (3.6) implies that the necessary and sufficient condition for (3.5) to be minimized is

  u = \frac{x_c(t) - x_c(0) - v_0 t}{\frac{1}{2} t^2}.

Expanding $x_c(t)$ about t = 0 yields

  u(0) = \ddot{x}_c(0) + 2\,\frac{\dot{x}_c(0) - v_0}{t} + O(t).   (4.5)

We note that as $t \to 0^+$ for $v_0 \ne \dot{x}_c(0)$, the norm of u(0) grows without bound. Hence, some care needs to be exercised in its interpretation. The interpretation of (4.5) as a continuous control strategy breaks down into two parts because there are two situations in which to consider (4.5). In the first situation, we consider the case $v_0 \ne \dot{x}_c(0)$. For t sufficiently small, the value of the components of the feasible control will reach one of the limits imposed by (4.2b). At this point, the components of the control will satisfy $u_i = \mathrm{sgn}(\dot{x}_{c,i}(0) - v_{0,i})\, u_{\max}$. Hence, the control uses maximum allowable accelerations in such a way as to match the velocity of the spotlight. Now let us assume that the parameters $x_0$, $v_0$, $x_c(0)$, $\dot{x}_c(0)$, $u_{\max}$ are such that the velocities of the system and the spotlight can be matched after time $\tau$. For example, in the case n = 1, this would imply that $\tau$ satisfies

  \dot{x}_c(0) + \int_0^\tau \ddot{x}_c(t)\, dt = v_0 + \mathrm{sgn}(\dot{x}_c(0) - v_0)\, u_{\max}\, \tau.   (4.6)

Without loss of generality, we can now identify the control strategy for $t \ge \tau$ with the second possible situation at t = 0, namely, $v_0 = \dot{x}_c(0)$. Now, $u(t) = \ddot{x}_c(t)$. Thus, if $|\ddot{x}_c(t)| \le u_{\max}$, the continuous control strategy can be summarized as follows:

  u(t) = \begin{cases} \mathrm{sgn}(\dot{x}_c(t) - v(t))\, u_{\max}, & 0 \le t < \tau, \\ \ddot{x}_c(t), & t \ge \tau, \end{cases}   (4.7)

where $\tau$ satisfies (4.6). The control obtained from the minimization (3.5) uses a maximal acceleration to match the velocity of the system to that of the spotlight centre. After this is achieved, it matches the spotlight acceleration.
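Strategy (4.7) can be sketched numerically. The spotlight data below are hypothetical, chosen with constant spotlight acceleration so that the switch time $\tau$ implied by (4.6) has a closed form for comparison:

```python
import math

u_max = 2.0
xc_acc = 0.1                               # constant spotlight acceleration
xc_vel = lambda t: 1.0 + xc_acc * t        # spotlight centre velocity
v0 = 0.0                                   # system velocity differs initially

def control(t, v):
    # Strategy (4.7): maximal acceleration until the velocities match
    # (to a small tolerance), then match the spotlight acceleration.
    if abs(xc_vel(t) - v) > 1e-3:
        return math.copysign(u_max, xc_vel(t) - v)
    return xc_acc

dt, t, v = 1e-4, 0.0, v0
while abs(xc_vel(t) - v) > 1e-3:           # velocity-matching phase
    v += dt * control(t, v)
    t += dt

tau = t
# From (4.6) with constant acceleration: xc_vel(tau) = v0 + u_max*tau, so
tau_exact = (xc_vel(0.0) - v0) / (u_max - xc_acc)
print(tau, tau_exact)    # the simulated switch time approximates tau_exact
```

After the switch, the control simply tracks $\ddot x_c$, so the relative position is frozen, consistent with the discussion that follows.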
We now show that, for every finite interval of length t upon which the spotlight acceleration $\ddot{x}_c(t)$ is constant and satisfies $\| \ddot{x}_c(t) \|_\infty \le u_{\max}$, the relative position of the system within the spotlight remains the same. The change in the position of the centre of the spotlight in time t is simply

  x_c(t) - x_c(0) = \dot{x}_c(0)\, t + \tfrac{1}{2} \ddot{x}_c(0)\, t^2.

Using (4.5), the corresponding change in the robot position is

  x(t) - x_0 = v_0 t + \tfrac{1}{2}\left( 2\,\frac{\dot{x}_c(0) - v_0}{t} + \ddot{x}_c(0) \right) t^2 = \dot{x}_c(0)\, t + \tfrac{1}{2} \ddot{x}_c(0)\, t^2.

In a practical setting where the sampling time cannot be made arbitrarily small, this means that the relative position of the system within the spotlight is preserved from mesh point to mesh point while the velocity matching takes place. Thus, we note that, as is common with numerical optimization codes that solve discretizations of continuous problems subject to state-dependent inequality constraints, viability can only be guaranteed at grid points. The possibility of infeasibility exists in between. However, this can generally be mitigated by a suitable mesh refinement.

Similar results can be proved for the damped system $\ddot{x} = u - \gamma \dot{x}$, where the control to be applied on the subinterval [0, t] is obtained as

  u = \frac{x_c(t) - x_c(0) - \frac{v_0}{\gamma}\left(1 - e^{-\gamma t}\right)}{\frac{1}{\gamma}\left( t - \frac{1}{\gamma}\left(1 - e^{-\gamma t}\right) \right)}.

Expanding in powers of $\gamma t$, we obtain

  u = \frac{x_c(0) + \dot{x}_c(0)\, t + \tfrac{1}{2}\ddot{x}_c(0)\, t^2 + \cdots - x_c(0) - \frac{v_0}{\gamma}\left(\gamma t - \tfrac{1}{2}\gamma^2 t^2 + \cdots\right)}{\frac{1}{\gamma}\left(t - \frac{1}{\gamma}\left(\gamma t - \tfrac{1}{2}\gamma^2 t^2 + \cdots\right)\right)} \;\to\; \ddot{x}_c(0) + 2\,\frac{\dot{x}_c(0) - v_0}{t} + \gamma v_0,

as $\gamma t \to 0^+$, and we can see explicitly how the damping is accounted for when compared to the undamped case.

As a particularly important example, we now consider the problem of the n-dimensional rectangular spotlight that is moving and contracting. The ability to write a program by constructing regions like this one is a very useful aspect of the programmed-constraint environment. The ability to compute a control that produces a viable solution is a very useful aspect of a DAI solver.
Example 4.3 (Shrinking Rectangular 'Spotlight') Consider now the control system (4.2) subject to the constraints

  c(t, x(t)) = \begin{pmatrix} r - g(t) - (x(t) - x_c(t)) \\ r - g(t) + (x(t) - x_c(t)) \end{pmatrix},

where the function $g(t) \in \mathbb{R}^n$ determines the shrinking properties of the spotlight. For clarity, we assume that $g(0) = 0$ and $g'(t) \ge 0$. Typically, the program will end after some time T with $g(T) = r - \epsilon$, where $\epsilon$ represents a vector of desired final tolerances for each component of the system from the spotlight centre. In this case, we note that all the constraints will be active after time t. The necessary conditions (3.6) now imply that the control satisfies

  \frac{r_i - g_i(t) + (x_{c,i}(t) - x_i(t; u))}{r_i - g_i(t) - (x_{c,i}(t) - x_i(t; u))} = \frac{r_i + (x_{c,i}(0) - x_{0,i})}{r_i - (x_{c,i}(0) - x_{0,i})}.

Hence, the control is chosen so that the overall ratios of the constraint values are preserved by the end of the integration step.

Although these models may seem simple, they have been used in more sophisticated applications. For example, the control system described in (4.7) has been used in [54] to describe the open chain manipulator dynamics of a SIGMA robot (a simple two-degree-of-freedom Cartesian robot made by Olivetti) for control in a time-optimal contour-following task. In addition, the forms of some models presented here with velocity controls are special cases of the canonical form of nonholonomic control systems [81, 54, 37]:

  \dot{q} = \sum_{i=1}^{n_{\rm cntrl}} g_i(q)\, u_i.

We note also that the right-hand side of this equation is linear in the control variables u. Hence, the hypotheses of the convexity theorems in Section 4.1 are satisfied for constraints c(t, q) that are linear functions of the state.

Thus far, we have dealt with examples where the constraints c were linear functions of the control u. In this case, we often made use of the convexity, and sometimes strict convexity, properties of the barrier to find a solution u.
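The ratio-preserving control of Example 4.3 can be checked directly in one dimension (all numbers below are hypothetical). Writing $e = x_c - x$ for the offset of the centre from the system, the ratio condition solves in closed form to $e = (r - g(t))\, e_0 / r$, i.e. the offset shrinks proportionally with the spotlight:

```python
# 1-D shrinking-spotlight step (Example 4.3), hypothetical data.
r, g_t = 1.0, 0.2            # half-width r and shrink amount g(t) at step end
x0, xc0, xc_t, t = 0.3, 0.0, 0.1, 0.05

e0 = xc0 - x0                # initial offset of centre from system
# Ratio condition (r - g + e)/(r - g - e) = (r + e0)/(r - e0) solves to:
e = (r - g_t) * e0 / r       # the offset shrinks with the spotlight
u = (xc_t - e - x0) / t      # control giving x(t) = x0 + u*t = xc(t) - e

lhs = (r - g_t + e) / (r - g_t - e)
rhs = (r + e0) / (r - e0)
print(u, lhs, rhs)           # lhs == rhs: constraint ratios preserved
```

Cross-multiplying confirms the closed form: $(r-g+e)(r-e_0) = (r+e_0)(r-g-e)$ holds identically when $e = (r-g)e_0/r$.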
Although the algorithm is based on discrete observations, we have given interpretations of corresponding continuous control strategies in the limit of infinitesimal sampling intervals. Our final test case for this section involves an n-dimensional 'spherical' spotlight. In this case, the control defined will not be unique. However, we will see that the minima are not spurious, but rather multiple, because each of them would yield the same acceptable solution. This corresponds to the nonuniqueness property of the solution. We will see that in practice the particular choice of solution can be influenced by the default control $u_{\rm nat}(t)$ specified by the user.

Example 4.4 (Spherical Spotlight) Consider the control system (4.2) required to move within an n-dimensional spherical spotlight with centre $x_c(t)$ and radius r, the two-dimensional version of which (with centre $(x_c(t), y_c(t))$) is described in (1.6). As usual, we begin by defining

  c_{\rm safe} = r^2 - \sum_{i=1}^n (x_{0,i} - x_{c,i}(0))^2,

and we write

  c(t; u) = r^2 - \sum_{i=1}^n (x_i(t; u) - x_{c,i}(t))^2.

The necessary conditions (3.6) for a minimum of (3.5d) lead to n nonlinear equations for the components of u:

  -2t \left( \frac{1}{c_{\rm safe}} - \frac{1}{c(t; u)} \right) (u_i t + x_{0,i} - x_{c,i}(t)) = 0.

The solution that yields a minimum of (3.5) is where u satisfies $c(t; u) = c_{\rm safe}$, that is,

  \sum_{i=1}^n (u_i t + x_{0,i} - x_{c,i}(t))^2 = \sum_{i=1}^n (x_{0,i} - x_{c,i}(0))^2.   (4.8)

This can be rewritten as

  \sum_{i=1}^n (x_i(t; u) - x_{c,i}(t))^2 = \sum_{i=1}^n (x_{0,i} - x_{c,i}(0))^2,   (4.9)

and hence we see again that the control attempts to preserve the relative distance from the centre of the spotlight from one time step to the next. To obtain some geometric intuition, consider the case n = 2. Let the (circular) spotlight have unit radius and initial position $(x_{c,1}(0), x_{c,2}(0)) = (0.9, 0)$, and assume its position at the next sampling time $\Delta t$ is $(x_{c,1}(\Delta t), x_{c,2}(\Delta t)) = (1, 0)$. Let the initial position of the system be $(x_1(0), x_2(0)) = (0, 0)$ and let $u_{\max} = 1$.
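The nonuniqueness expressed by (4.8)-(4.9) can be illustrated with the data above: any control placing the system on the circle of radius $\|x_0 - x_c(0)\|$ about the new centre attains $c(\Delta t; u) = c_{\rm safe}$. A sketch (ignoring the bound $u_{\max}$ for clarity of the constraint-level argument):

```python
import math

# 2-D circular spotlight (Example 4.4), data from the text.
r = 1.0
x0 = (0.0, 0.0)
xc0 = (0.9, 0.0)
xc1 = (1.0, 0.0)            # centre position after one sampling interval
dt = 0.1                    # hypothetical sampling interval

c_safe = r**2 - sum((a - b)**2 for a, b in zip(x0, xc0))

def c_of_u(u):
    # Constraint value c(dt; u) for the exact solution x = x0 + dt*u.
    x = tuple(a + dt * ui for a, ui in zip(x0, u))
    return r**2 - sum((a - b)**2 for a, b in zip(x, xc1))

def u_on_circle(theta):
    # Control steering to the point at angle theta on the circle of
    # radius |x0 - xc(0)| about the new centre: all such u satisfy (4.8).
    rho = math.sqrt(sum((a - b)**2 for a, b in zip(x0, xc0)))
    x_new = (xc1[0] + rho * math.cos(theta), xc1[1] + rho * math.sin(theta))
    return tuple((xn - a) / dt for xn, a in zip(x_new, x0))

ua, ub = u_on_circle(math.pi), u_on_circle(0.5 * math.pi)
print(c_of_u(ua) - c_safe, c_of_u(ub) - c_safe)   # both differences vanish
```

Two distinct controls thus achieve the same barrier value, which is exactly the multiplicity of minimizers discussed next.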
Figure 4.1 gives a surface plot of the barrier (3.5d) to be minimized in this case. From this plot, we can see how the set of minimizers satisfies a relation such as equation (4.9). The particular selection made by a numerical implementation of the controller will depend on the initial guess given to the minimization routine. This is chosen to be the current value of $u_{\rm nat}(t)$. Hence, in the case of possibly multiple extrema, the control produced will depend on the programmer's selection of $u_{\rm nat}(t)$. All solutions produce acceptable behaviour. However, it may not be desirable for solutions to 'jump' between points inside the spotlight corresponding to different values of the control satisfying (4.8). Hence, the use of $u_{\rm nat} = 0$ will tend to yield 'heavy-looking' solutions that mimic an active-set solution.

Figure 4.1: Barrier function $\phi(u_1, u_2)$ for the two-dimensional circular spotlight.

4.3 Interpretation of Local Control

When c, $\dot{c}$ are reliable predictors of the constraint behaviour over the medium to long term, our overall strategy allows the system to 'drift' within the viability set until such time as viability is at risk. The advantage of such a strategy is mainly computational; however, it shares the concept of an inertial principle from viability theory [13, 12]. It is also in agreement with our philosophy that all points in the interior of K(t) are equally acceptable.

The characteristic of the computed control is that it attempts to maintain the relative distances to the constraint boundaries at the beginning and at the end of a control step. The result of this behaviour is especially apparent in Example 4.3. In the case of nonlinear constraints, the control may not be uniquely defined. This nonuniqueness is a notable drawback with other potential function methods due to their stronger specification of the control. In such cases, the extra possibilities were spurious because they did not ultimately lead to the desired behaviour.
In our case, the minima are better referred to as multiple, because they are indeed all equally acceptable a priori. The particular solution chosen will then depend on the initial guess provided to the minimization routine. Because in practice we take this to be the value of $u_{\rm nat}(t_{n-1})$, we see that the programmer's choice of the default control is a way in which the robot behaviour can be influenced. In some situations, there can be the illusion of an active-set solution, where the system appears to slide along the boundary of the viability region. Figure 4.2 illustrates this effect in the case of the motion of a mobile robot in a spotlight: the position of the centroid of the car within the spotlight is depicted as a circle for a few successive time frames. These are in fact some results from the simulations presented in Chapter 6.

Figure 4.2: Pseudo-Active-Set Behaviour of a Nonholonomic System: Relative (x, y) Position of Mobile Robot in Spotlight at Consecutive Time Steps

Our final comment on the characterization of the control selected by our algorithm deals with its relation to the barrier trajectory (or central path) of interior-point methods applied to dynamic programming problems. Let us quickly recall a few relevant concepts. Consider the problem of minimizing a real-valued function f(x) such that $c_i(x) \ge 0$ for $i = 1, 2, \ldots, n_{\rm cnstr}$. The generic barrier algorithm solves a sequence of problems¹

  \min_x \; f(x) - \mu_k \sum_{i=1}^{n_{\rm cnstr}} \log c_i(x),

for a decreasing sequence $\{\mu_k\} \to 0^+$. Under suitable conditions, it can be shown that the sequence $x(\mu_k)$ of barrier minimizers not only converges to the optimum $x^*$, but that it also defines a smooth path [41] called the barrier trajectory, or central path as it is known in the case of linear programming. The situation is illustrated in Figure 4.3. We note that following such a path is often in conflict with moving toward the geometrical or analytic centre defined by $\mu_k \to \infty$.

¹It is implicitly assumed that $c_i(x) > 0$ for $i = 1, 2, \ldots, n_{\rm cnstr}$.
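A one-dimensional caricature (our own toy example, not from the text): for min f(x) = x subject to x > 0, the barrier problem has the closed-form minimizer $x(\mu) = \mu$, so the central path approaches the boundary optimum $x^* = 0$ as $\mu \to 0^+$:

```python
# Central-path sketch for: minimize f(x) = x subject to c(x) = x > 0.
# The barrier problem min_x x - mu*log(x) has stationarity condition
# 1 - mu/x = 0, so the barrier trajectory is x(mu) = mu.
mus = [1.0, 0.1, 0.01, 0.001]
path = [mu for mu in mus]                      # x(mu) = mu in closed form

# Check stationarity along the path and the monotone approach to x* = 0.
stationarity = max(abs(1.0 - mu / x) for mu, x in zip(mus, path))
print(path, stationarity)
```

Here the optimum lies on the constraint boundary, as in the linear programming case mentioned in the text, and the path never leaves the strict interior for any positive $\mu$.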
Indeed, in the case of linear programming, the optimum is guaranteed to be on the constraint boundary. We note that, in their haste to converge to the optimum $x^*$, practical algorithms do not completely solve the unconstrained minimization problem for a given $\mu_k$. Rather, they terminate that part of the solution process early and reduce the barrier parameter before continuing. This leads to the appropriate name of path-following methods [124, 117]. The motivation is that there is nothing special to be gained from following the barrier trajectory exactly.

Figure 4.3: The barrier trajectory, $x_\mu$.

We share this philosophy from the computational point of view, because we want an efficient method. Also, our strategy is to not specify the control too strongly. Solving the barrier problem for one 'smallish' value of $\mu$ also corresponds to the shortcut method of Fletcher [44], by which an approximate solution to a dynamic programming problem is obtained for a single value of the parameter appearing in an augmented penalty term. In this way, it can be interpreted as a shortcut solution towards the minimization of the sum of the barrier terms (taken over all i such that $c_i < c_{{\rm safe},i}$), subject to $c(x) > 0$. The presence of the logarithmic barrier term makes our solution procedure a path-following method, but in a different sense than is usual. Usually, iterates of a path-following method move towards the optimum in the direction of the barrier trajectory. However, the feasibility set for such problems is static. Because we consider the more general case of a viability set that varies in time, it is necessary to generalize the concept of the barrier trajectory to a barrier graph, thus including time. The barrier graph can be defined as

  \mathrm{Graph}(x_\mu) = \{ (t, x_\mu) \in \mathbb{R} \times \mathbb{R}^n \mid x_\mu \text{ is on the barrier trajectory at time } t \}.
Our method then follows the path of a particular point on each barrier trajectory (namely, the point with $\mu_k = 1$) through time. We feel that this is a natural strategy for a controller designed to solve a DAI from robot programming: it is a compromise between not getting too close to the boundary while not expending too much effort to move toward the (nonstationary) analytic centre.

Chapter 5

Delay-Free Discretizations

If you write some software which is half-way useful, sooner or later someone will use it on discontinuities.
- A. R. Curtis

The state equations of the DAI problem form a differential system (1.1a). A discretization of (1.1a) is required for two purposes. First, it allows for the possibility of numerical simulation. Second, numerical integration can be used for off-line verification or on-line, super-real-time simulation.

5.1 Motivation

Consider a differential system that is separable as follows:

  \dot{q}_1 = f_1(t, q_2),   (5.1a)
  \dot{q}_2 = f_2(t, q_1, q_2) + B_2(t, q_1, q_2)\, u.   (5.1b)

There is intuitive appeal in choosing a discretization of (5.1) that is also delay free. There is a discretization delay of the influence of the choice of control function u propagated between stages of an explicit Runge-Kutta method applied in the standard way. In general, if the differential equation for a state variable x must be differentiated (p - 1) times before it contains a control variable, then there is a (p - 1)-stage delay in the influence of a control on x. Correspondingly, constraints c(x) are said to have index p. As an illustration, consider a standard application of the forward Euler scheme to (5.1):

  q_{1,n} = q_{1,n-1} + \Delta t\, f_1(t_{n-1}, q_{2,n-1}),
  q_{2,n} = q_{2,n-1} + \Delta t \left( f_2(t_{n-1}, q_{1,n-1}, q_{2,n-1}) + B_2(t_{n-1}, q_{1,n-1}, q_{2,n-1})\, u_{n-1} \right).

Thus, the discretization renders $q_1$ independent of u until after the first stage (which, in this extreme case, also corresponds to an entire time step).
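This delay, and its removal by the composite Euler scheme discussed next, can be seen in a two-line experiment on the special case $\dot q_1 = q_2$, $\dot q_2 = u$:

```python
def forward_euler_step(q1, q2, u, dt):
    # Standard explicit Euler on (5.1) with f1 = q2, f2 = 0, B2 = 1.
    return q1 + dt * q2, q2 + dt * u

def composite_euler_step(q1, q2, u, dt):
    # Composite Euler: forward Euler for q2, then the q1 update uses the
    # freshly computed q2, so the control reaches q1 within the same step.
    q2_new = q2 + dt * u
    q1_new = q1 + dt * q2_new
    return q1_new, q2_new

dt = 0.1
a = forward_euler_step(0.0, 0.0, u=1.0, dt=dt)
b = forward_euler_step(0.0, 0.0, u=-1.0, dt=dt)
c = composite_euler_step(0.0, 0.0, u=1.0, dt=dt)
d = composite_euler_step(0.0, 0.0, u=-1.0, dt=dt)
print(a[0], b[0])   # identical: one full step of delay before u affects q1
print(c[0], d[0])   # differ: the discretization is delay free
```

With standard forward Euler, opposite controls produce identical $q_1$ after one step; with the composite scheme, they do not.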
Moreover, this effect will occur every time there is a discontinuous jump in the control. In order to have a time-continuous dependence for all components of the state on the control, the composite Euler scheme may be employed [109, 49]. This scheme uses forward Euler on the differential equation for $q_2$ and backward Euler on the differential equation for $q_1$:

  q_{2,n} = q_{2,n-1} + \Delta t \left( f_2(t_{n-1}, q_{1,n-1}, q_{2,n-1}) + B_2(t_{n-1}, q_{1,n-1}, q_{2,n-1})\, u_{n-1} \right),
  q_{1,n} = q_{1,n-1} + \Delta t\, f_1(t_{n-1}, q_{2,n}).

It is easy to see that this modification now makes $q_1$ dependent on u with no numerical delay. We note that delays between issuance and execution of commands are common in practice. However, the choice of a delay-free discretization does not compound this practical issue with an artificial component. The system (5.1) can be viewed as an instance of a partitioned system of ordinary differential equations. The composite Euler scheme is then a simple discretization scheme for such a system. We now generalize such composite Runge-Kutta schemes to higher order, in the broader context of implicit-explicit (IMEX) schemes [7] applied to more general systems than (5.1).

5.2 Implicit-Explicit Runge-Kutta Methods

When a set of differential equations involves terms of different types, it is a natural idea to employ different discretizations for them. Such schemes are often well suited for the time integration of convection-diffusion (or hyperbolic-parabolic) equations, such as

  u_t = u u_x + \nu \Delta u, \qquad \nu > 0.   (5.2)

Discretization of the spatial derivatives yields a very large ODE system in time,

  \dot{u} = f(u) + g(u),   (5.3)

where, in this case, f corresponds to the discretization of the convection (hyperbolic) term, $u u_x$, and g corresponds to the discretization of the diffusion (parabolic) term, $\nu \Delta u$. Typically, f is highly nonlinear but nonstiff, whereas g is linear but stiff.
Hence, a fully implicit method would be complicated by questions of asymmetry and indefiniteness in the Jacobian of f, and a fully explicit treatment would suffer from a severe time-step restriction because of the stiffness of g. However, f can usually be adequately handled by an explicit scheme, and the implicit system for g is often linear, symmetric, positive definite, and sparse. An IMEX scheme consists of applying an implicit discretization for g and an explicit one for f.

Linear multistep IMEX schemes have been used by many researchers, especially in conjunction with spectral methods [65, 28]. Schemes of this type have been studied as far back as the late 1970s [118, 36]. More recently, instances of these methods have been successfully applied to the incompressible Navier-Stokes equations [63], and in environmental modelling studies [62]. A systematic, comparative study for PDEs of convection-diffusion type was carried out in [7], and a corresponding study for reaction-diffusion problems arising in morphology is reported in [100]. An early example of an IMEX RK scheme appears in the context of a partitioned RK method [48]. IMEX RK schemes have also appeared in very specialized cases as explicit symplectic Runge-Kutta methods for separable Hamiltonian systems [98, 49].

The use of strongly damping schemes is required, for example, to reduce aliasing in poorly resolved computations. Schemes that do not possess such attenuation properties can produce qualitatively incorrect solutions, even upon refinement [100]. For diffusion-dominated problems, schemes utilizing Backward Differentiation Formulae (BDF) for g have optimal damping properties. The corresponding IMEX scheme is a semi-explicit BDF (SBDF) [118, 36, 63]. However, when diffusion is not dominant, the choice of method is less clear.
Ideally, one would like a dissipative scheme for the hyperbolic term, so that the resulting IMEX scheme would have good stability and smoothing properties, independent of g(u). Yet, it is well known that this necessitates looking for schemes of order (and hence, with backward steps) at least three [7]. The main difficulty plaguing multistep IMEX schemes is the possibility that relatively small time steps become necessary due to stability restrictions (e.g. [116]). Such restrictions become worse when working with higher-order multistep schemes, as demonstrated in [7]. On the other hand, the stability region of explicit Runge-Kutta schemes actually increases when three- and four-stage schemes are contemplated. This is another natural motivation for us to investigate the development and performance of higher-order IMEX Runge-Kutta schemes.

When the system (5.3) arises from a PDE in more than one spatial variable, the simplicity and efficiency of solving the algebraic equations corresponding to the implicit part of the discretization at each time step are of paramount importance. In [7], it is shown that, under appropriate circumstances, only slightly more than one multigrid W-cycle per time step is needed for certain IMEX schemes. The goal here would be for such a performance per Runge-Kutta stage. Hence, we concentrate on diagonally implicit Runge-Kutta (DIRK) schemes for g(u), paying particular attention to their attenuation properties. Because the effective order of general DIRK schemes drops to one for very stiff ODEs [50], we do not expect the Runge-Kutta schemes to be competitive with SBDF in the limit where g(u) is heavily dominant and very stiff. However, the Runge-Kutta schemes developed here are shown to have excellent properties for other parameter ranges [10]. It is well known that, despite its recent rise in popularity, the use of Runge-Kutta schemes for the integration of time-dependent PDEs has some drawbacks.
One problem is a loss of accuracy when time-dependent inflow boundary conditions are prescribed. In [29], it is shown that the conventional (and intuitive) method of imposing inflow boundary conditions at intermediate stages reduces the accuracy to first order in space near the inflow boundary, and to second order overall, independent of the order of accuracy of the spatial-difference operator. A remedy, called the simultaneous approximation term procedure, is proposed in [1]. Note also that, in the (usual) case where the accuracy of the solution at the end of the step is higher than at internal stages, the accuracy to which these internal stages must be calculated (say, by an iterative method such as multigrid) cannot be lowered.

The system (5.3) can be cast as a partitioned system,

  \dot{v} = f(v, w),   (5.4a)
  \dot{w} = g(v, w),   (5.4b)

with $u = v + w$, $f = f(v + w)$, and $g = g(v + w)$. The IMEX Runge-Kutta schemes derived below can thus be seen as partitioned Runge-Kutta schemes. Their orders are discussed in [49], §II.15. They can also be seen as a particular class of splitting methods. We now develop some members of the new IMEX Runge-Kutta schemes. The schemes divide naturally into two classes: those whose last internal stage is identified with the solution at the end of the next time step, and those where an additional quadrature is used at the end of the step. The first class is particularly good for stiff problems, but it is the second class that seems to yield some of the more promising variants for a wide range of parameters. In particular, we note the performance of the scheme identified as (3,4,3). As is described below, the labelling denotes three internal stages for the implicit formula, four (effective) stages for the explicit formula, and a combined accuracy of order three.

5.3 Construction of IMEX RK Methods

We now describe the construction of some IMEX Runge-Kutta schemes for equation (5.3), beginning with some notation. For g, we consider an implicit s-stage DIRK scheme [50] with coefficients $A \in \mathbb{R}^{s \times s}$, $c, b \in \mathbb{R}^s$, in the usual Butcher tableau,

  c | A
    | b^T
For g, we consider an implicit s-stage DIRK scheme [50] with coefficients A £ 1ZSXS, c,b £ 1ZS, in the usual Butcher tableau, 5.3 Construction of I M E X R K Methods c A bT 78 The Butcher tableau is a compact way of characterizing a Runge-Kutta method. When applied to y = f(t,y), the nth integration step from the point (tn-i,yn-i) with stepsize At is written as Ki = f (^t + CiAt,Y^aijKj^ s Vn = Vn-1 + &tYlbJKJ-Let a = s + 1. For / , we consider an (s + l)-stage explicit scheme with the abscissae c = and coefficients A G TZaxa,be Vf. To cast the DIRK scheme V c / as an ( 5 + l)-stage sc obtaining the tableau as well, we can pad the s 0 0 0 0 . . . 0 C l 0 an 0 . . . 0 C 2 0 « 2 1 « 2 2 . . . 0 Cs 0 « S 2 &SS 0 & 2 . . . 6S Referring to the coefficients of this padded tableau a s i 6 1ZaXa, b G 7Za, c G Vf, we see that c = c. This property simplifies significantly the form of the order conditions on the method's coefficients due to coupling between the individual schemes. An algorithm to perform one step from tn-\ to tn = + k of the IMEX scheme is given as follows: 1. Set K\ = /On-l)- (5.5a) 79 2. For i = 1,..., s do: • Solve for K{. Ki = 5(«(t))> (5.5b) where i i = w n _i + k ] P ai,iKj + k "-i+ijKj- (5.5c) j=i i=i • Evaluate = /(«(,•)). (5.5d) 3. Finally, evaluate wn = + k + k bjKj. (5.5e) i=i j=i We note that in Step 1, a term corresponding to Ko is ignored (because it is irrelevant computationally). However, it lends further support to the choice of implicit-explicit (and not vice versa) as the name for these schemes. We now consider two special cases, leading to two sub-families of methods, which satisfy one of the following conditions: 1.6 = 6 (and in particular b\ = 0). Thus, the quadrature step (5.5e) reduces to un = un-i + k^ bj(Kj + Kj+i)- (5.6) 3=1 • 2. bs+\ — 0. Hence, Ks+i need not be evaluated. 
Furthermore, if

  b_j = a_{sj}, \qquad \hat{b}_j = \hat{a}_{s+1,j}, \qquad j = 1, \ldots, s   (5.7)

(implying that the implicit scheme is stiffly accurate and $c_s = 1$), then, substituting (5.7) into the expression for $u_{(s)}$, we see that

  u_n = u_{(s)}.   (5.8)

This is useful for very stiff problems. We note also that the explicit scheme can now be generated from a general explicit s-stage scheme based on the abscissae $c_1, \ldots, c_{s-1}$:

  0       | 0     0     0     ...  0
  c_1     | â_21  0     0     ...  0
  c_2     | â_31  â_32  0     ...  0
  ...     | ...
  c_{s-1} | â_s1  â_s2  â_s3  ...  0
          | b̂_1   b̂_2   b̂_3   ...  b̂_s

In (5.5b), there are s nonlinear systems to solve for the $K_i$, involving g alone. The operators to be inverted in a Newton iteration are

  I - k\, a_{ii}\, g_u,

where $g_u = \partial g / \partial u$. Assuming that the Jacobian $g_u$ is evaluated only once per step (at $t_{n-1}$), this operator is the same for each i if $a_{ii}$ is independent of i. In this case, the implicit schemes are known as singly diagonally implicit Runge-Kutta (SDIRK) schemes. However, if an iterative method like one-cycle multigrid is used for the implicit scheme, then perhaps it is less critical for the overall efficiency to insist on SDIRK, and a more general DIRK scheme can be contemplated. See [50], §IV.6, for stiffly accurate SDIRK schemes. In our search for particularly accurate and stable schemes, we have allowed the possibility for the diagonal elements of A to differ from one another. Despite this, the schemes we present are all SDIRK, because there were sufficient excess degrees of freedom in the solution process to allow for this slightly more efficient choice.

Relating back to equations (5.1), consider the case of (5.3) where $u = (q^T, p^T)^T$. We apply the implicit scheme to advance q and the explicit one to advance p. Observe that if $\dot{p}$ is influenced by a control function, for instance, then at each stage i, this influence is propagated to $K_i$ from $\hat{K}_i$. As mentioned, this does not happen through the standard application of an explicit scheme ([109]). Moreover, if $f_1$ is independent of q, then the implicit stage equations can be evaluated directly, and the scheme becomes explicit! We now proceed to construct some instances of our IMEX RK family of schemes.
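One step (5.5a)-(5.5e) can be sketched in a few lines for the scalar model problem $\dot u = f(u) + \lambda u$, where the implicit stage equation is solvable in closed form because g is linear. The coefficients below are the padded forward-backward Euler pair; the generic loop, however, follows (5.5) for any tableau:

```python
# One IMEX RK step (5.5a)-(5.5e) for u' = f(u) + lam*u (scalar sketch).
# A, b: padded implicit DIRK coefficients; Ah, bh: explicit coefficients.
def imex_step(u, k, f, lam, A, b, Ah, bh):
    s = len(b)
    K, Kh = [0.0] * s, [f(u)]                      # (5.5a): Kh[0] = f(u_{n-1})
    for i in range(s):
        # (5.5c) without the diagonal implicit term, which is solved below.
        rhs = u + k * sum(A[i][j] * K[j] for j in range(i)) \
                + k * sum(Ah[i + 1][j] * Kh[j] for j in range(i + 1))
        ui = rhs / (1.0 - k * A[i][i] * lam)       # (5.5b): K_i = lam * u_(i)
        K[i] = lam * ui
        Kh.append(f(ui))                           # (5.5d)
    return u + k * sum(b[j] * K[j] for j in range(s)) \
             + k * sum(bh[j] * Kh[j] for j in range(s + 1))   # (5.5e)

# Forward-backward Euler pair, padded as in the text.
A, b = [[1.0]], [1.0]
Ah, bh = [[0.0], [1.0, 0.0]], [0.0, 1.0]

# One step on u' = -u (f = 0, lam = -1) reduces to backward Euler.
un = imex_step(1.0, 0.1, lambda v: 0.0, -1.0, A, b, Ah, bh)
print(un)    # 1/(1 + k) = 1/1.1
```

With f = 0, the step collapses to the implicit scheme alone, which gives a quick sanity check of the bookkeeping.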
We will use the triplet $(s, \sigma, p)$ to identify a scheme, where s is the number of stages of the implicit scheme, $\sigma$ is the effective number of explicit scheme stages (so $\sigma = s + 1$ for Case 1, i.e. (5.6), and $\sigma = s$ for Case 2, i.e. (5.7)-(5.8) above), and p is the combined order of the scheme.

5.3.1 Composite Euler (1,1,1)

The well-known pair of backward and forward Euler schemes

  1 | 1        0 | 0
    | 1          | 1   (5.9)

can be padded to read

  0 | 0  0      0 | 0  0
  1 | 0  1      1 | 1  0
    | 0  1        | 1  0

This yields the simplest IMEX RK method:¹

  u_n = u_{n-1} + k \left( f(u_{n-1}) + g(u_n) \right).

Note that this scheme satisfies (5.7).

¹It is also the first in the family of IMEX linear multistep methods [7].

5.3.2 Forward-Backward Euler (1,2,1)

The above form of Euler's method does not satisfy $b = \hat{b}$. The variant that does is

  0 | 0  0      0 | 0  0
  1 | 0  1      1 | 1  0
    | 0  1        | 0  1

i.e., $u_n = u_{n-1} + k \left( g(u_{(1)}) + f(u_{(1)}) \right)$, where $u_{(1)} = u_{n-1} + k g(u_{(1)}) + k f(u_{n-1})$. This scheme involves one more evaluation of f per step than the (1,1,1) scheme.

5.3.3 Implicit-Explicit Midpoint (1,2,2)

The Euler pairs are first-order accurate, and it is well documented that the (1,1,1) scheme exhibits poor (and misleading!) performance when g = 0 (cf. [7, 10, 99, 100]). The following pair of implicit-explicit schemes

  0   | 0  0        0   | 0    0
  1/2 | 0  1/2      1/2 | 1/2  0
      | 0  1            | 0    1

corresponds to applying explicit midpoint for f and implicit midpoint for g. It can be shown to be second-order accurate because the two schemes from which it is composed are each second-order accurate, and $c = \hat{c}$ [49]. The performance of this scheme is usually comparable to the unreasonably popular CNAB considered in [7], and it has better symmetry properties. Applying the scheme (1,2,2) to the separable Hamiltonian system

  \dot{q} = H_p(p), \qquad \dot{p} = -H_q(q),

we note that the resulting scheme is explicit, and it can also be shown to be symplectic. It is identical to the leapfrog/Verlet scheme [107]:

  p_n = p_{n-1} - k H_q\!\left( q_{n-1} + \tfrac{k}{2} H_p(p_{n-1}) \right),
  q_n = q_{n-1} + k H_p\!\left( \tfrac{1}{2}(p_{n-1} + p_n) \right).
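The symplectic behaviour of the (1,2,2) pair applied to a separable Hamiltonian system can be observed on the harmonic oscillator $H = (p^2 + q^2)/2$ (our own test problem):

```python
# One step of the (1,2,2) IMEX midpoint pair on q' = H_p(p), p' = -H_q(q).
# The 'implicit' stage decouples, so the step is fully explicit (leapfrog).
def imex_midpoint_step(q, p, k, Hq, Hp):
    q1 = q + 0.5 * k * Hp(p)            # explicit-midpoint stage for q
    p1 = p - 0.5 * k * Hq(q1)           # midpoint stage for p, already explicit
    return q + k * Hp(p1), p - k * Hq(q1)

# Harmonic oscillator H = (p^2 + q^2)/2; the exact orbits are circles.
Hq = Hp = lambda x: x
k, q, p = 0.01, 1.0, 0.0
for _ in range(10000):                  # integrate to t = 100
    q, p = imex_midpoint_step(q, p, k, Hq, Hp)

energy_drift = abs(0.5 * (q * q + p * p) - 0.5)
print(energy_drift)   # small and bounded over long times, as for Verlet
```

The energy error stays bounded rather than drifting, which is the hallmark of the symplectic leapfrog/Verlet scheme the text identifies this pair with.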
5.3.4 A third-order combination (2,3,3)

To analyze the damping properties of a (one-step) numerical scheme, it is convenient to consider the test equation $\dot{u} = au$. We let $z = ak$ and write one step of the scheme as $u_n = R(z) u_{n-1}$, where R(z) is called the stability function. For attenuation of the numerical solution, we must have

  |R(-\infty)| < 1.

It is shown in [50] that the stability function for a DIRK scheme takes the form

  R(z) = \frac{P(z)}{\prod_{i=1}^{s} (1 - a_{ii} z)},   (5.10)

where P(z) is a polynomial² of degree at most s. Solution of the order equations yields that there is a one-parameter family of solutions for a two-stage DIRK to be third-order accurate. We select the free parameter to minimize $|R(-\infty)|$. The two-stage, third-order DIRK scheme with the best damping properties turns out to be the two-stage, third-order SDIRK scheme

  \gamma     | \gamma        0
  1-\gamma   | 1-2\gamma     \gamma
             | 1/2           1/2

with $\gamma = \frac{3 + \sqrt{3}}{6}$ and $R(-\infty) = 1 - \sqrt{3} \approx -0.7321$. This is a marked improvement over the midpoint scheme (1,2,2), which has no attenuation at the stiffness limit $z \to -\infty$. The corresponding third-order explicit Runge-Kutta scheme (ERK) is

  0          | 0            0              0
  \gamma     | \gamma       0              0
  1-\gamma   | \gamma-1     2(1-\gamma)    0
             | 0            1/2            1/2

The resulting IMEX combination is third-order accurate, has some dissipation near the imaginary axis (like all three-stage, third-order ERK methods), and has some attenuation of the stability function at $z = -\infty$.

²Precise formulas for P(z) are given in [50], p. 49.
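The claimed value $R(-\infty) = 1 - \sqrt{3}$ and the third-order accuracy $R(z) = e^z + O(z^4)$ can be confirmed by direct evaluation of the stability function $R(z) = 1 + z\, b^T (I - zA)^{-1} \mathbf{1}$ of the implicit scheme:

```python
import math

# Two-stage SDIRK of the (2,3,3) pair: gamma = (3 + sqrt(3))/6.
g = (3.0 + math.sqrt(3.0)) / 6.0
A = [[g, 0.0], [1.0 - 2.0 * g, g]]
b = [0.5, 0.5]

def R(z):
    # R(z) = 1 + z * b^T (I - zA)^{-1} * (1,1)^T, by forward substitution
    # (A is lower triangular).
    x0 = 1.0 / (1.0 - z * A[0][0])
    x1 = (1.0 + z * A[1][0] * x0) / (1.0 - z * A[1][1])
    return 1.0 + z * (b[0] * x0 + b[1] * x1)

limit = R(-1e9)                              # approximates R(-infinity)
order_gap = abs(R(0.01) - math.exp(0.01))    # O(z^4) for a 3rd-order scheme
print(limit, order_gap)
```

The evaluation at a large negative argument reproduces $1 - \sqrt{3} \approx -0.7321$, and the small gap to $e^z$ at $z = 0.01$ is consistent with third-order accuracy.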
We match the terms of the corresponding stability functions to obtain the stability region of a three-stage, third-order explicit RK scheme, with δ = -2√2/3. The resulting IMEX combination is second-order accurate.

5.3.6 L-stable, two-stage, second-order DIRK (2,2,2)

We use the same form for the schemes as in the previous IMEX scheme, retaining the implicit scheme. Thus, γ is as defined for the (2,3,2) scheme, but δ is not yet specified. The value for δ is determined from the requirement that (5.7) hold instead of (5.6), i.e., b̂_1 = δ, b̂_2 = 1-δ, b̂_3 = 0. This gives a second-order scheme

    γ | γ    0        0 | 0  0    0
    1 | 1-γ  γ        γ | γ  0    0
      | 1-γ  γ        1 | δ  1-δ  0
                        | δ  1-δ  0

with δ = 1 - 1/(2γ).

5.3.7 L-stable, three-stage, third-order DIRK (3,4,3)

We would now like to exploit the larger dissipativity region which four-stage, fourth-order ERK schemes have near the imaginary axis. A three-stage, third-order DIRK scheme that is stiffly accurate is ([50], p. 106)

    γ       | γ        0      0
    (1+γ)/2 | (1-γ)/2  γ      0
    1       | b1(γ)    b2(γ)  γ
            | b1(γ)    b2(γ)  γ

where γ is the middle root of 6x³ - 18x² + 9x - 1 = 0, b1(γ) = -(3/2)γ² + 4γ - 1/4, and b2(γ) = (3/2)γ² - 5γ + 5/4. Numerically, this evaluates to

    0.4358665215 | 0.4358665215  0             0
    0.7179332608 | 0.2820667392  0.4358665215  0
    1            | 1.208496649   -0.644363171  0.4358665215
                 | 1.208496649   -0.644363171  0.4358665215

The corresponding four-stage, third-order ERK, constructed such that it has the same (desirable) stability region as all four-stage, fourth-order ERK schemes, is

    0       | 0        0        0      0
    γ       | γ        0        0      0
    (1+γ)/2 | a31(γ)   a32(γ)   0      0
    1       | a41(γ)   a42      a43    0
            | 0        b1(γ)    b2(γ)  γ

The third-order conditions indicate that this is a two-parameter family of schemes. We take our degrees of freedom to be a42 and a43.
The order conditions then determine the remaining coefficients: a31 and a32 are affine functions of a42 and a43, with coefficients that are polynomials in γ, and a41 = 1 - a42 - a43. Selecting particular values for a42(γ) and a43(γ) so as to match the terms of the stability function of fourth-order ERK schemes, we obtain the scheme

    0            | 0             0             0             0
    0.4358665215 | 0.4358665215  0             0             0
    0.7179332608 | 0.3212788860  0.3966543747  0             0
    1            | -0.105858296  0.5529291479  0.5529291479  0
                 | 0             1.208496649   -0.644363171  0.4358665215

The resulting IMEX combination is third-order accurate.

5.3.8 A four-stage, third-order combination (4,4,3)

No pair consisting of a three-stage, L-stable DIRK and a four-stage ERK satisfying (5.7) with a combined third-order accuracy exists. The proof runs as follows: solution of the order conditions at order three implies that there is a one-parameter family of three-stage, third-order DIRKs and four-stage, third-order ERKs that satisfy (5.7). Without loss of generality, let the free parameter be c1. A troublesome aspect of this family of solutions is that, despite the degree of freedom, a33 = 0 is a necessary condition for the solution of the order equations. Hence, the matrix A is singular, and the guarantee of L-stability for the DIRK is lost [50]. The only possible resolution then would be to use c1 to make the two leading coefficients of P(z) in (5.10) vanish, so that lim_{z→-∞} R(z) = 0. This was found to be impossible because the coefficient of z² in P(z) does not vanish for any real value of c1. Accordingly, in order to obtain a third-order formula satisfying (5.7), we are forced to go to a four-stage, third-order, L-stable DIRK, coupled with a four-stage ERK. After having satisfied the order conditions and imposed L-stability, we are left with a few families of schemes.
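Two of the coefficient claims quoted above can be checked numerically. The short sketch below (our own helper code, not from the thesis) recovers the (3,4,3) diagonal γ from the cubic 6x³ - 18x² + 9x - 1 = 0 together with the weights b1(γ), b2(γ), and evaluates the stability function R(z) = 1 + z bᵀ(I - zA)⁻¹e of the two-stage SDIRK of Section 5.3.4, whose stiff limit should be 1 - √3.

```python
import numpy as np

# (3,4,3): gamma is the middle root of 6x^3 - 18x^2 + 9x - 1 = 0
gamma = np.sort(np.roots([6.0, -18.0, 9.0, -1.0]).real)[1]
b1 = -1.5 * gamma**2 + 4.0 * gamma - 0.25     # b1(gamma)
b2 = 1.5 * gamma**2 - 5.0 * gamma + 1.25      # b2(gamma)

# (2,3,3): two-stage, third-order SDIRK with gamma_s = (3 + sqrt(3))/6
gs = (3.0 + np.sqrt(3.0)) / 6.0
A = np.array([[gs, 0.0], [1.0 - 2.0 * gs, gs]])
b = np.array([0.5, 0.5])

def R(z):
    """Stability function R(z) = 1 + z b^T (I - z A)^{-1} e."""
    return 1.0 + z * (b @ np.linalg.solve(np.eye(2) - z * A, np.ones(2)))
```

Evaluating R at a very large negative argument approximates R(-∞), and the computed γ, b1, b2 match the decimal tableau entries quoted above.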
Experimentation with the test equation (5.11) of the next section led us to select a scheme that has rational coefficients that are not too large, and a diagonal entry that is close to that of the four-stage, fourth-order, L-stable DIRK. In particular, we have chosen c1 = 1/2, accompanied by the choices c3 = 1/2, b4 = 1/2, and a43 = 3/4. The resulting scheme is

    1/2 | 1/2   0     0    0        0   | 0      0     0    0     0
    2/3 | 1/6   1/2   0    0        1/2 | 1/2    0     0    0     0
    1/2 | -1/2  1/2   1/2  0        2/3 | 11/18  1/18  0    0     0
    1   | 3/2   -3/2  1/2  1/2      1/2 | 5/6    -5/6  1/2  0     0
        | 3/2   -3/2  1/2  1/2      1   | 1/4    7/4   3/4  -7/4  0
                                        | 1/4    7/4   3/4  -7/4  0

5.4 Advantages of IMEX RK Methods

As is shown in [7], using a centered discretization scheme for the spatial derivatives of advection-diffusion PDEs, a von Neumann analysis reveals that a simple yet appropriate test equation is the scalar ODE (5.3) with

    f = iβu,    g = αu,    (5.11)

where α, β are real constants, usually α < 0, β > 0, and i = √-1. For a given time step k, we can define x = kα, y = kβ, z = x + iy, and write an IMEX step (5.3) using (5.6) as

    u_n = u_{n-1} + Σ_{j=1}^{s} (x b_j + iy b̂_{j+1}) u^(j) = R(z) u_{n-1},    (5.12)

where

    u^(i) = [(1 + iy â_{i+1,1}) u_{n-1} + Σ_{j=1}^{i-1} (x a_ij + iy â_{i+1,j+1}) u^(j)] / (1 - x a_ii),    i = 1, ..., s.    (5.13)

For schemes where σ = s, i.e. where (5.7) holds, an expression like (5.12) need not be used because u_n = u^(s).

Let us consider the two Euler schemes first. For the usual pair (1,1,1), equation (5.13) implies

    u_n = ((1 + iy)/(1 - x)) u_{n-1}.

Thus, on the imaginary axis, |u_n|² = (1 + y²)|u_{n-1}|², and the scheme is unconditionally unstable for any y ≠ 0 (cf. [7]). As x → -∞ and |y| < ∞, however, the scheme is unconditionally stable and u_n → 0, i.e. attenuation compatible with that of the exact solution is realized. For the other pair, (1,2,1), we get from (5.12)-(5.13)

    u_n = [1 + (x + iy)(1 + iy)/(1 - x)] u_{n-1}.

Now, along the imaginary axis, we have |u_n|² = (1 - y² + y⁴)|u_{n-1}|², so stability is achieved provided that

    |y| ≤ 1.

This corresponds to the CFL condition. On the other hand, letting x → -∞, we also obtain the restriction |y| ≤ 1 in the high-attenuation limit.
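The two amplification factors just derived can be written down directly; the sketch below (helper names are ours) encodes them for the test equation u' = iβu + αu, with x = kα and y = kβ.

```python
def R_111(x, y):
    """Backward-forward Euler pair (1,1,1): R = (1 + iy)/(1 - x)."""
    return (1.0 + 1j * y) / (1.0 - x)

def R_121(x, y):
    """Forward-backward Euler variant (1,2,1)."""
    u1 = (1.0 + 1j * y) / (1.0 - x)      # internal stage u^(1)/u_{n-1}
    return 1.0 + (x + 1j * y) * u1
```

A quick evaluation confirms the text: |R_111| > 1 everywhere on the imaginary axis (x = 0, y ≠ 0) but R_111 → 0 as x → -∞, while |R_121| ≤ 1 on the imaginary axis exactly when |y| ≤ 1.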
Thus, we see that the variant (1,2,1) has stability characteristics that are preferable to those of the usual backward-forward Euler scheme in the hyperbolic limit, but worse in the parabolic limit.

Now, for a fixed ratio x/y = α/β, it is of interest to determine the maximum y > 0 for which the scheme is stable, that is, |R(z)| ≤ 1. This gives the stability restriction on the step size for a given problem. In Figure 5.1, we plot the resulting curves for the IMEX schemes presented earlier of the second type, i.e. those satisfying (5.7). Corresponding curves for the schemes satisfying (5.6) are plotted in Figure 5.2. Common to all the schemes in Figure 5.2 is that, as α → -∞, there is a time-step restriction involving β, typically kβ ≤ 1. In contrast, no such restriction occurs for schemes satisfying (5.7), and hence (5.8), in Figure 5.1. The behaviour of these latter schemes is qualitatively similar in the high-attenuation limit to that of the SBDF schemes [7]. On the other hand, the scheme (3,4,3) in Figure 5.2 suggests a potential advantage, in terms of allowable time steps, over a large range of parameter values.

Some experimentation with these schemes has been performed in [10] on convection-diffusion problems in one and two dimensions for a wide range of mesh Reynolds numbers. These include a one-dimensional, variable-coefficient problem, the Burgers equation, and a two-dimensional problem solved via time-dependent multigrid. As expected from general properties of standard (one-step) Runge-Kutta versus linear multistep schemes, results from [10] show that the Runge-Kutta-based IMEX schemes have better stability regions than the best IMEX multistep schemes known. They are also found to average just over one fine-grid iteration per implicit stage of one-cycle multigrid. This is also a marked improvement over the scheme suggested in [48].

[Figure 5.1 about here: curves of the maximum stable kβ versus α/β for the (1,1,1), (2,2,2), and (4,4,3) schemes.]
Figure 5.1: Time-step stability restrictions for various values of α and β, for IMEX schemes satisfying (5.7). Note that all these schemes perform well in the high-attenuation limit.

Figure 5.2: Time-step stability restrictions for various values of α and β, for IMEX schemes satisfying (5.6). Note that some of these schemes, especially (3,4,3), perform well when -α is not too large in magnitude, but they all have time-step restrictions in the high-attenuation limit.

Chapter 6

Numerical Experiments

In this chapter, numerical simulation is employed to demonstrate the general applicability of the solution procedure described and analyzed in Chapters 3-4, the types of solutions that are produced, and a measure of the computational efficiency. In the first section, we give some results from simulations of the mobile robot system (1.2) under the action of various programs. These include very general motions using plane waves (1.5) as drivers, as well as more refined motions using plane waves and spotlights (1.6). We then proceed to combine the programmed-constraint approach with optimal control, showing the complementarity of the two approaches and some of the potential advantages of a purely local approach in various situations. We conclude this chapter by examining a program for the robot arm described in Section 1.3.2.

6.1 Mobile Robot

6.1.1 Simulation

The mobile robot example is described in Section 1.3.1. We will now examine different scenarios of varying difficulty and report on the performance and qualitative properties of the solutions. For the purposes of simulation, we make a few simplifications of the equations of motion (1.2). Let the new controls be the curvature α = tan(βψ)/L and the (rescaled) throttle acceleration a. Thus, the state equations of the mobile robot are

    ẋ = v cos θ,    (6.1a)
    ẏ = v sin θ,    (6.1b)
    θ̇ = vα,        (6.1c)
    v̇ = a - γv.    (6.1d)

This casts the equations (1.2) into the form (1.1).
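The state equations (6.1) are simple to advance with the first-order (forward Euler) integrator used in the simulations; a minimal sketch (step size, control values, and damping factor below are assumed for illustration):

```python
import math

def step(state, a, alpha, k, gamma):
    """One forward-Euler step of (6.1): advance (x, y, theta, v) under controls (a, alpha)."""
    x, y, th, v = state
    return (x + k * v * math.cos(th),      # (6.1a)
            y + k * v * math.sin(th),      # (6.1b)
            th + k * v * alpha,            # (6.1c)
            v + k * (a - gamma * v))       # (6.1d)

# Example: straight-line driving from the initial conditions used in Section 6.1.2.
state = (52.7, 50.0, 0.0, 0.0)
for _ in range(100):
    state = step(state, a=10.0, alpha=0.0, k=0.01, gamma=1.0 / 6.0)
```

With zero steering the heading and y-coordinate are preserved exactly, while the speed approaches the equilibrium a/γ set by the damping.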
Such a redefinition of controls is possible since ψ and α are in a one-to-one correspondence in the allowable range of ψ. The bounds on the controls are

    |a| ≤ 150 cm s⁻²,    (6.2a)
    |α| ≤ 1/25 cm⁻¹.     (6.2b)

The first bound represents the maximum allowable throttle setting and the second yields a minimum turning radius of 25 cm.

The mobile robot is a nonholonomic system [81]. The nonholonomic constraints are found by setting the sideways velocity of the wheels to zero. For the rear wheels, this yields

    sin θ ẋ - cos θ ẏ = 0,

and for the front wheels, we have

    sin(θ + βψ) ẋ - cos(θ + βψ) ẏ - L θ̇ cos(βψ) = 0.

The identification of nonholonomic constraints is generally a nontrivial task [81]. Moreover, control of nonholonomic systems is especially challenging because they are locally uncontrollable (the linearized system is uncontrollable) and yet globally controllable [81, 122, 76].

6.1.2 Very General Programs

One philosophy of weakly specifying a control is that simple tasks should be described in a simple manner and solved with correspondingly small computational effort. In this section, the robot program simply stated is to move to the end of the table with large x. The plane-wave constraint (1.5) is employed as driver, with φ = π/2. The problem is complicated through the specification of different starting orientations and different times by which the motion should be completed. We also examine the qualitative differences in the trajectories produced, depending on the choice of natural control. Most of these results have appeared in [109].

For all simulations in this section, initial conditions of the system (1.2) are x(0) = 52.7, y(0) = 50, v(0) = 0, with θ(0) and the other parameters specified in the case description or tables. We use the value γ = 1/6 in the simulations. We utilize two possible choices for the default control, u_nat(t). The choices are classified according to an 'effective' damping factor γ̄, to be described subsequently.
The first default control corresponds to damped behaviour, u_nat(1) = 0. In this case, γ̄ = 1/6. Because γ̄ ≠ 0, the robot slows while maintaining the same heading θ when the system is safely inside K(t). The second choice for default control is u_nat(2) = γv(t). This is equivalent to motion of the system in the limit of no damping. Hence, this choice of default behaviour is denoted by γ̄ = 0. A quick examination of the bounds on allowable speed and acceleration reveals that it is always possible to make such a choice of default control. The use of this control means that the system continues in the same direction at constant speed when not interacting with the buffer zone. We remark that this strategy is somewhat more aggressive than the previous one in that the system 'prefers' to maintain a higher speed throughout the simulation and hence may be more sensitive to the prediction strategy. However, it is no more expensive to compute with in principle, and in fact it may lead to extremely efficient solutions if it ultimately produces fewer interactions with the boundary of K(t).

All simulations use an adaptive sensitivity parameter, starting from a user-defined U_frac, expressed as a fraction of the identity matrix as U_frac I. Knowledge of the existence of opposing constraints is assumed, and the stepsize Δt is taken to be the actual sampling time of the Dynamite system [15]. A first-order integration scheme was chosen because of the low expected accuracy due to simplifications made in the model equations. Simulations are terminated when the trailing (lower) edge of the plane wave is one car-length from the right edge, i.e.,

    t_final = (x_max - x_min - L)/v_w.    (6.3)

The tables display three different measures of the performance. The performance of the algorithm is partly measured by the quantity %u, which represents the number of time steps that invoked the logarithmic barrier subroutine to compute a control, as a percentage of the total number of time steps. Thus, it measures the complexity of the control computation in a new way: as the percentage of time spent having to explicitly account for the constraints. The constraints, along with their velocities and accelerations, are sampled every n_samp integration steps. However, explicit constraint avoidance by means of barrier minimization is only executed a fraction %u of the total integration steps. In addition, we record the quantity %u_max, which measures the percentage of non-default control steps spent at maximum control for each control variable. Finally, typical execution times (in seconds) on an SGI Indy running an IP22 processor at 100 MHz with 64 Mb of RAM are given under CPU. Of course, these times are only representative and should not be taken too literally: the times reported are an average of a small number of runs (usually about five), with the CPU load being monitored manually to ensure that other competing processes were not significant. We note that the numbers reported include simulation time and disk I/O, but are still considerably smaller than the corresponding simulated time. In this way, we see that relatively non-specific motion can be specified weakly and solved in a correspondingly cheap fashion.

Case 6.1 (φ = π/2, θ(0) = 0, n_samp = 2.)

    v_w   γ̄     U_frac   %u      %u_max (a)   %u_max (α)   CPU
    1     0     0.5      0.04    0.0          0.0          1.15
    8     0     0.5      4.38    0.0          0.0          1.75
    10    0     0.5      6.29    0.0          0.0          2.11
    1     1/6   0.5      38.38   0.0          0.0          24.85
    8     1/6   0.5      47.24   0.76         0.13         3.95
    10    1/6   0.5      48.01   1.09         0.0          2.86

Table 6.1: Simulation results: Case 6.1

We note that this is effectively a one-dimensional problem: there is no incentive for the system to change its y-coordinate. Hence, this program is the discrete (damped) analogue of Example 4.3. The solution behaviour is depicted in Figures 6.1 and 6.2 by comparing the x-coordinate of the car to that of the wave.

Figure 6.1: Robot position and wave position with no damping.

Figure 6.2: Robot position and wave position with damping.

For the undamped case, the robot builds speed (approaching v_w) and stays in front of the wave. Thus, %u is very small because the robot satisfies the constraints mostly without using any non-default control. The behaviour is similar with damping turned on. As expected, the damping stabilizes the motion of the robot. The robot stays ahead of the wave but, due to the damping, remains much closer to it. The natural motion then allows the wave to catch up again, and the process is repeated. Hence, we observe that %u is higher than in the undamped case, and increases with the speed of the wave. We note that from (6.3), the number of integration steps required is inversely proportional to the wave speed v_w. Hence, the CPU times per integration step are comparable between runs for fixed γ̄, and also reflect the increased problem difficulty of higher v_w. Δ

Case 6.2 (φ = π/2, θ(0) = π/4, U_frac = 0.01, v_w = 1.)

    γ̄     n_samp   %u      %u_max (a)   %u_max (α)   CPU
    0     2        9.98    1.5          92.87        15.38
    1/6   1        80.28   0.0          25.38        77.15

Table 6.2: Simulation results: Case 6.2

The initial configuration of the robot is now changed to θ(0) = π/4. The choice of different n_samp was used to enhance the different behaviours between the undamped and damped cases. Diagrams of the robot trajectories in the undamped and damped cases are given in Figures 6.3 and 6.4, respectively. In the undamped case, the robot initially moves in the direction θ(0) until it comes quite close to the top wall (constraint c4), at which point it turns to θ ≈ 0 and moves parallel to the top wall. When compared to the damped case, the greatest difference is that the 'natural' motion requires a more frequent updating of u_nat(t). The more frequent updates then allow the required turn to be completed sooner. Because the car must perform a fairly sharp turn to avoid the top boundary and α_nat = 0, the quantity %u is expected to be much higher than in the previous (one-dimensional) case. We note that despite this, the algorithm still exhibits low CPU times, indicating the efficiency of the approach. Δ

Figure 6.3: Undamped motion of robot (Case 6.2).

Figure 6.4: Damped motion of robot (Case 6.2).

6.1.3 More Refined Motions

The program is now refined so that the robot is instructed to move to one corner of the table; for definiteness, that for x near x_max and y near y_min. For this purpose, we use a plane-wave driving constraint (1.5) with φ = π/4. Results for various simulations are given in Table 6.3, with a qualitative description to follow. Note that the difficulty level increases dramatically with decreasing φ, because the final size of the feasible region is quite small.

Case 6.3 (φ = π/4, n_samp = 1, v_w = 1.)

    θ(0)    γ̄     U_frac   %u      %u_max (a)   %u_max (α)   CPU
    0       0     10⁻²     58.44   0.17         97.37        84.59
    0       1/6   10⁻²     100.0   0.0          15.02        81.03
    -π/4    0     10⁻³     34.88   0.09         98.02        47.64
    -π/4    1/6   10⁻³     100.0   0.0          41.43        144.54

Table 6.3: Simulation results: Case 6.3

The second and third cases listed in Table 6.3 are depicted in Figures 6.5 and 6.6. In Figure 6.5, the robot starts at an orientation of 0 and proceeds to turn downward and moves towards the lower right corner. In Figure 6.6, the robot starts at an orientation of -π/4, turns upward to straighten out, and angles to the corner. However, it must back up to adjust its final approach to the corner. This fine-tuning is a result of the exacting nature of the final size of the feasible region. Initial and final positions of the plane wave are included to help illustrate this (dotted lines in Figures 6.5 and 6.6). Δ

Figure 6.5: Damped motion of robot (Case 6.3).

Figure 6.6: Undamped motion of robot (Case 6.3).

In these simulations, we note that U_frac was decreased in the tougher problems. Due to the inherent local nature of the approach, the method can fail in certain cases, especially when several constraints are 'active' and at small angles to each other, or when the viability region is narrow. However, the efficiency of the method suggests that it should be effective and practical in many applications, such as for the Dynamite system [15], where the viability region can usually be programmed to be 'wide'.

Case 6.4 (Robot in a Spotlight)

In this example, we develop a program that is closer in philosophy and appeal to traditional programming methods. The object is to move the robot to the corner of the table for large x and y in the neighbourhood of a specified path. Hence, there is a geometric appeal to the programmer, who may have an idea of the desired path but does not wish to be involved in solving inverse dynamics problems. The ensuing DAI is easily specified and efficiently solved by the method of Chapter 3.
We change the initial position of the robot to (x(0), y(0)) = (5.27, 50) and choose a parabolic spotlight path that passes through this point and the points (x_min, y(0)) and (x_max, y_max). The path is parameterized as

    x_c(t) = v_w t + x(0),                     (6.4a)
    y_c(t) = a x_c²(t) + b x_c(t) + d,         (6.4b)

where v_w is the x-component of the spotlight velocity and a, b, d are constants uniquely determined from the geometric criteria previously described:

    a = 0.001183677,
    b = -0.012156359,
    d = 50.031189879.

From the parameterization (6.4) of the spotlight path, we see that the robot starts at rest in the centre of the spotlight, which is described by a constraint that is nonlinear in the control (cf. Example 4.4). The final time for the motion is now given by t_f = (x_max - x_min)/v_w, with the feasible region being reduced to a quarter circle of radius r, centred at (x_max, y_max). For this example, it was possible to produce behaviours that were qualitatively different in the various cases, depending on the combination of choices for u_nat(t), U_frac, and n_samp.

    v_w   r     γ̄     n_samp   U_frac   %u      %u_max (a)   %u_max (α)   CPU
    1     10    0     2        10⁻²     23.25   0.0          39.02        26.25
    1     10    1/6   2        10⁻²     49.21   0.0          0.04         29.84
    1     1     0     2        10⁻²     46.43   0.0          2.18         18.04
    1     1     1/6   1        10⁻²     99.99   0.0          10.42        74.37
    1     0.1   0     1        10⁻³     95.10   0.0          79.40        123.37*
    1     0.1   1/6   1        0.5      100.0   0.0          0.0          45.19
    5     5     0     2        0.5      47.01   0.0          11.09        5.36
    5     5     1/6   1        0.5      99.85   0.07         40.95        35.11

Table 6.4: Simulation results: Spotlight Problem

Examples of these are shown in Figures 6.7 and 6.8; they correspond to the last two entries in Table 6.4. The two choices we consider are the same as in Section 6.1.2: u_nat(t) = 0 and u_nat(t) = γv(t), again denoted by γ̄ = 1/6 and γ̄ = 0, respectively, in Table 6.4. The solution in Figure 6.7 looks qualitatively similar to solutions that are normally produced by an active-set method: the robot slides along the 'trailing' part of the boundary, as if being dragged along by the spotlight (cf. also Figure 4.2).
On the other hand, in Figure 6.8, the robot stays very close to the centre for most of the time, right up until the final moments of the simulation, when it tapers off to the outer reaches of the feasible region. Because the speed of the spotlight is so modest, and the starting values of U_frac are identical, we can attribute the difference in behaviours to the different values of n_samp: higher values allow the solution to approach the boundary more readily, leading to an active-set look to the solution. The (anomalous) reading denoted by * was obtained with the adaptive sensitivity feature turned off. This simulation is included for the purposes of illustration. It shows the potential efficiency gain of the adaptive strategy. However, we note that forcing the sensitivity to maintain low values may be required to obtain solutions to particularly sensitive problems. Δ

Figure 6.7: Solution exhibiting heavy properties.

Figure 6.8: Solution not exhibiting heavy properties.

The cases in which r = 0.1 test the limit of where the DAI formulation breaks down. We note that due to the nonholonomic nature of the problem, the precise path of the spotlight given in (6.4) cannot be followed exactly. This is an illustration of a shortcoming of traditional (holonomic) path planners applied to nonholonomic systems [81]. Thus, the programmed-constraint approach can offer a sensible alternative that can also retain some geometric appeal.

6.2 Interaction with Approaches from Optimal Control

In this section, we discuss some uses of the programmed-constraint approach in conjunction with an optimal-control solver. We show some complementarity between the two approaches, in particular how the DAI solver presented in Chapter 3 is a very inexpensive starter for an optimal-control routine.
We also demonstrate some advantages of a local approach over a global one in the case of optimal control; however, these advantages are generalizable to any purely global planning strategy. The optimal-control code used is OCPRSQP [101], which solves optimal-control problems in boundary-value DAEs with state- and control-dependent constraints of the form:

    min_{x,y,u,t_f}  ∫_{t_0}^{t_f} L(x(t), y(t), u(t)) dt,    (6.5a)

    subject to  (ẋ(t), 0)ᵀ = f(x(t), y(t), u(t)),             (6.5b)
                u_min ≤ u(t) ≤ u_max,                         (6.5c)
                c_min ≤ c(x, y, u) ≤ c_max.                   (6.5d)

The boundary-value problem is discretized by collocation on a mesh

    π = {t_0 < t_1 < ... < t_N = t_f},

and the resulting finite-dimensional optimization problem is solved by a new class of numerical optimization techniques that Schulz in his thesis has termed partially reduced sequential quadratic programming methods [101]. These methods are intended to combine the convenient treatment of inequality constraints afforded by sequential quadratic programming and the smaller quadratic subproblems offered by reduced sequential quadratic programming. This allows the optimization problem (6.5) to be solved very efficiently [101]. For this thesis, we are not exploiting the full DAE capabilities of the code because we deal only with the ODEs (6.1). However, OCPRSQP is no less efficient when algebraic constraints are absent.

In order to make the exact solution to the optimization problem more intuitive, only table constraints are imposed, where for simplicity, we redefine x_min = y_min = 0, x_max = 200, and y_max = 100. The boundary conditions for the optimal-control programs fix the initial and final states,

    q(0) = (x_A, y_A, 0, 0)ᵀ,    q(t_f) = (x_B, y_B, 0 mod 2π, 0)ᵀ,

with the positions prescribed as fixed fractions of x_max and y_max. In the remainder of this section, we will refer to the initial and final (x, y)-coordinates as A and B = (x_f, y_f), respectively.

The programmed-constraint formulation shares the same initial conditions, but the terminal conditions can only be enforced to within some tolerance. However, this is of little concern when constructing an initial guess, because an optimal trajectory is not necessarily 'close' to one that is produced from a DAI¹. The two trajectories only need to be close enough to enable the nonlinear solver to converge. A DAI program is specified independently of any intertemporal objective function. However, it often makes use of additional user-defined constraints that are not present

¹This depends on how the viability set is defined.
How-ever, this is of little concern when constructing an initial guess because an optimal trajectory is not necessarily 'close' to one that is produced from a DAI 1 . The two tra-jectories only need to be close enough to enable the nonlinear solver to converge. A DAI program is specified independent of any intertemporal objective function. How-ever, it often makes use of additional user-defined constraints that are not present 1 T h i s depends on how the viability set is defined. 109 in the corresponding optimal-control problem; in particular, it can require that the DAI trajectory end up in a suitably small neighbourhood of the terminal conditions. Thus, the programmer can still compute a reasonably accurate initial guess. We consider the minimum-time problem mm / at, 9> u Jo subject to (6.1), (6.2), and (1.3). Because the final time tf in the above is unknown, we define a new variable r = JJ and re-write the problem as min / C(q(r), U(T))(IT (6-6) 5,« Jo subject to k = f, (6-7) and equations.(6.2) and (1.3), where q = (qT,tf)T and / = (tffT,0)T. We now give a few technical details for the optimal-control simulations. Al l simulations in this section were run on an SGI Indigo 2 having 64 Mb of R A M and a clock speed of 200 MHz. Three-stage Gauss collocation is employed as the discretization scheme, on an adaptive mesh having 40 subintervals initially. The con-trols u are taken to be piecewise constant, generally on a coarser grid than the state variables. Convergence is deemed to have occurred based on an expected decrease, TOL, in the Powell merit function [101]. We pay particular attention to the effects of decreasing this tolerance from 1.0 X 10~3 to 1.0 X 10 - 5 on the numerical solution. As a final comment, we note that in order to reduce the computational complexity, convergence is only to a locally optimal trajectory. The particular optimal trajec-tory achieved depends critically on the initial guess. 
As we will see, the addition of an objective function to a basic viability problem already adds significantly to the computational effort. This effort would be compounded further if an attempt were made to also go to a global optimization routine, such as simulated annealing (see, for example, [95] or [58] and the references therein).

We consider two numerical experiments. Each involves obstacles of the form (1.7), to be described subsequently². Both are prototypical applications for a robot in a hazardous environment [81], such as a mine field. The robot must execute a given program in an efficient manner yet have the autonomy to make correct decisions based on information not available to the programmer. We demonstrate that DAIs provide a suitable environment for the program, with the solver described in Chapter 3 providing an effective control in a timely manner.

Case 6.5 (One Central Obstacle)

To our previously uncluttered arena, we add a circular obstacle, c_obs,1, of radius 10, located at coordinates (100, 50). In this case, the (locally) time-optimal solution is fairly intuitive. It involves one switching time and (close to) bang-bang control for the acceleration. The exact solution for the steering control is a little more complicated, but similarly based on switching times and almost bang-bang control. The problem is not reversible due to the nonzero damping factor γ, so the switching time is slightly larger than half of the optimal time.

A consistent³ albeit not overly thoughtful initial guess for the solution is generated by means of shooting (Figure 6.9); that is, specifying a control profile and integrating (6.1) to obtain the state variables. As can be seen, it is not very close to optimal. In Figure 6.10, we show the 'optimal' solution produced with the setting

²The author apologizes in advance for not giving the obstacles more colourful names, such as Barnacle Bill, Yogi, or Scooby-Doo.
³By 'consistent', we mean a trajectory q(t) obtained from a control u(t), in contrast to q(t) and u(t) being postulated independently of each other.

Figure 6.9: Initial Guess from Shooting for One-Obstacle Problem

Figure 6.10: Solution for One-Obstacle Problem with TOL = 10⁻³

Figure 6.11: Solution for One-Obstacle Problem with TOL = 10⁻⁶

TOL = 1.0 × 10⁻³, with a CPU time of 1.53 seconds. It shows that we must be careful in casually accepting a solution simply because it was produced by an optimal-control code! Upon refining the tolerance to 1.0 × 10⁻⁶, we obtain a better approximation, and this is displayed in Figure 6.11. However, the CPU time required is now 17.97 seconds. Let t_opt be the final time obtained from the accurate solution to the optimal-control problem.

An optimal solution is very useful from a theoretical point of view. However, in practice, the principle of optimality [18, 17, 19] is very unforgiving. The principle of optimality states that any portion of an optimal trajectory between some intermediate point and the final point is itself optimal. Errors or uncertainties of some type are a reality in engineering as well as in numerical simulation. This often makes truly optimal solutions hard to come by in practice. However, we can make good use of the knowledge of an optimal solution to produce a sub-optimal solution that is close to the optimal one, even in the presence of errors.

If we have an optimal solution, it is usually easy to construct a program that will mimic it using a DAI. The following is only one possible realization, and the description is a typical account of the programming process when using DAIs. We opt to use a spotlight constraint of the form (1.6). The path of the spotlight centre is not chosen to lie exactly on the optimal path, so that we can maintain a wide viability region when it goes around the obstacle.
Accordingly, we choose a spotlight path that consists of two cubics: The first cubic connects the starting point A with (x, y) = (100, 65) and has slope dy/dx = 0 at each of these two points. The point (x, y) is chosen with an eye for an implementation with a spotlight of radius r = 8. (More quantitative reasons for this choice are given in Case 6.6.) The second cubic then connects (x, y) to the end point B, again with a slope of zero at both ends.

We must now give a time parameterization of the spotlight centre. We note that this is relatively easy to do because the spotlight is not physical and thus can be chosen to obey simple dynamics, such as motion under constant acceleration. We parameterized the centre as

    x_c(t) = x_c(0) + (1/2) a t^2,                                  t <= t*,     (6.8a)
    x_c(t) = x_c(t*) + x'_c(t*) (t - t*) - (1/2) a (t - t*)^2,      t > t*,

    y_c(t) = a_1 x_c^3(t) + b_1 x_c^2(t) + c_1 x_c(t) + d_1,        t <= t*,     (6.8b)
    y_c(t) = a_2 x_c^3(t) + b_2 x_c^2(t) + c_2 x_c(t) + d_2,        t > t*,

where t* = (1/2) t_opt, and the coefficients a_1, b_1, etc. are uniquely determined from the geometric conditions specified on the cubics. In this particular realization, the spotlight accelerates uniformly from rest at A, reaching its maximum speed at time t = t* and position (x, y), then decelerates uniformly to point B, where it is again at rest. The acceleration parameter a is chosen so that the spotlight centre travels from A to B in time t_opt. Of course, the DAI solution is sub-optimal; however, it captures the essence of the optimal trajectory in this case. A good candidate for u_nat(t) can also be afforded by the optimal-control solution.

Figure 6.12: DAI Solution for One-Obstacle Problem

The solution produced by this program is displayed in Figure 6.12. For reference, we have included the optimal trajectory, the spotlight, an outline of the car, and a dot within each outline to denote the coordinates of the centre.
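To make the parameterization (6.8) concrete, the following is a minimal sketch (our own helper names, not the thesis code): a constant-acceleration/deceleration abscissa, and a cubic segment through two points with zero slope at both ends.

```python
# Sketch of the spotlight-centre parameterization (6.8).  The function
# names, argument conventions, and the closed-form Hermite cubic are our
# assumptions for illustration; they are not taken from the thesis.

def x_c(t, x0, a, t_star):
    """Spotlight abscissa: uniform acceleration a up to t*, then uniform
    deceleration, as in (6.8a)."""
    if t <= t_star:
        return x0 + 0.5 * a * t ** 2
    dt = t - t_star
    v_star = a * t_star                       # speed reached at t = t*
    x_star = x0 + 0.5 * a * t_star ** 2       # position reached at t = t*
    return x_star + v_star * dt - 0.5 * a * dt ** 2

def cubic_through(p0, p1):
    """Return the cubic y(x) through p0 and p1 with dy/dx = 0 at both
    points, as required of each segment in (6.8b)."""
    (u, yu), (v, yv) = p0, p1
    def y(x):
        s = (x - u) / (v - u)                 # normalized coordinate in [0, 1]
        return yu + (yv - yu) * s * s * (3.0 - 2.0 * s)
    return y
```

For example, `cubic_through((0.0, 0.0), (100.0, 65.0))` produces the first segment of the spotlight path, with zero slope at A and at (100, 65).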
We note that the constraints are specified with respect to these coordinates, and the fact that the outline of the car itself may sometimes fall outside of the spotlight is not relevant. The CPU time for the above calculation was 0.42 seconds. Thus, the programmed-constraint approach can provide a relatively painless method to program an implementable approximation to the optimal-control solution, and we have not yet taken full advantage of the local nature of the solution method. Δ

Case 6.6 (Hidden Obstacle)

In order to illustrate the potential advantages of a local approach to robot control, we place a (small) circular obstacle, c_obs,2, in such a way that it intersects both the optimal-control solution (Figure 6.11) and the solution produced by the DAI approach (Figure 6.12). A suitable choice for such an obstacle has its centre at (160, 54.5) and a radius of 3.5. By doing this, we wish to simulate the existence of an obstacle that is recognized during the (simulated) motion: From the vantage point A, c_obs,2 may be obscured by c_obs,1. The time-optimal solution is constructed based only on the information available at the outset; it cannot take advantage of more information obtained during the execution. Thus, the solution of the optimal-control problem cannot be implemented as is, because the robot would hit the hidden obstacle. This is a limitation of the optimal-control approach in general: It is a global problem and hence requires global knowledge.

On the other hand, a programmed-constraint specification of the task can still be defined without this knowledge. The presence of a local controller may make it possible to produce a trajectory that avoids this hidden obstacle. Without changing the DAI program from Case 6.5, we add the hidden obstacle, but only allow the robot to 'see' it when it has passed (x, y): the information contained in c_obs,2 is only made available after t = t*.
We have purposely designed the program in Case 6.5 (particularly the choice r = 8) so that the program was likely not to need modification to avoid the hidden obstacle. The quantitative reasoning behind this is as follows: Figure 6.13 depicts a robot at the centre of a spotlight of radius r heading directly toward an obstacle of radius r_obs. Simple geometry allows us to calculate the minimum turning radius r_t required for the robot to barely avoid this obstacle:

    r_t = ((1 + η) / η) r,        (6.9)

where η = 2 r_obs / r. For an obstacle of size 3.5 and a turning radius of 25 cm, we find that r ≳ 10.8. Now, Figure 6.13 represents somewhat of a worst-case scenario: the robot is headed directly towards the centre of the obstacle. Given that this is unlikely to be the case as the robot passes the obstacle, we choose a radius of 8 as a compromise between not straying far from the optimal-control program and respecting equation (6.9).

Figure 6.13: Worst-case scenario of mobile robot approaching hidden obstacle.

The result is shown in Figure 6.14, with the robot adjusting its trajectory at run time to avoid the new obstacle. The CPU time required for this run was 0.45 seconds.

Figure 6.14: DAI Solution for Problem with Hidden Obstacle

Clearly, there are definite advantages to be had from a local approach to robot programming. The DAI formulation with local solver avoided the hidden obstacle with absolutely no additional effort on the part of the programmer. On the other hand, the optimal-control problem would have to recompute the trajectory, thereby adding to an already expensive calculation. Δ

Case 6.7 (Multiple Obstacles)

We have seen how programmed constraints can be used in conjunction with the knowledge of an optimal-control solution to produce a control that has a degree of robustness and is close to optimal. To complete the cycle of complementarity, we now demonstrate the use of DAI as an initial guess for a problem with multiple obstacles.
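The worst-case geometry behind the bound (6.9) can be checked numerically. The sketch below works from the underlying circle-tangency condition rather than the formula as printed; the modelling assumptions (the obstacle is first revealed when its edge enters the spotlight, i.e. at centre distance r + r_obs, and the robot grazes it along a turning circle) and the helper names are ours.

```python
import math

def min_turn_radius(d, r_obs):
    """Smallest turning radius letting a robot, heading straight at an
    obstacle of radius r_obs whose centre is a distance d ahead, just
    graze it.  Tangency condition: sqrt(d^2 + r_t^2) = r_t + r_obs."""
    return (d * d - r_obs * r_obs) / (2.0 * r_obs)

def min_spotlight_radius(r_turn, r_obs):
    """Smallest spotlight radius r such that an obstacle first seen when
    its edge enters the spotlight (centre distance r + r_obs) is still
    avoidable with turning radius r_turn.  Solves the quadratic
    r (r + 2 r_obs) = 2 r_obs r_turn for its positive root."""
    return -r_obs + math.sqrt(r_obs * r_obs + 2.0 * r_obs * r_turn)
```

With r_obs = 3.5 and a turning radius of 25, this gives a required spotlight radius of roughly the size quoted in the text, and it shows why the compromise choice r = 8 falls somewhat below the worst-case requirement.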
The arena is occupied by five obstacles of random size and position as depicted in Figure 6.15. The optimal-control problem is the same as before: Find the minimum-time solution to get from A to B while avoiding obstacles. Even in this case, the geometric solution to the optimal-control problem is still fairly clear; however, the values for the other unknowns such as the control profile or the minimum time are less clear.

Figure 6.15: Location of Multiple Obstacles

We would like a low-effort way to produce a good initial guess for our optimal-control solver. It should be inexpensive both in terms of computation time and programmer effort. The easiest guess would be to just linearly interpolate between the start and end points (despite parts of the path being infeasible) and choose some arbitrary control (for example, zero). This clearly satisfies both criteria of low effort. Unfortunately, the nonlinear-equation solver of the optimal-control code [101] was not able to converge from such a poor initial guess.

An initial guess based on a shooting-type approach satisfies the criterion of low computational expense because the computational effort boils down to an explicit, initial-value ODE solve. It also has the advantage of being able to produce a consistent, feasible guess. However, the programmer will generally need to perform a large number of iterations on the control profile before a solution that comes reasonably close to satisfying the terminal boundary conditions is found. Moreover, if these terminal boundary conditions are changed, obstacles become nonstationary, or more obstacles are added, the researcher would find that there is essentially no return on the time invested in the initial program! The initial guess obtained via shooting that is used in the computations to follow is shown in Figure 6.16. Some effort was put into making this guess reasonably good so that it would not unduly affect the convergence of OCPRSQP.
Figure 6.16: Initial Guess from Shooting for Multi-Obstacle Problem

A DAI program that forces the robot to end up in a neighbourhood of the desired terminal conditions can be constructed as follows. In addition to the table constraints (1.3), we introduce two more pairs of constraints:

1. A pair of constraints in the form of two travelling waves with φ = ½ that uniformly 'squeezes' the viability region down to a shape of size δ in the x-direction at time t = t_f. We take δ = 5 and choose t_f = 2.25 > t_opt = 2.15. The constraints themselves are then

       c_squeeze = ( x - v_1 t,  x_max - (x - v_2 t) ),

   where v_1, v_2 are chosen such that the waves are located at x = x_min = 0 and x = x_max at t = 0, and at x = x_f - δ/2 and x = x_f + δ/2 at time t = t_f.

2. A pair of constraints in the form of artificial, stationary 'edges' that 'funnel' [87] the solution into a neighbourhood of B. Explicitly, these are

       c_funnel = ( y - x/2 + 50,  -y - x/2 + 150 ).

We note that the program is easily modified to accommodate different boundary conditions, hence the time invested in constructing the first program is not entirely lost. A general program such as (6.8) can automatically handle different obstacles that may even be moving. Naturally, this approach produces a guess that is consistent and feasible.

Figure 6.17: DAI and Optimal Trajectories for Multi-Obstacle Problem

The initial guess produced by this program is depicted in Figure 6.17. Also shown are the optimal-control solutions for two different tolerances. The following table shows the relative computation times to produce an optimal solution for different tolerances. The first uses an initial guess from the DAI program (6.8) and is listed under the column labelled CPU1. The second uses an initial guess obtained from shooting (CPU2). The times quoted are in seconds. The CPU time to produce the initial guess by solving the DAI described above was 0.50 seconds.
TOL      CPU1      CPU2
10^-3     52.26      5.57
10^-4    122.82    126.97
10^-5    118.14    105.75
10^-6    109.70     83.69

Table 6.5: Optimal-control solutions using initial guesses obtained from shooting and DAI methods.

Comparing the results, we see that the savings in computation time are not great when starting from a reasonably accurate initial guess obtained from shooting. The savings that are significant are in terms of programmer time and program flexibility. It takes relatively little effort to construct the program and its solution. The initial guess obtained by means of a DAI was produced in 0.51 seconds of CPU time, or about 0.6% of the TOL = 10^-6 case. Δ

6.3 Robot Arm

As our final example, we give some results for the robot arm application described in Section 1.3.2. Simulations are carried out to a final time t_f = 40 s, representing almost 20 periods of motion in all cases. The following parameter values are used in all simulations reported in this section: m_1 = m_2 = 36 kg, l_1 = l_2 = 1 m, g = 9.81 m s^-2. In this example, equations (1.8) are solved with the initial conditions

    θ_1(0) = 70°,  θ_2(0) = -140°,  ω_1(0) = ω_2(0) = 0.        (6.10)

Also, all results shown use u_frac = 0.5, n_samp = 1. The default control is taken to be identically zero. This is done as an illustration of how the method can work despite limited knowledge on the part of the programmer. An alternative possibility for u_nat is to use the Lagrange multipliers for the corresponding DAE. If r is in fact small, this situation is analogous to a DAI implementation of optimal control (cf. Case 6.5). The Lagrange multiplier solution is in some sense optimal because the parabola (1.10) can be followed with r = 0. However, such a strategy would not be implementable in the presence of uncertainties or modelling errors. A DAI formulation of the problem with small r would circumvent this.

The qualitative comparison is rather unexciting because in all cases the robot arm simply swings in the feasible region.
We note that the motions were very close to the actual parabolic constraint path given in (1.10) for all cases, even for those with relatively large r. As is typical in a DAI approach to programming, general motions are possible along with more specific ones. The general motion for larger r values may be appropriate, for example, if (moving) objects run the risk of intersecting the trajectory of the elbow.

          r       %u      (a)   (a)   CPU
 1500     0.005   97.125   0     0    25.53
 1500     0.01    97.125   0     0    25.52
 1500     0.1     97.125   0     0    25.00
  550     0.5     99.05    0     0    27.57

Table 6.6: Simulation results: Robot Arm

Figure 6.18: Motion of Robot Arm: r = 0.005

Figure 6.19: Motion of Robot Arm: r = 0.5

Summarizing, in this chapter we saw that the DAI solver described in Chapter 3 is an efficient method for several problem instances. We started with a simple example of a mobile robot executing very general motion and ended with the more complicated case of robot arm motion along a specific path. We have also examined possible scenarios for interaction between DAIs and optimal control, and illustrated some advantages of the local nature of our solution method. The ability of this method to deal with all of these problems clearly demonstrates its potential and applicability to problems in robotic programming.

Chapter 7

Conclusions

Like other new ideas in scientific computing, it will succeed on some problems and lose on others. - Gil Strang

7.1 Summary

In this thesis, we have described a numerical method for solving the systems of differential-algebraic inequalities that arise from the programmed-constraint approach to robot programming. The procedure is applicable to similar problems where inequalities are a natural way to express unilateral constraints of an arbitrary nature. The programmed-constraint approach is appealing because programs are often geometrically motivated and intuitive.
Moreover, programs can be developed incrementally. However, the range of possible tasks and desiderata for a robot program is so diverse that no single method can be the most appropriate in all cases.

A key motivation for a numerical method designed to solve DAIs is to reduce the amount of computation for such dynamic constraint-satisfaction problems by exploiting the fact that the feasibility region of the constraints typically has a large, non-empty interior. Our approach combines a delay-free discretization (usually composite Euler [109]) of the equations of motion with a constrained minimization to obtain a local control. Local planning is introduced, which provides a state-dependent test for future constraint violation and is used as a switching mechanism to activate new constraints. Our simulation results to date indicate that the method is efficient, often with only a small fraction of time steps explicitly accounting for the constraints. We also report relatively small CPU times, in particular compared to CPU times for corresponding optimal-control solutions.

The local nature of the approach is new. This solution method is possible for the types of problems where a weaker specification of the control is desired, such as in the Least Constraint approach to robot programming. Because of this, a measure of computational efficiency and adaptability can be exploited. The natural extension of the above techniques to systems of DAEs subject to programmed constraints should increase their overall applicability; however, the approach is not applicable to every problem in robotics. For example, more investigation is required into problems that require finer control, such as docking or parts mating.

We have also seen how an efficient solution to the DAI formulation of more traditional robot programs can complement existing approaches, in particular optimal control, both as an effective starter and as a compensator for new information.
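The step structure summarized above can be caricatured in a few lines. The following is a schematic sketch under our own drastic simplifications (forward Euler in place of composite Euler, a finite candidate set of controls in place of a proper constrained minimization, and hypothetical callables f, g, criterion); it is not the thesis implementation.

```python
# Schematic DAI-type step: advance q' = f(q, u) one step of size h,
# choosing u so that the predicted next state satisfies g(q) <= 0
# componentwise; the default control is kept whenever it is viable.

def dai_step(q, f, g, u_candidates, u_default, h, criterion):
    """Return (q_new, u) for one step.  Controls are screened by
    predicting the next state and testing the inequality constraints."""
    def feasible(u):
        q_new = [qi + h * fi for qi, fi in zip(q, f(q, u))]
        return all(gi <= 0.0 for gi in g(q_new)), q_new

    ok, q_new = feasible(u_default)
    if ok:                         # default control already keeps viability
        return q_new, u_default
    best = None
    for u in u_candidates:         # crude stand-in for the minimization
        ok, q_try = feasible(u)
        c = criterion(u)
        if ok and (best is None or c < best[2]):
            best = (q_try, u, c)
    if best is None:
        raise RuntimeError("no viable control found at this step")
    return best[0], best[1]
```

For a toy integrator q' = u with constraint q <= 1, the default control is overridden exactly when it would drive the state out of the viability region, which mirrors the observation above that only a small fraction of steps explicitly account for the constraints.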
Generally, there is no strict connection between the DAI and optimal-control solutions; however, DAIs are easy to program, and their solution adds very little computational expense to the optimal-control calculation. Thus, they provide a flexible alternative to shooting and a more consistent framework than traditional holonomic motion planners. Also, the hidden-object example (Case 6.6 in Section 6.2) illustrates a feature that is absent from motion planners: the potential for processing new information as it becomes available. Such a feature is imperative for successful autonomous robot programming.

The generalization of delay-free discretizations to higher order as implicit-explicit Runge-Kutta schemes represents a new and potentially powerful method for numerically integrating differential equations whose right-hand sides contain terms of a different nature. We have motivated their development, given examples of them, and shown that they offer significant computational advantage over their linear multistep counterparts in all the situations considered thus far.

The viewpoint of constructing a solution to a viability problem has led to a greater understanding of the problem definition and of what it is that a solution method should try to do. We have seen how the algorithm presented is a nonstandard selection method based on a nonstandard optimization criterion, in which the relative position within the viability region is maintained from one time step to the next.

7.2 Directions for Future Research

There are many potential candidates for future investigation that arise out of the present dissertation. Suitable avenues emerge from approaching the method of programmed constraints from different perspectives, as well as from increasing the focus on particular aspects of the solution method discussed. A brief summary of broad classifications of future work follows.
• from the point of view of robotics: The move to more autonomous robotic systems from teleoperated ones is inevitable. There are many good reasons to want to take humans out of the feedback loop [81]. One of the more obvious reasons is that delays between action and response can be too long for meaningful human feedback. In this case, reactive systems must be able to fend for themselves and execute programs independent of human intervention.

From the point of view of simulation, additional study could be carried out into increasing the robustness of the solution method in more applications, especially when constraints are ill-conditioned (intersect at very small angles, etc.). This is a weakness of the programmed-constraint methodology [87].

A natural direction for future research is to test our algorithm in practice, with the most obvious candidates being the mobile robots [15]. At present, the success of our algorithm depends critically on the existence of a reasonably accurate model for the system. Such a model is needed in Step 4 in order to elicit information on how a choice of control will influence future states. Unfortunately, no such model exists for the Dynamite system [78, 15]. We have performed some experimentation on developing such a model using standard system-identification and parameter-estimation techniques [77, 83, 21, 11]. The process of matching experimental data with a differential equation (such as (1.2)) is often a nontrivial task, and the results we obtained have thus far proven unsatisfactory in that robust predictions could not be made under normal operating conditions. Hence, the next step to this end would entail a more serious effort at system identification. However, this in turn would typically require detailed analysis of the transfer functions associated with the driving motor and steering mechanism, study of the reliability of the power source, vision system, etc., all of which would be outside the scope of this thesis(1).
We 1A 'cleaner' possibility may exist through the use of techniques from artificial intelligence [23]; however, results and reliable software are still forthcoming. 128 note that our algorithm for solving DAIs is of course not restricted to problems associated with mobile robots. Thus, there is the distinct possibility of a future physical implementation on another system. • applications from outside of robotics: In Section 1.3.3, we described how the programmed-constraint approach could be used with respect to a reformula-tion of the moving-mesh problem. In some respects, it represents a fundamen-tally new way of viewing this problem. There is a diverse range of publica-tions on this subject (see for example, [56, 55, 92, 74, 75, 123, 57]), each one demonstrating effective behaviour in appropriate circumstances. However, a generally accepted method has yet to emerge. Many of the proposed methods also have interpretations as different stabilizations of DAEs [9, 75], such as Baumgarte stabilization [16]. Our idea for a DAI formulation of the moving-mesh problem is the following (see also [9]): For simplicity, consider a one-dimensional partial differential equation vt = f(v,vx,vxx), a<x <b, t > 0. A method-of-lines discretization on a spatial mesh a = XQ < X \ < . . . < XN = 6, yields a set of ODEs for the solution V{ at each node x^. Vi = f(vi,DiVi,D2Vi), where Dj, j — 1,2 is a (compact) discretization scheme for the j th spatial derivative. If we now allow the (internal) mesh points to move and assign 129 control variables u to the mesh-point velocities, we obtain the moving-mesh equations vi = f(vi,DxVi, D2Vi) + (DiVi)ui ii = Ui, i = 1,2,..., N - 1. In Section 1.3.3, we mentioned reasons why strict enforcement of equidistri-bution may not be the best criterion for mesh movement. 
A natural way to express a discretization of the arclength monitor function as an inequality is

    (x_i - x_{i-1})^2 + (v_i - v_{i-1})^2 <= θ(t),

where θ(t) represents some measure of the equiportion, i.e. the average value of the monitor function on each subinterval. For the purposes of numerical stability, this is combined with a local quasiuniformity condition

    K/(K+1) <= (x_i - x_{i-1})/(x_{i-1} - x_{i-2}) <= (K+1)/K,

which limits the amount by which a mesh can change between successive time steps [40]. In practice, K is taken equal to 1 or 2. This formulation represents a novel albeit nontraditional application for DAIs and our solution algorithm.

• IMEX RK discretization schemes: Three avenues of further research present themselves:

1. The generation of higher-order 'explicit' symplectic methods as described by (5.9) for partitioned Hamiltonian systems would have immediate application to conservative systems such as those found in molecular dynamics [107, 71] or astronomy [49]. These are two of the leading research areas with a vested interest in symplectic numerical integrators.

2. The use of upwinding in numerical methods for advection problems can be interpreted as an addition of artificial diffusion to a hyperbolic, or inviscid, term. It has been argued that a criterion different from the usual von Neumann analysis is appropriate when upwinding is considered (see, for example, [106]). It seems natural that the original inviscid term be treated explicitly, whereas the artificial diffusion contribution be treated implicitly. A successful time discretization in the presence of shocks and other discontinuous effects involves the use of total variation diminishing (TVD) schemes [105, 104]. Thus, IMEX RK schemes with a TVD explicit component would be of great interest.

3. Naturally, as with all new methods, we will be interested in simply testing the method out in a number of diverse applications.
There are many problems where a semi-implicit approach such as that offered by IMEX RK offers computational advantage. For example, the immersed-fibre problem (see [91, 110] and the references therein) naturally divides into terms that ideally should be dealt with in different ways: there are stiff, diffusion-related components and highly nonlinear forcing terms. Linear stability analysis has shown the existence of stable, but highly oscillatory, solution modes [110]. Current solution methods have concentrated on explicit methods; however, the time-step restrictions remain quite severe [110]. Thus, a semi-implicit method such as IMEX RK can be expected to significantly improve the time stepping. Also, for example, the Gierer-Meinhardt equations have been used as a model for problems in pattern formation [59, 100, 47]. These equations also divide naturally into terms requiring different treatment. Similar problems have been treated in [100] by implicit-explicit linear multistep schemes. Their numerical solution is challenging [59]. Hence, this problem is well suited for an improved treatment by IMEX RK methods due to their enhanced stability region.

• DAEs and DAIs: The addition of equality constraints to a system such as (1.1) has two compelling facets. First, including DAEs permits complicated robotic systems to be modelled in a more natural way, without transforming to a minimal coordinate system through elimination of the constraints (see, for example, [51]). Thus, control of other robotic systems like the platonic beast [88] or other walking machines [86] could be programmed by means of programmed constraints. Second, from the point of view of theory, investigation is warranted into the interaction between viability (weak invariance) and the algebraic-equation component of the DAE (strong invariance).
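As a minimal illustration of the implicit-explicit splitting discussed above (not a scheme taken from this thesis), the first-order forward-backward Euler step for u' = λu + g(u) treats the stiff linear term implicitly and the remaining term explicitly, so the step remains stable for arbitrarily large |λ| while g is only evaluated at the known state.

```python
def imex_euler_step(u, h, lam, g):
    """One forward-backward Euler (first-order IMEX) step for
    u' = lam*u + g(u):
        u_new = u + h*(lam*u_new + g(u)),
    i.e. the stiff linear part lam*u is implicit, g is explicit, and the
    implicit equation is linear so it can be solved in closed form."""
    return (u + h * g(u)) / (1.0 - h * lam)
```

For lam = -1000 and h = 0.1 the amplification factor is 1/101, so the stiff mode is damped without any step-size restriction; a fully explicit Euler step would require h on the order of 1/|lam| here.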
• more unified analysis: Typical results like proximal aiming [32] from viability theory [13, 14], stable bridges [111] from differential game theory [60, 73], and numerical methods for ODEs subject to inequalities [49, 120, 46, 103] do not offer control strategies for when the system is inside the viability set: action is only taken when the system reaches the boundary. In some situations, this is the only possible course of action. However, due to the local uncontrollability of nonholonomic systems, this is clearly unacceptable in many robotics applications [81, 76, 54]. There is certainly much scope for more analysis and development of (planning) strategies that would lead to a better understanding of the relationships between these aspects.

Bibliography

[1] S. Abarbanel, D. Gottlieb, and M. Carpenter. On the removal of boundary errors caused by Runge-Kutta integration of nonlinear partial differential equations. SIAM J. Scient. Comput., 17:777-782, 1996.

[2] U. Ascher, H. Chin, L. Petzold, and S. Reich. Stabilization of constrained mechanical systems with DAEs and invariant manifolds. J. Mech. Struct. Machines, 23:135-158, 1995.

[3] U. Ascher, H. Chin, and S. Reich. Stabilization of DAEs and invariant manifolds. Numer. Math., 67:131-149, 1994.

[4] U. Ascher and P. Lin. Sequential regularization methods for higher index DAEs with constraint singularities: I. Linear index-2 case. SIAM J. Numer. Anal., to appear.

[5] U. Ascher and P. Lin. Sequential regularization methods for nonlinear higher index DAEs. SIAM J. Scient. Comput., to appear.

[6] U. Ascher and L. Petzold. Projected collocation for higher-order higher-index differential-algebraic equations. J. Comp. Appl. Math., 43:243-259, 1992.

[7] U. Ascher, S. Ruuth, and B. Wetton. Implicit-explicit methods for time-dependent PDE's. SIAM J. Numer. Anal., 32:797-823, 1995.

[8] U. Ascher and R. Spiteri. Collocation software for boundary value differential-algebraic equations. SIAM J. Scient.
Comp., 15:938-952, 1994.

[9] U. Ascher and R. Spiteri. DAE models for moving mesh methods. In preparation, 1997.

[10] U.M. Ascher, S.J. Ruuth, and R.J. Spiteri. Implicit-explicit Runge-Kutta methods for time-dependent partial differential equations. Applied Numerical Math., Special Issue on Innovative Time Integrators, 1997. To appear.

[11] K.J. Astrom and P. Eykhoff. System identification - a survey. Automatica, 7:123-167, 1971.

[12] J.P. Aubin. A survey of viability theory. SIAM J. Control and Optimization, 28(4):749-788, 1990.

[13] J.P. Aubin. Viability Theory. Birkhauser, Berlin, 1991.

[14] J.P. Aubin and A. Cellina. Differential Inclusions. Springer Verlag, New York, 1984.

[15] R. Barman, S. Kingdon, J.J. Little, A.K. Mackworth, D.K. Pai, M. Sahota, H. Wilkinson, and Y. Zhang. Dynamo: real-time experiments with multiple mobile robots. In Proceedings of the IEEE Intelligent Vehicles Conference, pages 261-266, Tokyo, Japan, July 1993.

[16] J. Baumgarte. Stabilization of constraints and integrals of motion in dynamical systems. Comp. Methods Appl. Mech., 1:1-16, 1972.

[17] R.E. Bellman. Adaptive control processes: a guided tour. Princeton University Press, Princeton, 1961.

[18] R.E. Bellman. Introduction to the mathematical theory of control processes. Academic Press, New York, 1967.

[19] R.E. Bellman and S. Dreyfus. Applied dynamic programming. Princeton University Press, Princeton, 1962.

[20] J. Blom and P. Zegeling. A moving-grid interface for systems of one-dimensional time-dependent partial differential equations. ACM TOMS, 20(2):194-214, 1994.

[21] H.G. Bock. Recent advances in parameter identification techniques for ODEs. In P. Deuflhard and E. Hairer, editors, Workshop on Numerical Treatment of Inverse Problems in Differential and Integral Equations, 1982.

[22] J. Borenstein and Y. Koren. Real-time obstacle avoidance for fast mobile robots. IEEE Trans. Syst. Man Cyber., 19(5):1179-1187, 1989.

[23] E. Bradley and R. Stolle. Automatic construction of accurate models of physical systems.
Annals of Mathematics and Artificial Intelligence, 17:1-28, 1996.

[24] K. Brenan, S. Campbell, and L. Petzold. Numerical Solution of Initial-Value Problems in Differential-Algebraic Equations. North-Holland, 1989.

[25] R.W. Brockett. Asymptotic stability and feedback stabilization. In R.W. Brockett, R.S. Millman, and H.J. Sussmann, editors, Differential Geometric Control Theory, pages 181-191. Birkhauser, Boston, 1983.

[26] C. Budd, G. Koomullil, and A. Stuart. On the solution of convection-diffusion boundary value problems by grid adaptation. Technical report, Math. Res. Rep. AM95-04, Univ. of Bristol, 1995.

[27] J.F. Canny. The complexity of robot motion planning. PhD thesis, MIT, Dept. Electrical Eng. Computer Sci., May 1987.

[28] C. Canuto, M.Y. Hussaini, A. Quarteroni, and T.A. Zang. Spectral Methods in Fluid Dynamics. Springer-Verlag, 1987.

[29] M. Carpenter, D. Gottlieb, S. Abarbanel, and W. Don. The theoretical accuracy of Runge-Kutta time discretizations for the initial-boundary value problem: A study of the boundary error. SIAM J. Scient. Comput., 16:1241-1252, 1995.

[30] C.W. Carroll. The created response surface technique for optimizing nonlinear, restrained systems. Operations Res., 9:169-184, 1961.

[31] E. Charniak and D. McDermott. Introduction to Artificial Intelligence. Addison-Wesley, New York, 1986.

[32] F.H. Clarke, Yu.S. Ledyaev, R.J. Stern, and P.R. Wolenski. Qualitative properties of trajectories of control systems: A survey. Journal of Dynamical and Control Systems, 1(1):1-48, 1995.

[33] F.H. Clarke, Yu.S. Ledyaev, E.D. Sontag, and A.I. Subbotin. Asymptotic controllability implies feedback stabilization. Preprint.

[34] J.D. Coope and R. Fletcher. Some numerical experience with globally convergent algorithms for nonlinearly constrained optimization. J. Opt. Theo. Applns, 32(4):1-16, 1979.

[35] R. Courant. Variational methods for the solution of problems of equilibrium and vibration. Bull. Amer. Math. Soc., 49:1-23, 1943.

[36] M. Crouzeix. Une methode multipas implicite-explicite pour l'approximation des equations d'evolution paraboliques. Numer. Math., 35:257-276, 1980.

[37] A.W. Divelbiss and J. Wen. Nonholonomic path planning with inequality constraints. IEEE, pages 52-57, 1994.
Une methode multipas implicite-explicite pour l'approximation des equations devolution paraboliques. Numer. Math., 35:257-276, 1980. [37] A.W. Divelbiss and J. Wen. Nonholonomic path planning with inequality constraints. IEEE, pages 52-57, 1994. 135 B. R. Donald. A search algorithm for motion planning with six degrees of freedom. Artificial Intelligence, 31(3), 1987. A. Dontchev and F. Lempio. Difference methods for differential inclusions: a survey. SIAM Review, 34(2):263-294, 1992. E.A. Dorfi and L.O'c. Drury. Simple adaptive grids for 1-D initial value prob-lems. J. Comp. Phys., 69:175-195, 1987. A.V. Fiacco and G.P. McCormick. Nonlinear Programming: Sequential Un-constrained Minimization Techniques. John Wiley and Sons, New York, 1968. R. Fletcher. An ideal penalty function for constrained optimization. J. Inst. Maths. Applns, 15:319-342, 1975. R. Fletcher. Practical Methods of Optimization, Volume I. John Wiley and Sons, Chichester, 1980. R. Fletcher. Practical Methods of Optimization, Volume II. John Wiley and Sons, Chichester, 1981. K.R. Frisch. The logarithmic potential method of convex programming, 1955. Memorandum, University Institute of Economics, Oslo. C. W. Gear. Maintaining solution invariants in the numerical solution of ODEs. SIAM J. Sci. Stat. Comp., 7:734-743, 1986. A. Gierer and H. Meinhardt. A theory of biological pattern formation. Ky-bernetik, 12:30-39, 1972. E. Griepentrog. Gemischte Runge-Kutta-Verfahren fur steife Systeme. In Sem-inarbericht Nr. 11, Sekt. Math., pages 19-29, Humboldt-Universitat Berlin, 1978. E. Hairer, S.P. Norsett, and G. Wanner. Solving Ordinary Differential Equa-tions I: Nonstiff Problems. Springer-Verlag, 1993. E. Hairer and G. Wanner. Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems. Springer-Verlag, 1991. E.J. Haug. Computer-Aided Kinematics and Dynamics of Mechanical System Vol.1: Basic Methods. Allyn and Bacon, 1989. M.R. Hestenes. Multiplier and gradient methods. J. Opt. Theo. 
Applns., 4:303-320, 1969.

[53] N. Hogan. Impedance control: An approach to manipulation. ASME Journal of Dynamic Systems, 107:1-7, March 1985.

[54] H.P. Huang and N.H. McClamroch. Time-optimal control for a robotic contour following problem. IEEE Journal of Robotics and Automation, 4(2):140-149, 1988.

[55] W. Huang, Y. Ren, and R. Russell. Moving mesh methods based on moving mesh partial differential equations. J. Comp. Phys., 113:279-290, 1994.

[56] W. Huang, Y. Ren, and R.D. Russell. Moving mesh partial differential equations (MMPDEs) based on the equidistribution principle. SIAM J. Numer. Anal., 31:709-730, 1994.

[57] J. Hyman and B. Larrouturou. Dynamic rezone methods for partial differential equations in one space dimension. Applied Numerical Mathematics, 5:435-450, 1989.

[58] L. Ingber. Simulated annealing: Practice versus theory. Mathematical Computer Modelling, 18(11):29-57, 1993.

[59] D. Iron. Spectral Analysis of the Gierer-Meinhardt Equations. Master's thesis, Department of Mathematics, University of British Columbia, Vancouver, Canada, 1997. In preparation.

[60] R. Isaacs. Differential games: a mathematical theory with applications to warfare and pursuit, control and optimization. J. Wiley and Sons, New York, 1965.

[61] J. Funda, R. Taylor, K. Gruben, and D. La Rose. Optimal motion control for teleoperated surgical robots. In SPIE Symposium on Optical Tools for Manufacturing and Advanced Automation, Boston, MA, September 1993.

[62] J.G. Verwer, J.G. Blom, and W. Hundsdorfer. An implicit-explicit approach for atmospheric transport-chemistry problems. Applied Numerical Mathematics, 20:191-209, 1996.

[63] G.E. Karniadakis, M. Israeli, and S.A. Orszag. High-order splitting methods for the incompressible Navier-Stokes equations. J. Computational Phys., 97:414-443, 1991.

[64] O. Khatib. Real time obstacle avoidance for manipulators and mobile robots. The International Journal of Robotics Research, 5(1):90-99, 1986.

[65] J. Kim and P. Moin.
Application of a fractional-step method to incompressible Navier-Stokes equations. J. Computational Phys., 59:308-323, 1985.

[66] D.E. Koditschek. Robot planning and control via potential functions. In The Robotics Review 1. MIT Press, 1989.

[67] D.E. Koditschek. The control of natural motion in mechanical systems. ASME Journal of Dynamic Systems, Measurement and Control, 111(4):547-551, 1991.

[68] D.E. Koditschek. Some applications of natural motion control. ASME Journal of Dynamic Systems, Measurement and Control, 113:552-557, 1991.

[69] J.C. Latombe. Robot Motion Planning. Kluwer, 1991.

[70] Yu. S. Ledyaev. Criteria for viability of trajectories of nonautonomous differential inclusions and their applications. Journal of Mathematical Analysis and Applications, 182:165-188, 1994.

[71] B. Leimkuhler and R. Skeel. Symplectic integrators in constrained Hamiltonian systems. J. Comp. Phys., 112:117-125, 1994.

[72] W. Leler. Constraint Programming Languages: Their Specification and Generation. Addison-Wesley, New York, 1988.

[73] J. Lewin. Differential games: theory and methods for solving game problems with singular surfaces. Springer-Verlag, New York, 1994.

[74] S. Li and L. Petzold. Moving mesh methods with upwinding schemes for time-dependent PDEs. Dept. of Computer Science and Army High Performance Research Center, Univ. of Minnesota, 1996.

[75] S. Li, L. Petzold, and Y. Ren. Stability of moving mesh systems of partial differential equations. Dept. of Computer Science and Army High Performance Research Center, Univ. of Minnesota, 1996.

[76] Z. Li and J.F. Canny, editors. Nonholonomic Motion Planning. Kluwer, Boston, 1993.

[77] L. Ljung. System Identification: Theory for the User. Prentice-Hall, Englewood Cliffs, New Jersey, 1987.

[78] M.K. Sahota. Real-Time Intelligent Behaviour in Dynamic Environments: Soccer-playing Robots. Master's thesis, Department of Computer Science, University of British Columbia, Vancouver, Canada, 1991.

[79] K.
Miller and R.N. Miller. Moving finite elements I. SIAM J. Numer. Anal., 18:1019-1032, 1981.

[80] F. Miyazaki and S. Arimoto. Sensory feedback based on the artificial potential for robots. In 9th IFAC, Budapest, Hungary, 1984.

[81] R.M. Murray, Z. Li, and S.S. Sastry. A Mathematical Introduction to Robotic Manipulation. CRC Press, Boca Raton, Fla., 1994.

[82] W.S. Newman and N. Hogan. High speed robot control and obstacle avoidance using dynamic potential functions. In IEEE Int. Conf. Robotics Automat., pages 14-24, Raleigh, NC, 1987.

[83] J.P. Norton. Introduction to Identification. Academic Press, New York, 1986.

[84] P. Lotstedt. Numerical simulation of time-dependent contact and friction problems in rigid body mechanics. SIAM J. Sci. Stat. Comput., 5(2):370-393, 1984.

[85] P. Lotstedt. Numerical simulation of time-dependent contact and friction problems in rigid-body mechanics. SIAM J. Sci. Stat. Comput., 5(2), 1984.

[86] D.K. Pai. Programming anthropoid walking: Control and simulation. Technical Report TR90-1178, Cornell University, 1990.

[87] D.K. Pai. Least Constraint: A framework for the control of complex mechanical systems. In Proceedings of the American Control Conference, pages 615-621, 1991.

[88] D.K. Pai, R.A. Barman, and S.K. Ralph. Platonic beasts: a new family of multilimbed robots. In Proceedings of the IEEE Conference on Robotics and Automation, 1994.

[89] W.J. Palm. Modeling, Analysis, and Control of Dynamic Systems. John Wiley and Sons, 1983.

[90] V.V. Pavlov and A.N. Voronin. The method of potential functions for coding constraints of the external space in an intelligent mobile robot. Soviet Automatic Control, 6, 1984.

[91] C.S. Peskin. Numerical analysis of blood flow in the heart. J. Comp. Phys., 25:220-252, 1977.

[92] L. Petzold. Observations on an adaptive moving grid method for one-dimensional systems of partial differential equations. Applied Numerical Mathematics, 3:347-360, 1987.

[93] L.R. Petzold. A description of DASSL: a differential/algebraic system solver.
Transactions on Scientific Computation, 1, 1982.

[94] M.J.D. Powell. A method for nonlinear constraints in minimization problems. In R. Fletcher, editor, Optimization. Academic Press, London, 1969.

[95] W.H. Press, S.A. Teukolsky, W.T. Vetterling, and B.P. Flannery. Numerical Recipes: The Art of Scientific Computing. Cambridge University Press, second edition, 1992.

[96] E. Rimon and D.E. Koditschek. Exact robot navigation using artificial potential functions. IEEE Transactions on Robotics and Automation, 8(5):501-518, 1992.

[97] R.T. Rockafellar. Convex Analysis. Princeton University Press, Princeton, 1970.

[98] R.D. Ruth. A canonical integration technique. IEEE Trans. Nuclear Science, NS-30:2669-2671, 1983.

[99] S. Ruuth. Implicit-explicit methods for time-dependent PDE's. Master's thesis, Inst. Appl. Math., Univ. of British Columbia, Vancouver, 1993.

[100] S. Ruuth. Implicit-explicit methods for reaction-diffusion problems in pattern formation. J. Math. Biology, 34(2):148-176, 1995.

[101] V. Schulz. Reduced SQP Methods for Large-Scale Optimal Control Problems in DAE with Application to Path Planning Problems for Satellite Mounted Robots. PhD thesis, Interdisziplinäres Zentrum für Wissenschaftliches Rechnen, Universität Heidelberg, 1996.

[102] J.T. Schwartz and M. Sharir. On the "piano movers" problem. II. General techniques for computing topological properties of real algebraic manifolds. Advances Appl. Math., 4(1):298-351, 1983.

[103] L.F. Shampine. Conservation laws and the numerical solution of ODEs. Comp. and Maths. with Appls., Part B, 12:1287-1296, 1986.

[104] C. Shu. Total-variation-diminishing time discretizations. SIAM J. Sci. Stat. Comput., 9(6):1073-1084, 1988.

[105] C. Shu and S. Osher. Efficient implementation of essentially nonoscillatory shock-capturing schemes. J. Comp. Phys., 77:439-471, 1988.

[106] D. Sidilkover and U. Ascher. A multigrid solver for the steady-state Navier-Stokes equations using the pressure-Poisson formulation. Comp.
Appl. Math. (SBMAC), 14:21-35, 1995.

[107] R.D. Skeel, G. Zhang, and T. Schlick. A family of symplectic integrators: stability, accuracy, and molecular dynamics applications. SIAM J. Sci. Comput., 18(1):203-222, 1997.

[108] E.D. Sontag and H.J. Sussmann. Remarks on continuous feedback. In IEEE Conference on Decision and Control, pages 916-921, Piscataway, December 1980. IEEE Publications.

[109] R. Spiteri, U. Ascher, and D. Pai. Numerical solution of differential systems with algebraic inequalities arising in robot programming. In Proceedings of the IEEE Conference on Robotics and Automation, pages 2373-2380, Nagoya, 1995.

[110] J.M. Stockie and B.R. Wetton. Stability analysis for the immersed fiber problem. SIAM J. Appl. Math., 55:1577-1591, 1995.

[111] A. Subbotin. Generalized solutions of first-order PDEs. Birkhäuser, Boston, 1995.

[112] K. Taubert. Differenzenverfahren für Schwingungen mit trockener und zäher Reibung und für Regelungssysteme. Num. Math., 26:379-395, 1976.

[113] J. Van Tiel. Convex Analysis. John Wiley and Sons, New York, 1984.

[114] R.B. Tilove. Local obstacle avoidance for mobile robots based on the method of artificial potentials. In IEEE Int. Conf. Robotics Automat., pages 566-571, Cincinnati, OH, May 1990.

[115] T. Lozano-Perez. Spatial planning: A configuration space approach. IEEE Transactions on Computers, C-32(2):108-120, 1983.

[116] S. Turek. A comparative study of time stepping techniques for the incompressible Navier-Stokes equations: from fully implicit non-linear schemes to semi-explicit projection methods. Int. J. Num. Meth. in Fluids, 22:987-1011, 1996.

[117] R.J. Vanderbei. An algorithmic and numerical comparison of several interior-point methods. Technical Report SOR-94-05, Princeton University, March 1994.

[118] J.M. Varah. Stability restrictions on second order, three level finite difference schemes for parabolic equations. SIAM J. Numerical Analysis, 17(2):300-309, 1980.

[119] J. Verwer, J.
Blom, and J. Sanz-Serna. An adaptive moving grid method for one-dimensional systems of partial differential equations. J. Comp. Phys., 82:454-486, 1989.

[120] R. von Schwerin and M. Winckler. A guide to the integrator library MBSSIM, Version 1.00. Technical report, Interdisziplinäres Zentrum für Wissenschaftliches Rechnen, Universität Heidelberg, 1994.

[121] B.O. Watkins. Introduction to Control Systems. The MacMillan Company, New York, 1969.

[122] J.T. Wen. Control Systems Handbook (ed. W. Levine), pages 1359-1368. CRC Press, 1995.

[123] A.B. White. On the numerical solution of initial/boundary-value problems in one space dimension. SIAM J. Numer. Anal., 19:683-697, 1982.

[124] S. Wright. A path-following infeasible-interior-point algorithm for linear complementarity problems. Preprint MCS-P334-1192, November 1992.

[125] Y. Zhang and A. Mackworth. Constraint programming in constraint nets. In First Workshop in Principles and Practice of Constraint Programming, 1993.

Appendix A

Minimization Algorithm

Consider the following nonlinear programming problem:

\[ \min_{u} F(u), \quad u \in \Re^{n_u}, \tag{A.1a} \]

subject to

\[ c(u) \ge 0. \tag{A.1b} \]

We consider the solution of (A.1) by a penalty function method; that is, we solve a sequence of unconstrained minimization problems in which the objective function $F$ is augmented with terms that penalize the violation of the constraints (A.1b). From [94], a suitable penalty function for (A.1) is

\[ \phi(u, \theta, \Xi) = F(u) + \tfrac{1}{2}\, (c(u) - \theta)_-^T\, \Xi\, (c(u) - \theta)_- , \tag{A.2} \]

where $\theta \in \Re^{n_{cnstr}}$, $\Xi \in \Re^{n_{cnstr} \times n_{cnstr}}$, and $(\cdot)_- := \min(\cdot, 0)$. The matrix $\Xi$ is diagonal and consists of weights $\xi_i > 0$, $i = 1, 2, \ldots, n_{cnstr}$. Hence, (A.2) may be written equivalently as

\[ \phi(u, \theta, \xi) = F(u) + \frac{1}{2} \sum_{i=1}^{n_{cnstr}} \xi_i\, (c_i(u) - \theta_i)_-^2 . \tag{A.3} \]

For notational convenience, we often combine these diagonal entries into a vector $\xi$. The earliest penalty function was used by Courant [35] in the context of equality-constrained programming problems.
In this case, $\theta = 0$. We refer to [41, 44] for a discussion of the theoretical results for this method. We also note that equation (A.2) represents a generalization of the ideas originally put forth in [35] to the inequality-constrained programming problem.

The standard penalty function algorithm is the following: for fixed values of $\theta$ and $\xi$, an intermediate solution $u^{(s)}(\theta, \xi)$ is found as an unconstrained minimizer of (A.3). Then $\theta$ and $\xi$ are iterated over in such a way that the sequence of unconstrained minimizers $\{u^{(s)}\}$ converges to the constrained solution $u^*$ of (A.1).

Algorithm A.1 (Standard Penalty Function)

1. Choose a sequence $\xi^{(s)} \to \infty$ (typically, $\{1, 10, 10^2, \ldots\}$).

2. For each $\xi^{(s)}$, solve (A.3) for a local minimizer $u^{(s)} = u(\xi^{(s)})$.

3. Terminate under a suitable convergence criterion, such as $\|u^{(s)} - u^{(s-1)}\| < \epsilon$, for $\epsilon$ sufficiently small. △

Common drawbacks of this method include:

• the inefficiency associated with solving many minimization subproblems,

• the fact that the minimization subproblems are not exactly solvable, and

• the increased ill-conditioning of the minimization subproblems as $\xi_i^{(s)} \to \infty$.

The typical sequence for the $\xi^{(s)}$ suggested in Step 1 is meant as a compromise: $\{\xi^{(s)}\}$ increases in such a way as to balance the difficulty of finding accurate solutions for large values against ensuring that enough progress is made between increments toward the solution of the original problem (A.1). The ill-conditioning associated with large $\xi$ tends to have the largest effect on the performance of the penalty method [44].

A more well-conditioned method avoids this problem by iterating toward the optimal value of $\theta$ for fixed and finite $\xi$. We begin to describe such a method by defining the vector $\lambda \in \Re^{n_{cnstr}}$ with components

\[ \lambda_i := \xi_i \theta_i . \tag{A.4} \]

Substituting (A.4) into (A.3) and ignoring terms independent of $u$, we have¹

\[ \phi(u, \lambda) = F(u) + \frac{1}{2} \sum_{i=1}^{n_{cnstr}} \xi_i \left( c_i(u) - \frac{\lambda_i}{\xi_i} \right)_-^2 . \tag{A.5} \]

We now have the following theorem (see, for example, [44] for a proof):

Theorem A.
1 If second-order conditions hold at $(u^*, \lambda^*)$, then there exists a $\xi' > 0$ such that for all $\xi > \xi'$, $u^*$ is an isolated local minimizer of (A.5), i.e., $u^* = u(\lambda^*)$.

The value $\lambda^*$ is also the Lagrange multiplier at the solution to the original problem (A.1). Because this result holds independently of $\xi$, we suppress this argument henceforth and write $\phi = \phi(u, \lambda)$. An algorithm for a well-conditioned penalty function is as follows:

Algorithm A.2 (Well-Conditioned Penalty Function)

1. Determine a sequence $\{\lambda^{(s)}\} \to \lambda^*$.

2. For each $\lambda^{(s)}$, compute

\[ u^{(s)} := u(\lambda^{(s)}) = \arg\min_u \phi(u, \lambda^{(s)}). \]

3. Terminate when $u^{(s)}$ has converged sufficiently to $u^*$. △

¹The continuity of $\phi$ allows us to choose the more advantageous placement of the equality condition in the following.

It remains to determine the sequence $\{\lambda^{(s)}\}$. Following [44], we now briefly describe one particularly simple and effective choice, the ideal penalty function (as coined by Fletcher [42, 44]). We begin by noting that a convenient way to regard $u^*(\lambda)$ is as the solution to a neighbouring problem: let

\[ \psi(\lambda) := \phi(u(\lambda), \lambda). \]

It can be shown [44] that $\lambda^*$ is a constrained maximizer of $\psi(\lambda)$. Hence, methods for generating a sequence $\{\lambda^{(s)}\} \to \lambda^*$ can be derived from unconstrained minimization algorithms applied to $-\psi(\lambda)$. Using assumptions of strict complementarity at the solution and a regularity condition, it can be shown that $\nabla^2 \psi$ is negative definite. This leads to a natural choice for the sequence $\{\lambda^{(s)}\}$: the sequence generated by Newton's method [43, 44]. Given an initial estimate $\lambda^{(0)}$, the Newton iteration is

\[ \lambda^{(s)} = \lambda^{(s-1)} - \left[ \nabla^2 \psi^{(s-1)} \right]^{-1} \nabla \psi^{(s-1)}. \]

The appeal of this choice stems mainly from two convenient results [44, 42] that allow us to circumvent the usual drawbacks associated with the implementation of Newton's method:

\[ \nabla \psi_i = -\min\left( c_i(u(\lambda)), \frac{\lambda_i}{\xi_i} \right), \]

and, for large $\xi$,

\[ \nabla^2 \psi \approx -\Xi^{-1}. \]

This leads to the particularly simple iteration

\[ \lambda^{(s)} = \lambda^{(s-1)} - \Xi \min\left( c^{(s-1)},\ \Xi^{-1} \lambda^{(s-1)} \right). \]
(A.6)

We now describe an ideal penalty function algorithm that is used to solve the barrier minimization problem (3.5). The algorithm is designed to achieve linear convergence at a rate no worse than $\beta$.

Algorithm A.3 (Ideal Penalty Function)

1. Initialize $\lambda = \lambda^{(0)}$, $\xi = \xi^{(0)}$, iteration counter $k = 0$, and $\|\nabla \psi^{(0)}\|_\infty = \infty$. (Typically, we use $\lambda^{(0)} = 0$ and $\xi^{(0)} = 1$.)

2. Solve

\[ u^{(k)}(\lambda, \xi) = \arg\min_u \phi(u, \lambda, \xi) \]

and let $c = c(u^{(k)}(\lambda, \xi))$.

3. Terminate if $\|\nabla \psi^{(k)}\|_\infty$ has converged sufficiently to 0.

4. Check for a sufficient rate of convergence: if $\|\nabla \psi^{(k)}\|_\infty > \beta \|\nabla \psi^{(k-1)}\|_\infty$, then set $\xi_i \leftarrow 10 \xi_i$ for all $i$ such that $|\nabla \psi_i^{(k)}| > \beta \|\nabla \psi^{(k-1)}\|_\infty$ and return to Step 2.

5. Step accepted: increment the counter and update the parameters, $k \leftarrow k + 1$, $\lambda^{(k)} \leftarrow \lambda$, $c^{(k)} \leftarrow c$.

6. Determine the next $\lambda$ in the sequence from (A.6) and return to Step 2. △

In practice, $\beta$ is chosen to be 0.25. We note that in Step 4, if any component of $\nabla \psi$ is not reduced at this rate, the corresponding $\xi_i$ is increased in a suitable fashion, similar to Step 1 of Algorithm A.1. It can be shown that termination of this algorithm is guaranteed (see [44] for more details).

As a final remark on Algorithm A.3, we note that it can still suffer somewhat from the ill-conditioning produced by large $\xi$, as discussed previously, although not to the extent of the standard penalty function method [42, 44]. Also, in general, if the possibility that no solution exists goes undetected, $\xi$ can increase without bound, leading to floating-point exceptions. In our experience using Algorithm A.3 to solve the barrier minimization problem (3.5), we have not found these considerations to be significant.
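The steps of Algorithm A.3 can be sketched compactly. The following Python sketch is illustrative only, not the thesis implementation: the inner unconstrained minimizer (SciPy's BFGS), the tolerances, and the toy test problem are all assumptions made for this example.

```python
import numpy as np
from scipy.optimize import minimize

def ideal_penalty(F, gradF, c, Jc, u0, beta=0.25, tol=1e-6, max_outer=50):
    """Sketch of Algorithm A.3 (ideal penalty) for min F(u) s.t. c(u) >= 0."""
    u = np.asarray(u0, dtype=float)
    m = len(c(u))
    lam = np.zeros(m)       # lambda^(0) = 0
    xi = np.ones(m)         # xi^(0) = 1
    prev_norm = np.inf      # ||grad psi^(0)||_inf = infinity

    # phi(u, lambda) = F(u) + (1/2) sum_i xi_i * min(c_i(u) - lam_i/xi_i, 0)^2
    def phi(u, lam, xi):
        r = np.minimum(c(u) - lam / xi, 0.0)
        return F(u) + 0.5 * np.dot(xi * r, r)

    def phi_grad(u, lam, xi):
        r = np.minimum(c(u) - lam / xi, 0.0)
        return gradF(u) + Jc(u).T @ (xi * r)

    for _ in range(max_outer):
        # Step 2: inner unconstrained minimization (BFGS is an assumption here)
        u = minimize(phi, u, args=(lam, xi), jac=phi_grad,
                     method="BFGS", options={"gtol": 1e-10}).x
        grad_psi = -np.minimum(c(u), lam / xi)
        norm = np.linalg.norm(grad_psi, np.inf)
        if norm <= tol:                         # Step 3: converged
            break
        if norm > beta * prev_norm:             # Step 4: too slow, raise xi
            xi[np.abs(grad_psi) > beta * prev_norm] *= 10.0
            continue                            # re-solve with the same lambda
        prev_norm = norm                        # Step 5: step accepted
        lam = lam - xi * np.minimum(c(u), lam / xi)   # Step 6: update via (A.6)
    return u, lam
```

As a usage example, minimizing $(u_1-2)^2 + (u_2-1)^2$ subject to $1 - u_1 - u_2 \ge 0$ (a hypothetical test problem, not one from the thesis) yields the constrained minimizer $(1, 0)$ with multiplier $\lambda^* = 2$; the multiplier update converges linearly, and the $\xi$-increase branch in Step 4 mirrors the safeguard discussed above.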

