Solution Methods for Differential Systems subject to Algebraic Inequality Constraints

by

Raymond J. Spiteri

B.Sc.(Hons.), University of Western Ontario, 1990

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF Doctor of Philosophy in The Faculty of Graduate Studies, Department of Mathematics and Institute of Applied Mathematics.

We accept this thesis as conforming to the required standard.

The University of British Columbia
September 1997

(c) Raymond J. Spiteri, 1997

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Mathematics
The University of British Columbia
Vancouver, Canada

Abstract

The method of programmed constraints has recently been proposed as an executable specification language for robot programming. The mathematical structures behind such problems are viability problems for control systems described by ordinary differential equations subject to user-defined inequality constraints. Although these types of problems are common in applications, practical algorithms and software are generally lacking. This thesis describes a new method for the numerical solution of such viability problems. The algorithm presented is composed of three parts: delay-free discretization, local planning, and local control. Delay-free discretizations are shown to be consistent discretizations of control systems described by ordinary differential equations with discontinuous inputs. The generalization of delay-free discretizations to higher order in the context of implicit-explicit Runge-Kutta methods represents a potentially powerful new class of time integrators for ordinary differential equations that contain terms requiring different discretizations. The local planning aspect is a computationally inexpensive way to increase the robustness in finding a solution to the viability problems of interest, making it a refinement to a strategy based on viability alone. The local control is based on the minimization of an artificial potential function in the form of a logarithmic barrier. Theoretical examples are given of situations where the choice of such a control can be interpreted to yield heavy solutions. Simulations of two robotic systems are then used to validate the particular strategy investigated. Some complementarity is shown between the programmed-constraint approach to robot programming and optimal control. Moreover, we demonstrate the relative efficiency of our algorithm compared to optimal control in the case of programming a mobile robot: our method is able to find a solution on the order of one hundred times faster than a typical optimal control solver. Some simulations on the control of a simple robot arm are also presented.

Contents

Abstract
Contents
List of Tables
List of Figures
Acknowledgements

1 Introduction
1.1 Differential Systems subject to Algebraic Inequality Constraints
1.2 Problem Formulation
1.3 Applications
1.3.1 Mobile Robot
1.3.2 Simple Robot Arm
1.3.3 Moving Meshes
1.4 Overview

2 Background
2.1 Approaches to Robot Programming
2.2 Constraint Programming in Robotics
2.3 Constraint Programming as a Viability Problem
2.4 Numerical Methods

3 A DAI Solver
3.1 Overview
3.2 Time Discretization
3.3 Local Planning
3.4 An Algorithm for Local Control
3.4.1 Buffer Zone and Sensitivity Parameter Maintenance
3.5 Summary

4 Theoretical Considerations
4.1 Convexity of Barrier
4.2 Simple Examples
4.3 Interpretation of Local Control

5 Delay-Free Discretizations
5.1 Motivation
5.2 Implicit-Explicit Runge-Kutta Methods
5.3 Construction of IMEX RK Methods
5.3.1 Composite Euler (1,1,1)
5.3.2 Forward-Backward Euler (1,2,1)
5.3.3 Implicit-Explicit Midpoint (1,2,2)
5.3.4 A third-order combination (2,3,3)
5.3.5 L-stable, two-stage, second-order DIRK (2,3,2)
5.3.6 L-stable, two-stage, second-order DIRK (2,2,2)
5.3.7 L-stable, three-stage, third-order DIRK (3,4,3)
5.3.8 A four-stage, third-order combination (4,4,3)
5.4 Advantages of IMEX RK Methods

6 Numerical Experiments
6.1 Mobile Robot
6.1.1 Simulation
6.1.2 Very General Programs
6.1.3 More Refined Motions
6.2 Interaction with Approaches from Optimal Control
6.3 Robot Arm

7 Conclusions
7.1 Summary
7.2 Directions for Future Research

Bibliography

Appendices
Appendix A Minimization Algorithm

List of Tables

3.1 Hierarchy of Approximate Safety
6.1 Simulation results: Case 6.1
6.2 Simulation results: Case 6.2
6.3 Simulation results: Case 6.3
6.4 Simulation results: Spotlight Problem
6.5 Optimal-control solutions using initial guesses obtained from shooting and DAI methods
6.6 Simulation results: Robot Arm

List of Figures

1.1 Dynamites
1.2 Schematic of the mobile robot
1.3 Schematic of the robot arm
2.1 Traditional robot programming solution method
2.2 A candidate for a potential function
2.3 The eternal question: To B or not to B
3.1 Schematic summary of prediction strategy
3.2 A sketch of \phi_i in the case of one constraint
4.1 Barrier function for two-dimensional circular spotlight
4.2 Pseudo-Active-Set Behaviour of a Nonholonomic System
4.3 The barrier trajectory
5.1 Time-step stability restrictions for various values of \alpha and \beta, for IMEX schemes satisfying (5.7)
5.2 Time-step stability restrictions for various values of \alpha and \beta, for IMEX schemes satisfying (5.6)
6.1 Robot position and wave position with no damping
6.2 Robot position and wave position with damping
6.3 Undamped motion of robot (Case 6.2)
6.4 Damped motion of robot (Case 6.2)
6.5 Damped motion of robot (Case 6.3)
6.6 Undamped motion of robot (Case 6.3)
6.7 Solution exhibiting heavy properties
6.8 Solution not exhibiting heavy properties
6.9 Initial Guess for One-Obstacle Problem
6.10 Solution for One-Obstacle Problem with TOL = 10^{-3}
6.11 Solution for One-Obstacle Problem with TOL = 10^{-6}
6.12 DAI Solution for One-Obstacle Problem
6.13 Worst-case scenario of mobile robot approaching hidden obstacle
6.14 DAI Solution for Problem with Hidden Obstacle
6.15 Location of Multiple Obstacles
6.16 Initial Guess for Multi-Obstacle Problem
6.17 DAI and Optimal Trajectories for Multi-Obstacle Problem
6.18 Motion of Robot Arm: \tau = 0.005
6.19 Motion of Robot Arm: \tau = 0.5

Acknowledgements

Although a thesis is a work attributed to one person, mine is the end result of the supportive efforts of many. I express my thanks to my Doktorvater, Dr. Uri Ascher, for his generosity, patience, and insight, and Dr. Dinesh Pai for initiating the line of research that was presented in this thesis. I thank Dr. Philip Loewen, Dr. Jim Varah, and Dr. Brian Wetton for their encouragement, guidance, and support. I also thank Dr. Stephen Campbell, Dr. Richard Froese, and Dr. Wayne Nagata for their contributions to the final form of this thesis. For the many fruitful discussions I have had with them over the years, and most of all for their friendship, I thank Gilles Cazelais, Jorg Hilsebecher, Sammy Mercieca, Peter Newbury, Volker Schulz, and Harold Ship.

My roots came out from under a small rock called Gozo. It was there that an old teacher, Ganni Grech, took pity on a boy newly returned from abroad and showed him that he could become a mathematician. I thank my family, who prayed that this work would come to be: my parents Kelinu and Mananni, my grandparents, and all my aunts, uncles, and cousins. I thank Gorg Grech, who to this day remains an inspiration to me. I thank Dr. Paddy Nerenberg, who inspired me to a life of research, and without whom I would have found myself lost. I thank Pejxa and Keicha, who give much love and ask for little in return. And Lisa Kalynchuk I thank more than anyone, for her faith in me, for her love, and for the inspiration she gives me every day.

Raymond J. Spiteri
The University of British Columbia
September 1997
Chapter 1

Introduction

1.1 Differential Systems subject to Algebraic Inequality Constraints

The viability problem of a control system subject to a nonempty set of (user-defined) inequality constraints is referred to in this thesis as a Differential system with Algebraic Inequalities (DAI). This acronym is motivated by the more popular class of problems known as Differential-Algebraic Equations (DAEs). Much is known about the DAE case, where the algebraic inequalities are replaced by algebraic equations [24, 3, 2, 4, 5, 6]. The contribution of numerical analysts in this case has been the development of new discretization schemes, as well as their implementation in robust, general-purpose software for both initial-value [50, 93] and boundary-value [8] DAEs. In contrast, much less attention has been given to the DAI case; some numerical work has been done in the context of algebraic inequality invariants [46, 103], time-dependent contact and friction modelling [84], and methods for differential inclusions (see [39] for a survey).

A natural application for DAIs arises from the programmed-constraint approach to robot programming [125, 86, 87, 61]. Such methods are intermediate-level languages, residing between low-level, trajectory-based approaches and high-level motion planning. Their goal is to offer a declarative, yet computationally tractable framework for the solution of such problems. The focus of this thesis will be on developing methods to produce their numerical solution. Our strategy follows the principle that the control for a system should be chosen so that the boundaries of the viability region are avoided whenever possible. We put forth such a selection as a refinement to choosing a control based on viability alone.

1.2 Problem Formulation

We now give some mathematical structure to the DAI problem. Before taking into account any state constraints, the problem being considered is that of a control system

    \dot{q}(t) = f(t, q(t)) + B(t, q(t)) u(t),    (1.1a)
    u(t) \in U,                                   (1.1b)

where the state q and the control u belong to finite-dimensional vector spaces X and Z of dimension n and n_cntrl, respectively. We restrict our interest to trajectories on finite time intervals of the form [0, t_f].

In general, when the right-hand side of the inclusion (1.1b) is a function of the state q, U is a set-valued map from X to Z representing an a priori feedback that describes any state-dependent constraints on the controls. However, for the applications we are concerned with in this thesis, U is assumed to be a constant set (or a subset thereof), consisting of relations representing, for example, physical limitations (saturation) of electromechanical actuators:

    |u_j| \le u_{max,j},    j = 1, 2, ..., n_cntrl.

Now let K(t) be a closed subset of X. We say that K(t) defines the viability set for the control system (1.1a)-(1.1b). This set is similar to the feasibility set in mathematical programming [41]. The main difference is that feasibility sets are not usually concerned with the flow of an ordinary differential equation (ODE). It is through the definition and manipulation of the viability set K(t) that the robot program is specified [87]. We note that this formulation represents a unified task-description and control approach to robot programming.
For consistency of the viability problem, we assume that the system starts from a viable point q_0 \in K(0). We assume that K(t) is of the form

    K(t) = { q \in \mathbb{R}^n | c_i(t, q) \ge 0,  i = 1, 2, ..., n_cnstr }.    (1.1c)

The problem (1.1) is a viability problem. Viability theory (see, for example, [13, 14, 32]) attempts to give conditions relating K(t), U, and the dynamics in (1.1a), under which a trajectory of (1.1a), (1.1b) satisfies (1.1c) for all t \ge 0.

A trajectory of (1.1) is an absolutely continuous function q(.) such that (1.1a)-(1.1b) are satisfied almost everywhere [32]. A classical solution of (1.1a) exists if the right-hand side is continuous with respect to q and Borel measurable with respect to the control argument u. However, when a feedback law u = u(q) is considered, the concept of solution must be generalized. A classical solution exists in this case if u is a continuous function of q; however, continuous feedback laws often do not satisfy problem specifications [25, 108]. The question of how to define a solution to a differential equation with discontinuous right-hand side has been addressed often in the literature (see, for example, [33] and the references therein). The concept that we adopt here is consistent with discrete sampling and is therefore the most natural in terms of its fidelity to application. Consider a partition of [0, t_f]:

    \Pi_N = { 0 = t_0 < t_1 < ... < t_N = t_f }.

In many applications, the partition is uniform, with spacing \Delta t = t_i - t_{i-1} representing the sampling interval. The trajectory associated with the feedback u(q) is defined on each subinterval [t_{i-1}, t_i] as the solution with constant control value u(q(t_{i-1})). This interpretation of a solution is common in differential-game theory, where it is called a step-by-step strategy by Subbotin [111] or a K-strategy by Isaacs [60]. The forward Euler discretization of such a strategy can lead to another interpretation of a solution to (1.1a) given a feedback control u(q); however, this distinction is not important for the purposes of this thesis. Thus, we define a solution to (1.1) as a piecewise-constant control u(t) such that q(t) \in K(t) for all t \in [0, t_f]. The aim of the algorithm is to construct a solution (when one exists) to the viability problem (1.1) in an efficient and robust manner. In Chapter 4, we examine the properties of our strategy in the continuous-time limit \Delta t -> 0^+.

Solutions of (1.1) are generally nonunique. That is, there is usually a bundle of trajectories q(t) that satisfy (1.1). We denote this set of all possible trajectories as Q_K, explicitly noting its dependence on the viability set.

1.3 Applications

In this section, we describe three applications that can be formulated as DAIs. The first two applications deal with the control of a mobile robot and a simple robot arm, respectively. These are considered to be standard problems in robot control and can be viewed as very natural settings for DAI approaches. The last application deals with a reformulation of the well-known moving-mesh problem (see, for example, [56, 92, 57] and the references therein). This is perhaps a less obvious application for the DAI formulation. However, it serves to illustrate the potential for DAIs to provide novel ways to view otherwise unrelated problems.
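Before describing these applications individually, the sampled solution concept of Section 1.2 admits a compact sketch. This is illustrative only: forward Euler is used as a placeholder integrator, the feedback is supplied by the caller, and none of the names below come from the text; the actual method is developed in Chapter 3.

    import numpy as np

    def step_by_step(rhs, feedback, constraints, q0, t_final, dt, substeps=10):
        # Sampled ('step-by-step') solution of (1.1): the control is frozen at
        # u = feedback(t_i, q(t_i)) over each sampling interval of length dt,
        # and q' = rhs(t, q, u) is integrated with forward Euler sub-steps.
        # Viability (all c_i(t, q) >= 0) is checked at every sample.
        t, q = 0.0, np.asarray(q0, dtype=float)
        trajectory = [(t, q.copy())]
        while t < t_final - 1e-12:
            u = feedback(t, q)                    # constant over the interval
            h = min(dt, t_final - t) / substeps
            for _ in range(substeps):
                q = q + h * rhs(t, q, u)          # placeholder integrator
                t += h
            if any(np.min(c(t, q)) < 0.0 for c in constraints):
                raise RuntimeError("viability lost at t = %.3f" % t)
            trajectory.append((t, q.copy()))
        return trajectory

Each application below then only needs to supply its right-hand side rhs and its list of constraint functions c_i.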
1.3.1 Mobile Robot

We have performed numerical experiments based on a model of the Dynamite multiple mobile robot testbed [15]. This is a fleet of radio-controlled model cars that share an off-board vision system. Commands are transmitted to the cars at a rate of 50 Hz by means of a remote computer. The computer vision system is able to track the position and orientation of the robots at a rate of 60 Hz with an accuracy in position of a few millimetres. A more detailed description of the Dynamite system may be found in [15, 78]. In Figure 1.1, we show a picture of two Dynamites in their natural habitat.

Figure 1.1: Dynamites.

Let the robot be located with its centroid at (x, y), oriented at an angle \theta measured clockwise from the positive x-axis, and be moving with speed v. The controls are the normalized settings for the acceleration a and the steering \psi, i.e., \psi is the angle that the wheels make with respect to the car's body. We write the equations of motion of the robot as a set of ordinary differential equations (ODEs)

    \dot{x} = v \cos\theta,               (1.2a)
    \dot{y} = v \sin\theta,               (1.2b)
    \dot{\theta} = v \tan(\beta\psi)/L,   (1.2c)
    \dot{v} = \kappa a - \gamma v,        (1.2d)

where the quantities \beta, \gamma, \kappa are parameters representing steering gain, damping factor, and acceleration gain, respectively, and L is the length of the car. A more precise treatment of the dynamics would warrant terms to describe, for example, the differential equation for \psi and the rotational dynamics of the car. For the purposes of numerical simulation, the model may be simplified further by means of a few theoretical idealizations. These will be discussed further in Chapter 6, where results from numerical simulations are presented.

The viability set K(t) is described by constraints that keep the trajectory of the robot on the testbed, which is a table with opposite corners at (x_max, y_max) and (x_min, y_min) (see Figure 1.2). These are expressed as

    c_table = (c_1, c_2, c_3, c_4)^T = ( x - x_min, x_max - x, y - y_min, y_max - y )^T.    (1.3)

A speed limit of v_max is also imposed on the car's motion to reduce the effects of slippage. Thus, its motion may be more reliably modelled and controlled. This is formulated as

    c_speed = ( v_max - v, v_max + v )^T.    (1.4)

Figure 1.2: Schematic of the mobile robot (not to scale).

So far, we have not given the robot a task. All we have specified is that the trajectory should remain on the driving surface and obey a speed limit. The robot's task is programmed by means of a driving constraint. We will consider two forms for the driving constraint. One form is that of a plane wave that traverses the table surface at speed v_w and inclination \varphi \in (0, \pi) with respect to the x-axis. In this way, the robot can be guided to different locations on the table. In terms of the programmed-constraint framework, this is expressed as

    c_wave = y \cos\varphi - (x - v_w t) \sin\varphi.    (1.5)

We also make use of a driving constraint that takes the form of a spotlight. The spotlight is completely determined by the location of its centre (x_c(t), y_c(t)) and its radius r(t). We take the form of this to be

    c_spot = r^2(t) - (x(t) - x_c(t))^2 - (y(t) - y_c(t))^2.    (1.6)

The programmer is then free to parameterize these quantities according to the task requirements (within reasonable limits, of course).
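In code, the model (1.2) and the constraints (1.3)-(1.6) take the following form. This is only a sketch: the parameter values are placeholders chosen for illustration, not values from the Dynamite testbed, and the default wave and spotlight parameters are likewise invented.

    import numpy as np

    L_car, beta, gamma, kappa = 0.2, 1.0, 0.5, 1.0        # placeholder parameters
    v_max = 1.0                                           # speed limit in (1.4)
    x_min, x_max, y_min, y_max = 0.0, 2.0, 0.0, 1.0       # table corners in (1.3)

    def robot_rhs(t, q, u):
        # q = (x, y, theta, v), u = (a, psi); the mobile-robot model (1.2).
        x, y, theta, v = q
        a, psi = u
        return np.array([v * np.cos(theta),
                         v * np.sin(theta),
                         v * np.tan(beta * psi) / L_car,
                         kappa * a - gamma * v])

    def c_table(t, q):                 # (1.3): remain on the table
        x, y = q[0], q[1]
        return np.array([x - x_min, x_max - x, y - y_min, y_max - y])

    def c_speed(t, q):                 # (1.4): |v| <= v_max
        return np.array([v_max - q[3], v_max + q[3]])

    def c_wave(t, q, v_w=0.1, phi=np.pi / 4):    # (1.5): plane-wave constraint
        return q[1] * np.cos(phi) - (q[0] - v_w * t) * np.sin(phi)

    def c_spot(t, q, xc=1.0, yc=0.5, r=0.3):     # (1.6): stay inside the spotlight
        return r ** 2 - (q[0] - xc) ** 2 - (q[1] - yc) ** 2

Given any feedback (t, q) -> (a, psi), a call like step_by_step(robot_rhs, feedback, [c_table, c_speed, c_spot], q0, 10.0, 0.02) then produces a sampled trajectory of the programmed system.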
Similarly, circular obstacles of radius r_obs located at (x_obs, y_obs) can be described by

    c_obs = (x(t) - x_obs)^2 + (y(t) - y_obs)^2 - r_obs^2.    (1.7)

This process of simply adding more constraints to refine the behaviour of the robot is known as incremental programming, and it is a useful advantage of the programmed-constraint approach to robot programming. Further details and illustrations are given in Chapter 2.

1.3.2 Simple Robot Arm

As another example, we have investigated the control of a two-link robot arm. This is modelled as a planar double pendulum, consisting of two rigid rods with masses m_1, m_2 and lengths l_1, l_2. One end of the first rod (the shoulder) is fixed at the origin, whereas the other end is connected to the second rod (at the elbow). Rotations are allowed in the xy-plane. The free end of the second rod (the hand) can be required to follow a prescribed path, such as for tracking objects on an assembly line. Let the geometry of the system be described as shown in Figure 1.3.

Figure 1.3: Schematic of the robot arm.

We make use of the following variables for notational convenience:

    \kappa_i := \cos\theta_i,    s_i := \sin\theta_i,
    \kappa_{ij} := \cos(\theta_i + \theta_j),    s_{ij} := \sin(\theta_i + \theta_j),    for i, j = 1, 2.

The coordinates of the elbow are thus

    (x_1, y_1) := (l_1 \kappa_1, l_1 s_1),

and those of the hand are

    (x_2, y_2) := (x_1 + l_2 \kappa_{12}, y_1 + l_2 s_{12}).

We now denote \dot{\theta}_i by \omega_i for i = 1, 2 and let q = (\theta_1, \theta_2, \omega_1, \omega_2)^T. The equations governing the motion of the two-link robot arm can be written in the form

    M(q) \dot{q} = T(q),    (1.8a)

where

    T = ( \omega_1, \omega_2, T_1, T_2 )^T,    (1.8b)

with

    T_1 = -(1/2) m_1 g l_1 \kappa_1 - m_2 g ( l_1 \kappa_1 + (1/2) l_2 \kappa_{12} ) + (1/2) m_2 l_1 l_2 s_2 ( 2 \omega_1 \omega_2 + \omega_2^2 ),
    T_2 = -(1/2) m_2 g l_2 \kappa_{12} - (1/2) m_2 l_1 l_2 s_2 \omega_1^2,    (1.8c)

and

    M = [ I  0
          0  \tilde{M} ],    (1.8d)

where all blocks of M are of size 2 x 2 and

    \tilde{M} = [ (1/3) m_1 l_1^2 + m_2 ( l_1^2 + (1/3) l_2^2 + l_1 l_2 \kappa_2 )    m_2 ( (1/3) l_2^2 + (1/2) l_1 l_2 \kappa_2 )
                  m_2 ( (1/3) l_2^2 + (1/2) l_1 l_2 \kappa_2 )                        (1/3) m_2 l_2^2 ].    (1.8e)

The control variables u = (u_1, u_2)^T are introduced in the following way. We retain the notation of (1.8a) and the definitions in (1.8b), (1.8d), and (1.8e), while making the following modification to the definition of T in (1.8c): T is replaced by

    T + ( 0, 0, u_1, u_2 )^T.    (1.9)

The following inequality constraint is used to represent a possible path, in the form of a parabola, which the hand would be required to follow:

    \tau - | y_2 - x_2^2 + \beta | \ge 0,    (1.10)

where \beta is chosen so that the initial conditions (6.10) are satisfied. Here \tau represents a tolerance on the amount of vertical deviation from the parabolic path permitted in the computed solution. We note that the limit \tau = 0 represents a holonomic constraint.

1.3.3 Moving Meshes

Suppose a system of time-dependent partial differential equations (PDEs) describes an abrupt phenomenon, such as a shock wave, that is moving in time. A natural idea then is to use a method of lines, where the PDEs are discretized in space to produce a set of coupled ODEs in time for the solution at discrete mesh points. Now, suppose further, for reasons of accuracy and efficiency, that the spatial mesh points are allowed to move as functions of time, chasing the abrupt phenomenon. This gives a moving-mesh method. Many such schemes have been proposed by various researchers [20, 40, 56, 57, 79, 92, 119, 123] with the idea of moving the mesh so as to reliably portray solution fronts.
However, it has been reported that in some cases such methods can be little better than the simpler and less expensive fixed-mesh schemes [26]. The mesh equations themselves are nonlinear and hence add to the difficulty of solving even linear problems. The vast majority of the approaches reported involve, to some extent, the enforcement of an equidistribution principle. That is, the mesh points are chosen so that the value of a monitor function, such as the arc length of the solution, is distributed more or less equally between the mesh points. Thus, the problem being solved is a DAE.

The principal motivation for a DAI reformulation of the moving-mesh problem stems from the observation that often, in practice, meshes that are only 'close' to optimal perform as well as 'optimal' ones, and at a fraction of the computational expense. Treating the mesh-point velocities as the controls, it is possible to implement inequality constraints representing allowable deviations from equidistribution. In this way, the emphasis on adherence to strict equidistribution is lessened. From a philosophical standpoint, it seems appropriate to give more importance to the discretization of the differential operator than to the artificial constraint of equidistribution. This is in fact the opposite of what is naturally done with a DAE formulation, and it could explain the sometimes less-than-satisfying results using equidistributing moving meshes reported in the literature.

1.4 Overview

Numerical procedures for the efficient solution of the general DAI problem (1.1) are lacking. Consequently, much of the potential of the method of programmed constraints has yet to be fully tapped as a suitable framework for robot programming. The central contribution of this thesis is the development and investigation of an efficient numerical method for the DAI problem (1.1) based on a local control procedure. In this way, we exploit an advantage in computational efficiency over traditional global approaches. The applications studied also help to illustrate the effectiveness of programmed constraints. We propose a new family of discretization schemes that has great potential for applicability beyond the systems of ODEs typical in (1.1).

Accordingly, Chapter 2 presents some background material in order to establish important theoretical concepts, as well as to provide some context for the problem and relevant solution methods for it. This includes a brief description of DAIs as constraint-programming techniques, their use as applied to robotics, and their role with respect to more traditional approaches. Relevant concepts from viability theory are also presented.

Chapter 3 describes in detail a method for the numerical solution of a general DAI. The algorithm consists of three main parts: the appropriate (delay-free) discretization of the differential part of the problem (1.1a), a strategy for local planning (or anticipation), and a selection mechanism to provide a local control.
The description of the last part includes the construction and maintenance of an artificial barrier potential function, which provides a means for the system to be 'steered' away from the heightened risk of infeasibility near the boundary and towards the relative safety of the interior of the viability set.

Chapter 4 presents theoretical results for the proposed algorithm. The chapter begins with proofs of some convexity properties of the artificial potential function. Convexity is a very useful property because it aids the efficiency and reliability of the minimization procedure. Different examples are presented that illustrate the effect of the proposed control procedure. This is followed by a characterization of the control obtained from the artificial barrier potential.

Chapter 5 offers a detailed look at delay-free discretizations. Their generalization to implicit-explicit Runge-Kutta methods represents a new and potentially powerful technique for the numerical solution of ODEs that have terms of different nature, such as those obtained from semi-discretization of convection-diffusion equations. Details of all such schemes created thus far are also given, and their stability regions for a representative test equation are illustrated. This chapter forms part of the paper [10].

Chapter 6 contains the results from the use of the algorithm in solving various DAIs. The first part of the chapter deals with the problem of programming a mobile robot. Simulations show our solution method to be computationally efficient on a number of challenging problems. The complementarity of this approach to optimal control is discussed. It is shown how a DAI solution can be a good starting point for a solution by means of optimal control, and how it can be used as a practical method for implementing an already available optimal-control solution. An illustration of the benefits of a local versus global approach is given. Finally, some simulations on another robotic system, that of a simple robot arm, are presented. Overall, the results make several noteworthy contributions to the field of DAIs and their numerical solution.

Chapter 7 discusses our conclusions and possible future directions for this research.

Chapter 2

Background

The teleoperators of today are the autonomous robots of tomorrow.
- Murray, Li, and Sastry [81]

Over the past few decades, the fields of robotics and robot programming have been burgeoning areas of research. Many researchers, in both academia and industry, have studied the problem of point-to-point robot motion in a variety of forms, leading to a huge literature on the subject. Consequently, it is appropriate to outline the place of the programmed-constraint methodology relative to mainstream approaches for such robot programming (or robot navigation) problems. This provides an overview of the problem that we wish to solve and some basis for our particular numerical approach.

In this thesis, we adopt a very generic view of what constitutes a robot program or task. A robot task is considered to be a description of a goal which the robot is to accomplish. This definition purposely leaves a lot of room for interpretation. A program definition can be as simple and general as "Move to one end of the room." However, an effectively endless amount of complexity can be added.
For example, an equally valid robot program is "Move to one end of the room, picking up object O along the way, stopping exactly at point X, and optimizing over time and energy for the entire trip." Clearly, different methods or conjunctions of different methods may be more appropriate than others for a given task.

2.1 Approaches to Robot Programming

Traditionally, the typical robot programming problem involves point-to-point motion in the configuration space of the robot. For example, the robot arm of Section 1.3.2 might be required to move from its present configuration (\theta_1, \theta_2) to the vertical position (\pi/2, 0). The solution of such a problem is usually the product of three layers of treatment, as illustrated in Figure 2.1.

Figure 2.1: Traditional robot programming solution method: path planning (geometric), then trajectory planning (add time), then a control scheme (track trajectory).

First, the path-planning problem is solved: Given only geometric data such as initial and desired final configurations, robot size, and obstacle shapes and locations, construct a collision-free curve linking initial and final states of the robot. This beginning phase of the solution process is well researched. In [102], a solution based on cell decomposition is given to the purely geometric problem of finding a collision-free path (given perfect information) in an environment cluttered with polynomially described obstacles. A near-optimal solution is developed in [27], based on a similar idea, called a roadmap.

Having solved the path-planning problem, the traditional robot programmer now progresses to the trajectory-planning stage: Given a collision-free path, find a time parameterization of it, now accounting for constraints imposed by the robot's dynamics. Up to this point in the solution process, the dynamics of the robot have been completely ignored. Typically, conventional path planners are holonomic and implicitly assume that arbitrary motion is possible in the free parts of configuration space. However, if a system contains nonholonomic constraints (an equality constraint is called nonholonomic if it is satisfied by the generalized velocity of a mechanical system but cannot be expressed as an equivalent condition on the generalized coordinates), many of the solutions produced by these path planners cannot be directly applied. If they could, the familiar problem of parallel parking would be trivial. The mobile robot system (1.2) is an example of a nonholonomic system [81]. Nonholonomic systems are characterized by the fact that they are locally uncontrollable (i.e., the linearized system is uncontrollable) yet globally controllable [122]. Thus, depending on what level of constraints is taken into account, it is possible for the solution process to be stuck with a trajectory that cannot be followed by the system. That is, the paths generated may violate the often subtle nonholonomic constraints.

It is also possible to construct a finite set of stable control strategies over local time horizons and chain them together to produce a desired global motion, as sketched below.
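Here is a toy sketch of this chaining idea; the maneuver set, the one-interval step model, and the viability and goal tests are all invented for illustration. Sequences of maneuvers are enumerated breadth-first, and branches that leave the viable region are pruned.

    from collections import deque
    import numpy as np

    MANEUVERS = {                     # invented set of 'stable maneuvers'
        "hard_left":  (0.5, 0.0),     # (steering, acceleration)
        "hard_right": (-0.5, 0.0),
        "coast":      (0.0, 0.0),
        "speed_up":   (0.0, 0.5),
    }

    def apply_maneuver(q, name, dt=0.5, L=0.2):
        # One planning interval of a simple car model under a fixed maneuver.
        x, y, theta, v = q
        psi, a = MANEUVERS[name]
        return (x + dt * v * np.cos(theta), y + dt * v * np.sin(theta),
                theta + dt * v * np.tan(psi) / L, v + dt * a)

    def plan(q0, viable, at_goal, max_depth=6):
        # Breadth-first search over maneuver sequences; the tree grows
        # exponentially with depth, so pruning non-viable branches (and a
        # short horizon) is essential.
        queue = deque([(q0, [])])
        while queue:
            q, seq = queue.popleft()
            if at_goal(q):
                return seq            # first, and hence shortest, chain found
            if len(seq) < max_depth:
                for name in MANEUVERS:
                    qn = apply_maneuver(q, name)
                    if viable(qn):
                        queue.append((qn, seq + [name]))
        return None                   # no viable chain within the horizon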
Such an approach would yield a simple nonholonomic motion planner that could also handle hidden or moving obstacles. For example, in control of aircraft, a set of stable maneuvers might include "bank 10 degrees right" or "increase altitude by 5000 metres", etc. In such a manner, a tree linking all such combinations of maneuvers can be generated. The main job of the planner would then be to perform a search of the tree to find a suitable trajectory. Because the tree generally grows exponentially in size, a mechanism for pruning the less fruitful or even dead-end branches must also be in place. The search is often performed using techniques common in the field of artificial intelligence [31]. However, we note that such an approach may be neither computationally efficient nor easy to program: Even after pruning, there may still be a large number of possibilities to be examined, especially, for example, if moving obstacles in the environment force the time horizon to be very short.

The time-parameterized curve is called a reference trajectory. There has been a great deal written about the construction of reference trajectories, usually under the names of motion planning or nonholonomic motion planning. See [115, 38, 76, 81] for various examples and [69] for a survey.

Surviving the first two stages allows the programmer to graduate to the final stage of the solution process, solving the tracking problem: Determine a control that will allow the physical plant to follow the reference trajectory as closely as possible. This is most commonly referred to as solving the inverse dynamics problem. Thirty- and forty-year-old technologies in the form of PD and PID controllers [89, 121] and heuristics such as the Ziegler-Nichols rules [89] are often pressed into action to realize this final stage of the solution process. Happily, such methods are persistently robust and of great practical use, despite their age.

Also worthy of mention at this point are artificial potential field methods. These consist of constructing a real-valued map \phi on the robot's free configuration space (which can be associated with the viability set K(t)) with the following properties:

1. there exists a minimum of \phi at the goal configuration,
2. \phi is uniformly maximal on the boundary of K(t), denoted \partial K(t).

An example of an acceptable potential function is depicted in Figure 2.2. Once a suitable \phi is constructed, it determines a feedback law in which -\nabla\phi (subject perhaps to some added dissipation) is used as the input to the robot's actuators.

Figure 2.2: A candidate for a potential function.

The concept of potential function was pioneered by Khatib in the 1980's [64] in the context of obstacle avoidance, and it was further developed by Hogan [53] and Newman and Hogan [82] in the context of force control. There are also other independent developers of this methodology from outside of North America [80, 90].

We have seen how the typical robot programming problem is split into three phases that are solved separately. The first two are planning layers and are usually associated with high-level, off-line control. In contrast, the final phase, which does the actual controlling, is a low-level, on-line process.
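Returning to the feedback law u = -\nabla\phi mentioned above, the following sketch computes that law for one common construction in the spirit of [64]: a quadratic attractor toward the goal plus a repulsive term that acts only within a distance rho0 of each point obstacle. All gains and the specific field are illustrative choices, not a construction from the text.

    import numpy as np

    k_att, k_rep, rho0, damping = 1.0, 0.1, 0.5, 0.8    # illustrative gains

    def grad_phi(q, goal, obstacles):
        # Gradient of phi(q) = 0.5*k_att*|q - goal|^2
        #   + sum_i 0.5*k_rep*(1/rho_i - 1/rho0)^2 whenever rho_i < rho0,
        # where rho_i is the distance from q to obstacle i.
        g = k_att * (q - goal)
        for p in obstacles:
            d = q - p
            rho = np.linalg.norm(d)
            if 0.0 < rho < rho0:
                # gradient of the repulsive term: -k_rep*(1/rho - 1/rho0)*d/rho^3
                g += -k_rep * (1.0 / rho - 1.0 / rho0) * d / rho ** 3
        return g

    def potential_feedback(q, qdot, goal, obstacles):
        # u = -grad(phi) - c*qdot: descend the field, with added dissipation.
        return -grad_phi(q, goal, obstacles) - damping * qdot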
Potential field methods offer a unified approach to task description and control, because a suitable \phi represents path planning, trajectory planning, and a feedback control all at once. For example, the classic PD controller can be looked at as the gradient of a Hooke's-law type of potential [66]. Moreover, such computations can all be carried out in real time. Thus, it is not surprising that these methods have a successful history in practice [64, 53, 82].

The Achilles' heel of potential field methods, however, is the presence of spurious minima in the potential function. These become especially apparent in cluttered environments. Unfortunately, the construction of complex potential functions through the combination of simpler ones does not necessarily lead to easily predictable (or desirable) behaviour. This leads to disappointing results, especially in the transient behaviour of systems subject to these potentials [114, 22, 64]. In response to this, Rimon and Koditschek [68, 96] developed the concept of a navigation function in the early 1990's. They augment the required properties of potential functions to include functions that

1. have a unique minimum at the goal configuration,
2. are at least twice differentiable,
3. have a nonsingular Hessian at all critical points.

The navigation function automatically yields a bounded-torque feedback controller, guaranteeing collision-free motion and convergence to the destination for almost all initial configurations [96]. In principle, such functions can be shown to exist for any robot-obstacle combination. However, except in certain specialized situations [96], constructive techniques are still lacking. Moreover, certain assumptions in the theory, such as stationary obstacles, fixed destination, perfect information, ideal sensors, and ideal bounded-torque actuators, must ultimately be relaxed in a practical implementation.

2.2 Constraint Programming in Robotics

Although the traditional method is most commonly used for solving the robot navigation problem, it is not necessarily the most natural characterization. There can be a degree of arbitrariness involved in the programming process, often leading to program inflexibility, the severity of which is accentuated in cluttered and dynamic environments with changing task requirements. For example, suppose a robot is programmed to move from point A to point C, via point B (see Figure 2.3). It is generally not easy to modify this program to instruct the robot to proceed directly to point C before completing its path to B.

Figure 2.3: The eternal question: To B or not to B.

On the other hand, motion-planning approaches provide a very high level of programming. Of particular note is the optimal-control solution, where the robot motion optimizes a specified functional, such as the time taken or the energy expended during the motion. However, these approaches often incur great computational expense and are therefore of limited practical use in some situations, particularly where not all information is available at the outset. This is of definite concern for systems that are autonomous and reactive because much of the 'programming' must be done during the execution.
For example, control of a space-shuttle arm [101] is perhaps more suitable for effective programming using optimal control because the environment is amenable to allowing off-line computation, and the program will not be subject to unexpected changes (as the amount of space junk increases, however, this assertion may have to be modified). This is in contrast to a robot working in a hospital setting, where the long-term motion of patients, staff, carts, etc., cannot be known.

The method of programmed constraints has recently been advocated as an executable specification language for robot programming. It can be seen as an intermediate-level language, residing between low-level, trajectory-based approaches and high-level motion planning. The Least Constraint approach [87], which incorporates a weaker specification of control actions, was proposed by Pai and first appeared in the late 1980's. In this methodology, large time-varying sets of nonzero measure (which we identify as viability sets) are the means by which desired goals are specified, in direct contrast to explicit trajectory specification. The underlying philosophy is that all locations within the viability set are equally acceptable, and it is often unnecessary and unnatural to specify more for a task to be completed.

The viability sets are described by the (nonempty) intersection of inequality constraints that are satisfied at run time to produce the control. A solution to such a program necessarily produces behaviour that satisfies all the constraints. For example, consider the problem of a point robot x(t) with dynamics \dot{x} = u. In a potential field approach, the control is expressed as u = -\nabla\phi - c(x), where \phi is an artificial potential field to be specified, and c(x) is a dissipation term. Suppose that the robot is constrained to the region 0 < x < 1. A suitable (barrier) potential function to do this is \phi_1(x) = -\log x - \log(1 - x). Now suppose that the desired destination point is x_f. A suitable potential function to produce this result is \phi_2 = (1/2)(x - x_f)^2. Simply adding \phi_1 and \phi_2 to make a new potential function \phi clearly does not yield the desired result x(t) -> x_f if x_f is not near 1/2! However, subjecting the dynamics to the intersection of constraints of the form

    x(t) > w_0(t),    x(t) < w_1(t)

will always produce the desired effect, provided that the motion is possible for the choice of w_0(t), w_1(t) satisfying w_0(0) = 0, w_1(0) = 1, and w_0(T) = x_f - \epsilon, w_1(T) = x_f + \epsilon for suitable T and \epsilon.

This makes the Least Constraint approach very similar to constraint programming languages (CPLs) [72]. The distinguishing feature of CPLs is their declarative nature, as opposed to the algorithmic nature of imperative computer languages like Pascal or FORTRAN. The constraint programmer simply states the relations between objects, and it is up to the constraint-satisfaction system to find a solution. In this way, the algorithm that is presented in Chapter 3 can also be viewed as a particular case of such a system. A CPL allows the programmer to focus on the pure programming aspect of the problem, and not on the algorithmic details of how the different subtasks are executed. This makes it more tractable, for example, to teach mobile robots how to play soccer (see [78] for a discussion of other issues involved in this).
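Returning to the point-robot example above, a quick numerical check (a sketch; crude grid minimization is used purely for illustration) confirms that the minimizer of \phi_1 + \phi_2 is pinned near x = 1/2 rather than at x_f:

    import numpy as np

    def phi(x, xf):
        # phi_1 (logarithmic barrier for 0 < x < 1) plus phi_2 (pull toward xf)
        return -np.log(x) - np.log(1.0 - x) + 0.5 * (x - xf) ** 2

    xs = np.linspace(1e-4, 1.0 - 1e-4, 200001)
    for xf in (0.5, 0.9, 0.99):
        xstar = xs[np.argmin(phi(xs, xf))]
        print("xf = %.2f  ->  minimizer of phi1 + phi2 at x = %.3f" % (xf, xstar))

For x_f = 0.9, for instance, the minimizer sits near x = 0.55: the barrier terms dominate near the walls, which is precisely why the constraint formulation with w_0(t), w_1(t) is used instead of a sum of potentials.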
The programmer can concentrate on more abstract goals such as playing defense, while being much less preoccupied with specific details of robot motion. It is also a powerful technique for nontraditional control problems such as minimally invasive surgery performed by teleoperated robots [61, 81].

Besides also being a unified approach to programming, the advantage of CPLs that is particularly attractive to robot programming is the ease with which complex programs can be built up and modified. For example, consider the problem of controlling a walking machine [87, 86]. For this problem, programmers would prefer an environment in which assertions about the behaviour are specified incrementally, because a significant part of the solution process is in fact discovering the task requirements themselves. For example, an initial attempt at programming a robot to walk might include specifying a region in which the leading foot is supposed to land in order to maintain stability, and requiring that this foot be clear of the ground during its motion. It might then be discovered that one leg crosses behind the other and the robot trips. The programmer would like to modify the existing program by simply adding new assertions. This is not possible in current robot programming languages.

Finally, the use of constraints for robot programming also offers some advantages that can be complementary to the more traditional approaches. For example, the simplicity and intuitive geometric appeal of explicit path specification can be retained by means of a fictitious moving circle (or 'spotlight', as in (1.6)) whose centre follows a predetermined path. The radius of the spotlight is a measure of the tolerance within which the path should be followed. There are several advantages to this approach. First, it is easier for the programmer to specify a geometric entity such as the path of the spotlight and allow the plant itself to (implicitly) solve the inverse dynamics problem of staying within it. Moreover, it clearly makes more sense to program in terms of tolerances from an implementational viewpoint. Second, flexibility in avoiding obstacles or changing tasks is not sacrificed, because the spotlight path can be changed at any time without adversely affecting the solution procedure. (Unexpected events, such as obstacles that stray into the spotlight area, are handled automatically, to within the limitations of the system.) In other words, new information from a dynamic environment can be processed accordingly. Third, this method also dispenses with the problem of being forced to choose a functional to optimize so as to apply an optimal-control approach.

Often in robotics, as in everyday life, there is no clear choice of performance measure for a given task. For example, when placing an object on a table, there is (usually) no need to select a location that is, for example, maximally away from the edges. Any location that is 'sufficiently' far from the edges is equally acceptable. Thus, the solution to the place-object-on-table task is nonunique. Optimal-control solvers generally have a great deal of difficulty dealing with nonuniqueness of solutions, and an ill-posed performance measure can be a major cause.
Other practical difficulties include the determination of an initial trajectory and control profile that is 'close enough' to the exact solution and from which a nonlinear equation solver can converge. This also leads to the question of detecting convergence to undesired minima.

There are processes for which optimal control is a natural (and even easily computable) control strategy. For example, it is often desirable to use time-optimal control on manufacturing systems, especially those involving repetitive tasks. Also, the benefits of the additional effort exerted in optimizing the control are obvious. In this case, we offer the DAI framework for programming in more of a complementary sense, rather than strictly as an alternative to optimal control.

In Chapter 3, we describe an algorithm that is designed to solve the DAIs that arise from the programmed-constraint approach to robot programming. The central idea is based on the Least Constraint framework [87]. The solution method combines the integration of the equations of motion with the timely updating of a default control, where an attempt is made to keep the solution trajectory away from all constraint boundaries. This is achieved in the following way: A local planning mechanism is used to delineate buffer zones around the boundary of the viability set. The choice of control is then based on the minimization of an artificial potential function where the constraint boundaries act as barriers. The artificial potential function is a non-negative, continuous, and differentiable function whose value tends to +\infty as the system approaches the boundary of the viability set, and with its influence limited to a given extent within the interior. These are all properties shared with the potential functions originally proposed by Khatib [64]. However, there is a difference in the form of the artificial potential function near the boundary of the viability set: We make use of barriers with logarithmic singularities, whereas inverse-square barriers are used in [64]. Some reasons for our choice are given in Chapter 3.

2.3 Constraint Programming as a Viability Problem

In Chapter 1, the mathematical structure of the DAI problem is given as a viability problem (1.1). Viability theory, or controlled invariance [32], is a useful mathematical framework for the description of evolving macrosystems arising in areas such as biology, economics, and cognitive sciences, as well as differential games and control theory (see, for example, [13, 14, 111, 32] and the references therein). We concentrate in particular on the main results from viability theory as applied to control systems [13, 14, 32]. Viability theory has recently emerged as a popular application for control systems, although it has not always enjoyed this role (see, for example, [12] and the references therein). We begin with a very brief description of viability theory.

Viability theory deals mainly with three central concepts [13]:

• A nondeterministic evolution that provides many opportunities to explore the environment. This corresponds in part to the assumption of a viability region with nonempty interior.

• Viability constraints that must be respected by the system at all times for a solution to be meaningful.
This is also in fact the essence behind the definition of a DAI problem!

• An inertia principle, which states that the controls of the system are changed only when viability is at stake. We assume the existence of an a priori control strategy, u_nat(t), which is to be used by the system when it is 'safe' within the interior of the viability set. Despite the notation, this control can be open or closed loop. Examples of each are given in Chapter 3. In this thesis, an inertia principle would more generally refer to an adherence to the control u_nat(t).

We now provide a few more details for each of these items:

• By nondeterminism, it is implied that, at each instant, there exist many available viable evolutions, which may depend on the current state, or past evolution, etc. In this sense, the dynamics are not considered to be deterministic. Such a lack of determinism may also stem from disturbances or perturbations in the system, including those due to modelling errors, measurement errors, or incomplete information.

• There are many applications in which the evolution of a dynamical system is not meaningful unless certain constraints, called viability constraints, are obeyed. This is a common feature in DAEs [24, 3, 2, 50, 4], most notably multibody systems [120, 51] where, for example, distances between joints must be constant, and symplectic integrators [49, 107], which preserve differential 2-forms of a flow. For example, if a robot-programming task involves the motion of a robot in an arena, trajectories that would take the robot outside the confines of the arena have no physical significance or practical utility. The goal is therefore to construct a viable solution, that is, one that satisfies the constraints at all times. Viability theorems yield selection procedures for viable evolutions [13, 14]. That is, they characterize the relations between dynamics and constraints in order to guarantee the existence of a viable solution starting from any initial state. These theorems can also yield feedbacks that can be used to maintain viability, or even improve the state according to some performance measure (or preference relation). However, this theory is not always of a constructive nature and usually does not offer a strategy for when the system is still well inside K(t): the analysis focuses on what to do when the system reaches the boundary, \partial K(t). This is of particular concern for nonholonomic systems, which have fewer local degrees of freedom than global ones and hence may not be able to act on such a strategy. The algorithm presented in Chapter 3 is offered as a method to construct a solution to a viability problem arising from the programmed-constraint approach to robot programming.

Similar to the setting of a DAI program, viability theory does not require optimization of an intertemporal objective function. This, of course, is in contrast to optimal-control theory. The choice of control is not predetermined at some initial time. Rather, the theory allows for the possibility to adapt to a changing environment as defined by the viability constraints. Also in contrast with the assumptions of optimal-control theory, but similar to DAI philosophy, viability theory does not require any knowledge of the future (although trajectories are certainly implicitly constrained by the choices of the past, as we are).
We argue that some 'planning' is very useful in solving DAIs. However, it is implemented as a limited extrapolation to the future, based only on present (and possibly local past) information. So our view of the future is myopic at best, and may be viewed also as anticipation.

It is natural to argue that selection through viability alone may be too general. This brings up the aspects of complexity and robustness in choosing a viable trajectory. For example, it is possible that the state of a system becomes more robust as it moves into the interior of the viability set. In this situation, only those trajectories that move away from the boundary remain viable after a given length of time. For a practical, efficient DAI solver, both these considerations must be weighed into the choice of a selection procedure. Clearly, a balance must be struck between the computational effort spent on determining a control and the improvement of the overall 'safety' of the system.

• We have been advocating what amounts to a generalized inertia principle, which states that the controls are kept on a certain strategy, u_nat(t), as long as the viability of the system is not at stake. However, it is clear that despite this, the state of the system may evolve so that it reaches the viability boundary with a nonzero outward velocity. Such an event corresponds to a period of crisis: The system must now update the schedule given by u_nat(t) with a new control that will steer it toward the interior of the viability region. Of course, another way to resolve such a situation is to relax the constraints. Then the system will lie in the viability region defined by the new constraints. For example, if a table top is described by a set of constraints that leaves a border inside the edges, then these constraints can be relaxed to the true physical edges. When neither course of action is possible, by definition, the trajectory leaves the viability set and terminates.

Many possibilities present themselves when choosing a control at a crisis point [13, 14, 32]. For example, traditional strategies from viability theory include the choice of a control that provides a viable solution with minimal velocity. These are called heavy viable evolutions, or slow solutions [13, 14]. When the controls are velocities, heavy solutions have minimal acceleration or, in other words, maximal inertia. Alternatively, the algorithm presented in Chapter 3 chooses a control based on the minimization of an artificial potential function when viability is deemed to be at stake. We will see in Chapter 4 that this choice corresponds to a type of heavy solution in certain situations.

The main result from viability theory of relevance to problem (1.1) is the following: Denote the right-hand side of the appropriate differential inclusion associated with (1.1) by F(q). Now assume that F(q) is a Peano map, that is, a nontrivial, upper semi-continuous, set-valued map with nonempty, compact, convex values and linear growth. If a (constant) closed set K is a so-called viability domain, then for any initial state q_0 \in K, there exists a viable solution to (1.1) on [0, T]. A subset K \subseteq Dom(F) is a viability domain if and only if

    \forall q \in K,    F(q) \cap T_K(q) \ne \emptyset,

where T_K(q) is the contingent cone [13, 12, 32] to K at q. In less technical terms,
and in the context of (1.1), this means that for every state q there exists a feasible control u that can provide an inward velocity, i.e., a velocity directed towards the interior of K, whenever the system reaches the boundary. Thus, if such a selection is always possible, the system will remain viable. Every DAI solver is ultimately concerned with constructing a feasible control u that makes K a viability domain. Viability theory is mainly concerned with the properties of the set K, and the results are often not of a constructive nature (with proximal aiming [32] being a notable exception). Because we are dealing with a time-varying set K(t), the results must be applied to the graph of K(t),

    Graph(K) = {(t, q) ∈ ℝ × ℝⁿ | q ∈ K(t)}.

Some theoretical results for this case are presented in [70].

These ideas can also be related to the concept of a stable bridge from differential game theory. In this case, the two opposing players are endowed with controls over a dynamical system. One player would like the system to be in some target set at a fixed later time. It is possible to build a universal feedback to guarantee this, based on an extremal strategy in the presence of a known stable bridge [111]. This is very interesting as a way to get back into the viable set and merits future investigation.

2.4 Numerical Methods

Numerical treatment of ordinary differential equations with inequalities naturally divides into two types: those that deal with inequality invariants and those that deal with unilateral (one-sided) constraints. Consider a general ODE and some inequalities:

    y' = f(t, y(t)),    (2.1a)
    0 ≤ c(t, y(t)).    (2.1b)

We say that (2.1b) is an inequality invariant if the satisfaction of (2.1b) at some (initial) time guarantees that it is automatically satisfied by the exact solution y(t) of (2.1a) for all later times. Hence, it represents a set of conclusions about y(t). For example, physical properties such as chemical concentrations are known to remain non-negative. Difficulties in practice may arise because the numerically computed solution will not generally satisfy (2.1b). On the other hand, a unilateral constraint represents conditions imposed on y(t), such as described in the examples of Chapter 1. Hence, a numerical method for solving this problem must in fact alter the trajectory, perhaps through the addition of reactive forces or Lagrange multipliers, in order that (2.1b) be satisfied.

Some work has been done for the case of inequality invariants, for example [46, 103]. It has been shown [103] that all the standard one-step numerical integrators (of which explicit Runge-Kutta methods are prototypical) maintain linear invariants having constant coefficients. This result does not extend to the class of standard implicit methods [103]. In both [46, 103], a minimum l2-norm projection is utilized to project the discrete solution back onto the manifold defined by the active constraints after each integration step. A constraint of the form c_i(t, y(t)) ≥ 0 is said to be active at time t* if c_i(t*, y(t*)) = 0. We note also that the nonzero Lagrange multipliers at t* can only be associated with active constraints. In essence, the procedure in [46, 103] amounts to a selection of a Lagrange multiplier (similar in spirit to a heavy viable solution) in order to maintain the invariants.
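To make the projection step concrete, the following is a minimal sketch, under the assumption that the constraint function and its Jacobian are available as callbacks; the Gauss-Newton update and all names are illustrative, not the implementation of [46, 103].

```python
import numpy as np

def project_onto_active_set(y, c, jac_c, max_iter=20, tol=1e-12):
    """Minimum l2-norm projection of a computed state y back onto the
    manifold defined by the active constraints {i : c_i(y) <= 0}, via
    Gauss-Newton iterations; each update is the least-norm correction dy
    solving J dy = -c_active, i.e. dy = J^T (J J^T)^{-1} (-c_active)."""
    for _ in range(max_iter):
        cv = np.asarray(c(y), dtype=float)
        active = cv <= 0.0                  # violated or exactly active
        if not np.any(active) or np.max(np.abs(cv[active])) < tol:
            return y                        # feasible: leave the step alone
        J = np.asarray(jac_c(y))[active]    # Jacobian rows of active constraints
        dy = J.T @ np.linalg.solve(J @ J.T, -cv[active])
        y = y + dy
    return y
```

Note that the correction vanishes when no constraint is active, so feasible steps pass through untouched.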
A similar technique is used in [85] to satisfy unilateral constraints arising in rigid-body contact problems with Coulomb friction. There, the Lagrange multipliers are selected as the solution to a certain quadratic programming problem.

Thus far, all the solution methods described resemble active-set strategies. This results in the integration of a DAE of varying dimension, thereby making the problem much more difficult and computationally expensive, as different constraints drift in and out of the active set. Moreover, such techniques are not even always relevant in robotics problems, because physical limitations (often in the form of nonholonomic constraints) may not allow the system to 'slide' along the active constraint boundaries. In this case, the system has fewer local degrees of freedom than global ones. For example, a car approaching a wall (constraint) at an acute angle can come to rest against it, but it cannot slide parallel to it. Clearly, there is an essential need for some planning in an algorithm to solve such a problem.

We are particularly interested in the numerical solution of the control system (1.1a)-(1.1b), which is required to evolve in a viability region described by (1.1c). Such a control system can always be written as an initial-value problem for an ordinary differential inclusion [14, 32]

    y' ∈ F(t, y(t)),    (2.2)

for almost all t ∈ [t_0, T] and with y(t_0) = y_0.

In general, solutions of (2.2) form a set of functions Y, representing a bundle of trajectories. Because of this nonuniqueness of solutions, many of the usual concepts of numerical analysis must be modified when applied in this context.

There are two approaches to approximately 'solve' (2.2). The first is more in tune with traditional numerical analysis: use a finite-difference scheme, together with a suitable selection procedure, to produce a sequence of discrete values {y_n} for the solution on a suitable grid {t_0 < t_1 < ... < t_N = T}. In this case, the inclusion reduces to an ODE with a discontinuous right-hand side. Because we can place a mesh point at every control discontinuity, Runge-Kutta integration methods are favoured over linear multistep methods.

The second approach is rather intriguing, and its investigation is a possible candidate for future work. It consists of approximating the entire solution set Y of (2.2), and not simply one specific trajectory. One special case of note for this approach is the approximation of the reachable sets. This could provide useful insights into the programming process, especially in the design of the viability set. See [39] and the references within for more discussion of this approach.

For the general differential inclusion, it is not worthwhile to look for higher-order standard methods. For example, consider the following ODE with discontinuous right-hand side describing forced vibrations with viscous and Coulomb damping [39, 112, 49, 120]:

    y'' + (α_0 + sgn(y'))y' + y = f(t),

where f(t) is smooth. For this problem, it is shown in [39] that higher-order schemes exhibit the same difficulties as lower-order schemes in regions where the classical solution breaks down. However, the additional effort is worthwhile on subintervals where the right-hand side is single valued and smooth. Fortunately, this is the case for the control system (1.1), because we are able to dictate when the control is potentially discontinuous and place a mesh point at the appropriate spot.
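The following sketch illustrates this mesh-placement idea for a control that is piecewise constant between known switch times; the integrator, the switch-time representation, and the names are assumptions for illustration. Each subinterval then presents the one-step method with a smooth right-hand side.

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + 0.5*h, y + 0.5*h*k1)
    k3 = f(t + 0.5*h, y + 0.5*h*k2)
    k4 = f(t + h, y + h*k3)
    return y + (h/6.0)*(k1 + 2.0*k2 + 2.0*k3 + k4)

def integrate_piecewise_control(f, y0, switch_times, controls, n_sub=10):
    """Integrate y' = f(t, y, u), where u is held constant between
    consecutive switch_times; every jump in u coincides with a mesh
    point, so no Runge-Kutta step straddles a discontinuity."""
    y = np.asarray(y0, dtype=float)
    for t0, t1, u in zip(switch_times[:-1], switch_times[1:], controls):
        h = (t1 - t0)/n_sub
        frozen = lambda t, x, u=u: f(t, x, u)   # u frozen on [t0, t1)
        for k in range(n_sub):
            y = rk4_step(frozen, t0 + k*h, y, h)
    return y
```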
Selection strategies are used to improve the qualitative properties of the approximate solution when there is a bundle of solutions. For example, a solution can be chosen [39] with

• minimum norm, where the selection f_i ∈ F of smallest norm is made on each subinterval; or

• minimal variation, where the difference ‖y_{n+1} - y_n‖ between successive values is minimized.

We note that the first example is a case of the heavy solution mentioned previously. The algorithm proposed in the next chapter can also be viewed as a nonstandard selection strategy with a corresponding nonstandard objective function. We will see that the selection is made so as to preserve the relative distance of the system to the constraint boundaries defined in the graph of K(t) from one time step to the next. It is a hybrid selection strategy, switching between an a priori schedule u_nat(t) and a control based on the minimization of an artificial barrier potential. The switching is controlled by a local prediction strategy. Formal higher-order convergence of the method would be a suitable avenue for further research from a purely theoretical view.

Chapter 3

A DAI Solver

We now describe in more detail the method used to produce a solution to the robot-programming problem defined by (1.1). An earlier version has appeared in [109]. Most illustrations will be given as applied to the Dynamite system (1.2). Following an overview, we proceed with a detailed specification of the various components of the algorithm. A summary of this is given in Section 3.5.

3.1 Overview

We recall that the objective of a DAI solver is to facilitate the numerical integration of the viability problem (1.1). At each step, a local selection is made for the control u(t), based only on 'current' information (or estimates thereof) about the system and the viability set K(t). The default control u_nat is maintained as long as the viability of the solution is not in jeopardy. This strategy corresponds to a generalized inertia principle [13]. When the system is deemed too close to ∂K by a local prediction (or anticipation) strategy, a (new) control u*(t) is invoked. The anticipation process involves not only monitoring the constraints and their various rates of change, but also whether a sufficiently small deviation of the control from u_nat will allow the system to avoid each constraint boundary individually, though perhaps not simultaneously. This is the concept that we call weak approximate safety.

Desirable features of a solution method should include the following:

• Small computational expense in determining the control. This is required in two important situations. First, if the problem is being solved as a starter for an optimal-control solver, we would prefer that the DAI computation not be a significant additional expense. Small computational expense is also important in an implementation, because the commands to be issued by the controller must be determined in real time.

• Robustness. A suitable strategy should be capable of providing solutions in difficult or varying geometries, as well as exhibiting a degree of insensitivity to minor random errors in measurement, in modelling, and in finding the precise value of the locally optimal control. Although we wish to avoid a global motion-planning problem, a completely local approach will be severely limited in the scope of problems that can be solved.
In many situations, viable solutions are easily found with rudimentary planning schemes, but they are impossible to obtain without these schemes. Accordingly, we desire a prediction strategy that enlarges the K-reachable set over that which would be obtained without planning. The K-reachable (or K-attainable) set at time t ∈ [0, t_f] is defined as

    R_K(q_0, t) = {q(t) ∈ K | q(·) viable in K on [0, t] with q(0) = q_0}.

It is the usual reachable set, but with the added restriction of viability; hence its dependence on K.

Thus, as part of a planning procedure, an anticipatory or buffer zone near the boundary of the feasibility region must be created and maintained in order to give any algorithm the opportunity to choose a control (or sequence of controls, perhaps at consecutive time steps) to keep the system inside the viability region. The algorithm that we present attempts to solve the viability problem by avoiding the boundaries of the viability region whenever possible.

Although it is not strictly necessary, we assume that the user will have prescribed a default control profile u_nat, which should yield some kind of preferred or natural behaviour for the system. This is an opportunity for the user to exercise some influence on the solution produced. Let the controls in (1.2) be redefined to be a = ηd and σ = -(1/L) tan(βψ). Then, for example, u_nat = (σ_nat, a_nat) = 0 for the mobile robot system corresponds to slowing down but continuing in the same direction when viability is not at stake. This often leads to the effect of the car being bumped along by the driving constraint. However, the choice a_nat = γv simulates the absence of damping in the system (effectively, γ = 0). As a last resort, the user always has the option of simply choosing u_nat = 0 and allowing the local minimization process to determine a control from scratch. However, this may not lead to an intuitive or computationally efficient solution in all cases. This is a very application-dependent aspect. It is sometimes more instructive to view the solution process as a control strategy in itself, whereas at other times it can be viewed as simply a correction (improvement or super-routine) to an 'almost' acceptable u_nat. The advantage of the local nature of the approach is that it can, to some extent, be immune to surprise events, including measurement or model errors and incomplete information.

We note that despite its name, the natural control u_nat is not exactly 'natural' in the sense of Koditschek [67]. What Koditschek refers to as natural motion is actually unforced motion of a damped mechanical system. Although a common point in the two descriptions would be the choice u_nat(t) = (σ_nat, a_nat) = 0 in the mobile robot system (1.2), we take 'natural' to mean an a priori, often open-loop control provided by the programmer as the desired behaviour of the system when viability is not at stake.

The values for the controls are computed based on the approximate minimization of an artificial C¹ logarithmic barrier potential, where the singularities are aligned with ∂K(t). In this way, future states that continue to approach the boundary are penalized. The artificial potential function chosen attempts to maintain the same relative distance of the system to the boundaries between time steps.
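Before turning to the individual components, the overall strategy just described can be summarized in pseudocode. This is a minimal sketch: the callbacks weakly_safe, minimize_barrier, and step are placeholders for the prediction, local-control, and discretization modules specified in the remainder of this chapter.

```python
def dai_step(t, q, u_nat, weakly_safe, minimize_barrier, step):
    """One step of the generalized inertia principle: keep the default
    control u_nat while the system is deemed weakly approximately safe,
    and fall back to the barrier-minimizing control otherwise."""
    u = u_nat(t)
    if not weakly_safe(t, q, u):
        u = minimize_barrier(t, q)      # local control (Section 3.4)
    return step(t, q, u), u             # advance with u held constant
```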
The algorithm can be divided into three basic components: time discretization, local planning, and local control. Only a brief treatment of the time discretization is given in the following section; a more extensive treatment is deferred until Chapter 5. However, a detailed treatment of local planning and local control is given in the following sections. A summary of the entire algorithm appears in Section 3.5.

3.2 Time Discretization

A suitable discretization scheme is chosen based on the differential problem (1.1a). Consider a typical one-step method to advance the system from (t_{n-1}, q_{n-1}) with control u_{n-1} to (t_n, q_n), where t_n = t_{n-1} + Δt. Recalling that u_{n-1} is constant on the interval (t_{n-1}, t_n), we have

    q_n = Φ(t_{n-1}, q_{n-1}; u_{n-1}; Δt).    (3.1)

We note that an r-step multistep method would also involve the states q_{n-2}, q_{n-3}, ..., q_{n-r} in (3.1). However, multistep methods are of little use in this context because they attempt to continuously extrapolate from previous states to the next state, whereas u is generally discontinuous between subintervals. This situation is mitigated by one-step methods, provided that a mesh point can be placed at each jump in the value of the control, i.e., provided u is constant on each integration subinterval. Fortunately, this is the case for the DAI (1.1).

In solving the minimization problem (3.5), we view (3.1) as a function of the control. In such cases, we will employ the notation

    q_n = q_n(u).

This turns out to be a useful way of characterizing future states. Choices for particular discretizations of (3.1), especially ones that are delay free, are discussed in Chapter 5.

3.3 Local Planning

The second component of the algorithm is local planning. The underlying motivation for local planning is the idea that adding a planning layer to the programmed-constraint approach would allow more problems to be successfully solved. Although this is intuitively appealing, it also has a basis in concepts from viability theory [13]. It is possible that the state of a system becomes more robust as it moves into the interior of the viability set. So, only those trajectories which choose to move away from the boundary remain viable after a given length of time. In this way, movement away from the boundary corresponds to increasing the size of the K-reachable set. A similar view can be adopted of taking steps to change the evolution of the system before the boundary is reached. Typical results from viability theory or differential games do not offer a strategy for control until the boundary of the viability set is reached [13, 111, 32]. By then, depending on the form of the control system (1.1), the intersection of the reachable set and the viability set may be empty. This is especially true in the case of nonholonomic constraints, under which arbitrary motions are not possible locally.

Due to the physical limitations of the system, it is important to monitor the relative movement of the boundaries and decide whether they can be easily avoided. If not, then the system should act immediately towards improving upon this. We use the concept of weak approximate safety to determine when a control step should be executed.
We form a local approximation to the current behaviour of the constraints, and we perform a test on whether each constraint can be avoided individually by a control that is sufficiently close to u_nat. The size of this deviation is governed by a sensitivity factor U_frac, which represents the size of a relative deviation that is deemed to be 'small'. The quantity U_frac is defined to be a diagonal matrix with nonzero entries u_frac,j, j = 1, 2, ..., n_cntrl.

When the condition of weak approximate safety holds, the system is deemed safe and can proceed with u_nat. Otherwise, a new control step is computed, based on the minimization of an artificial potential function.

Having discussed the need for planning, we now discuss its implementation through the use of buffer zones. Let the quantities c_safe,i > 0, i = 1, 2, ..., n_cnstr, denote the inner edge of the buffer zone. The buffer zone itself at time t = t_{n-1} is then the set of all states q ∈ R_K(q_0, t_{n-1}) such that

    0 ≤ c(t_{n-1}, q) ≤ c_safe.

We generalize the concept of an active set to refer to the set of those constraints whose buffer zone is currently being penetrated by the system.

Our strategy is based on treating the constraints c as independent quantities and not inferring much about their dependence on q, t or, indirectly, on u. That is, we treat the components of c simply as functions that we would like to remain positive over time. Hence, the quantities ċ and c̈ at a given time are useful in predicting how c may evolve locally. These quantities may be computed analytically or numerically by difference quotients. For example, for c_1 from (1.3) of the mobile robot system (1.2), we have

    ċ_1 = ẋ = v cos θ    (3.2a)

and

    c̈_1 = v̇ cos θ - v sin θ θ̇ = a cos θ - v² σ sin θ.    (3.2b)

Wherever they occur, the controls are treated as constant over each time step, and so for the purposes of prediction we take ȧ = σ̇ = 0. This is consistent with the common practice of sample and hold.

The theoretical motivation behind our prediction strategy is summarized in Figure 3.1. Let the system be in a viable configuration q_{n-1} at time t = t_{n-1}. A strategy based on viability alone, or on no knowledge of the future, would be to simply maintain the schedule u_nat(t) until the system reaches the boundary (situation (a)). Clearly, there will be limitations to the robustness of such a strategy. Our prediction strategy involves using time-derivative information of the constraints. If ċ ≥ 0 at (t_{n-1}, q_{n-1}), we continue normal integration with u(t) = u_nat(t), because the system is not moving locally towards the boundary of the feasibility region (situation (b)). The system will be viable using u_nat(t) under a linear model for the evolution of the constraint equations. If for some i, ċ_i < 0, we compute c̈ and form the discriminant Δ with components defined by

    Δ_i = 2 c_i c̈_i - ċ_i²,    i = 1, 2, ..., n_cnstr.    (3.3)

[Figure 3.1: Schematic summary of prediction strategy; panels (a), (b), and (c) depict the three situations described here.]

If Δ_i > 0, then the parabolas given by

    c_i + ċ_i (t - t_{n-1}) + ½ c̈_i (t - t_{n-1})²,    t ≥ t_{n-1},

have no real roots, and hence locally we deem that constraint violation will not occur, based on a second-order model for the evolution of the constraint equations (situation (c)).
If Δ_i < 0, this indicates that an impact with the boundary of c_i is likely to occur, provided ċ_i, c̈_i remain relatively constant and u(t) = u_nat(t). Of course, this is not necessarily a good long-time predictor, because ċ_i, c̈_i are not generally constant. However, these are reasonable local-time predictors, and they can be useful over the entire interval of integration provided they are updated with sufficient frequency.

However, there is a shortcoming because, in most common multibody and robotics systems, constraints occur in opposing pairs (so-called box constraints are typical examples). In the case of a shrinking feasibility region (which is usual in nontrivial problems), it is easily seen that the conditions ċ ≥ 0 or Δ > 0 can never be satisfied. We would therefore like a test of more practical utility, the motivation for which is described now, using the mobile robot example.

Example 3.1 Suppose the mobile robot is moving toward the edge of the table described by x = x_max (in the absence of friction), with θ = 0 and at some (small) velocity ε cm s⁻¹. We temporarily assume that there are no driving constraints. Then

    ċ_2 = -ε < 0,    c̈_2 = 0,

so that

    Δ_2 = -ε² < 0.

Accordingly, the simple test would predict that a control step should be taken. However, if a_max > 0, then in theory no planning is ever required (in the absence of discretization errors). That is, it will be possible to stop the car before it goes over the edge in the very last step. Now suppose a plane wave is introduced with v_w > 0 and φ = π/2. This test would initiate a control at every step. So, we have a conflict: on the one hand, no planning is very inexpensive (the computational cost may be only a little more than that of a nonstiff integrator), but it suffers from not being robust in finding solutions; on the other hand, solving the minimization problem to be described in (3.5) at every step is too expensive computationally. △

We desire a method that acts as a compromise by detecting situations when u_nat(t) needs no modification. We do this by establishing relative degrees of 'safety' for the system inside the viability set K(t). The method is based on a second-order approximation to the constraints and is intended as a means to establish how the system may respond to a control that 'slightly' differs from u_nat(t) by an amount Δu. The classifications are summarized in Table 3.1.

Strong Approximate Safety:  there exists a vertex Δu such that either (b) or (c) in Figure 3.1 holds for all c_i simultaneously.

Weak Approximate Safety:  there exists a vertex Δu such that either (b) or (c) in Figure 3.1 holds for each c_i individually.

Table 3.1: Hierarchy of Approximate Safety

We now discuss some implementational details, with illustrations from Example 3.1. We recompute c̈_2, Δ_2 with a control having components u_nat,j ± Δu_j, where Δu is a vector with components Δu_j = u_frac,j u_max,j, j = 1, 2, ..., n_cntrl. The signs of the individual deviations Δu_j are as yet unknown, because it is not known a priori which combination will be the most advantageous. For example, in the preceding situation, the appropriate a_frac is negative. In the case that the user has specified a component that is too large, it is truncated at the upper bound. Such a strategy is not difficult to implement in practice.
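As a sketch of this second-order test, the following assumes that the predicted constraint derivatives under a candidate control are available as callbacks; the exhaustive vertex enumeration shown here is only for illustration, and the relaxation that avoids its exponential cost is described next.

```python
import itertools
import numpy as np

def impact_predicted(c, cdot, cddot):
    """Second-order test of Figure 3.1 for one constraint: no impact is
    predicted if c is locally nondecreasing (situation (b)) or if the
    model parabola c + cdot*s + 0.5*cddot*s**2 has no real root, i.e.
    Delta = 2*c*cddot - cdot**2 > 0 (situation (c))."""
    return cdot < 0.0 and 2.0*c*cddot - cdot**2 <= 0.0

def weakly_safe(c, cdot_of, cddot_of, u_nat, du):
    """Weak approximate safety: each constraint i is avoidable by *some*
    vertex u_nat +/- du of the control box, not necessarily the same
    vertex for every i; cdot_of and cddot_of map a candidate control to
    the predicted first and second constraint derivatives."""
    vertices = [u_nat + np.asarray(s)*du
                for s in itertools.product((-1.0, 1.0), repeat=len(du))]
    return all(any(not impact_predicted(c[i], cdot_of(v)[i], cddot_of(v)[i])
                   for v in vertices)
               for i in range(len(c)))
```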
The expressions for ċ, c̈ can be reduced to

    ċ = G Δu + r    (3.4a)
    c̈ = H Δu + s,    (3.4b)

where G, H ∈ ℝ^{n_cnstr × n_cntrl} and r, s ∈ ℝ^{n_cnstr}. Using these forms, we determine whether a choice of Δu can be found that will make ċ ≥ 0 and, if not, then Δ > 0. More precisely, we are searching the vertices of an n_cntrl-dimensional box with sides 2 u_frac,j u_max,j, centred at u = u_nat, to see if the conditions ċ ≥ 0 or Δ > 0 hold. If such a vertex exists, then we define the system to be in a state of strong approximate safety. This indicates that, based on our second-order approximation of c(t), a (small) deviation in control will likely enable the system to avoid all constraints.

Unfortunately, this condition cannot be satisfied in the typical case of opposing constraints and a shrinking feasible region. Moreover, testing for strong approximate safety can be computationally expensive: it is easily seen that the number of vertices increases exponentially with the number of controls. However, a practical test can be born from relaxing this requirement. If Δu_j appears with coefficients g_ij, h_ij in the expressions (3.4a) and (3.4b) respectively, then we recompute ċ_i, Δ_i using the nonzero values |g_ij Δu_j| and |h_ij Δu_j|. In this case, we are searching the vertices for ċ_i ≥ 0 or Δ_i > 0, noting that it is possible that a different vertex would be suitable for different constraints. If such vertices can be found, we define the system to be in a condition of weak approximate safety: a small control can be expected to avoid any one constraint, but not necessarily all constraints at once.

For example, recall equations (3.2a), (3.2b) for ċ_1, c̈_1. Let u_nat = 0. The estimate of 'appropriate' control with components of magnitude u_frac,j u_max,j then uses

    ċ_1 = v cos θ

and

    c̈_1 = a_frac a_max |cos θ| + σ_frac σ_max v² |sin θ|

in the calculation of

    Δ_1 = 2 c_1 c̈_1 - ċ_1².

If the condition of weak safety holds, then the integration continues with u = u_nat. Thus far we have found this test to be of practical utility. The quantity U_frac allows for errors in constraint measurements, the model, or predictions, and for potential inconsistencies therein. It can be considered a sensitivity parameter, initially specified by the user, but then dynamically altered. A full description of how it is updated appears in Section 3.4.1.

3.4 An Algorithm for Local Control

The third component of this algorithm is local control. This component of the algorithm takes over from the inertial behaviour u_nat when the viability of the system is at stake. The control to save the system from imminent death is determined by the following strategy.

Suppose that the system is in a viable state q_{n-1} at time t_{n-1}, and that the prediction strategy has deemed a control step necessary. We delineate the beginning of a buffer zone by assigning constants

    c_safe = c(t_{n-1}, q_{n-1}).

As an alternative to the 'unacceptable' u_nat, we obtain a new value for the control u* as an (approximate) solution to the following constrained nonlinear programming problem:

    u* ∈ argmin φ(u)    (3.5a)

such that

    c_i(q_n(u)) ≥ 0,    (3.5b)

where

    φ(u) = Σ_{i=1}^{n_cnstr} φ_i(c_i(q_n(u)))    (3.5c)

and

    φ_i(c_i) = c_i/c_safe,i - 1 - log(c_i/c_safe,i),   0 < c_i < c_safe,i,
    φ_i(c_i) = 0,   c_i ≥ c_safe,i,    (3.5d)

and (with a slight abuse of notation) c(q_n(u)) = c(t_n, q_n(u)) is used to emphasize that the constraints are being viewed as functions of the control.
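For concreteness, a minimal sketch of evaluating (3.5c)-(3.5d) follows; the callback c_of_u, which wraps the one-step map q_n(u) of Section 3.2, and all names are illustrative assumptions.

```python
import numpy as np

def barrier(u, c_of_u, c_safe):
    """Evaluate the C^1 logarithmic barrier phi(u) of (3.5c)-(3.5d):
    each constraint inside its buffer zone (0 < c_i < c_safe_i)
    contributes c_i/c_safe_i - 1 - log(c_i/c_safe_i); constraints at or
    beyond the buffer edge contribute nothing, and an infeasible state
    (some c_i <= 0) is assigned an infinite value."""
    c = np.asarray(c_of_u(u), dtype=float)
    if np.any(c <= 0.0):
        return np.inf                       # outside the viability set
    rho = c / np.asarray(c_safe, dtype=float)
    return float(np.sum(np.where(rho < 1.0, rho - 1.0 - np.log(rho), 0.0)))
```

Note that both the value and the slope of each term vanish at c_i = c_safe,i, which is the C¹ matching referred to below.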
A sketch of the barrier function (3.5d) is given in Figure 3.2 in the case of one constraint.

[Figure 3.2: A sketch of φ_i in the case of one constraint; φ_i vanishes for c_i ≥ c_safe,i and diverges as c_i → 0⁺.]

We note that if u* satisfies (3.5), then

    ∂/∂u Σ_{i=1}^{n_cnstr} φ_i(c_i(q_n(u*))) = 0,

that is,

    Σ_{i ∈ I_n(u*)} (1/c_safe,i - 1/c_i) ∇_u c_i(q_n(u*)) = 0,    (3.6)

where I_n(u*) = {i | c_i < c_safe,i, i = 1, 2, ..., n_cnstr} is the set of indices for the 'active' constraints.

We have chosen a barrier that not only disallows infeasible solutions, but also serves to keep the system away from the boundaries. We have chosen to use logarithmic barrier functions; however, other choices such as inverse or inverse-square functions are possible, depending on the desired interaction of the system with the potential function, especially near the boundary [30, 44, 45, 64]. The particular choice of the barrier function (3.5d) was made as a deterrent to a 'bang-bang' choice for the control when moving the system away from the boundary. In this way, the previous stability of the system evolving with a natural value of the control is not excessively undermined.

The advantage of logarithmic barriers lies in their relatively slow divergence near the singularity. The logarithmic terms are augmented by terms that make φ a C¹-function, further enhancing this effect. Generalizations to barrier functions of higher orders of continuity are available, but any added utility in practice over (3.5d) is not clear. Moreover, (3.5d) has some nice theoretical properties, such as convexity, and it can be shown to produce intuitively correct controls in the limit of continuous observations (see Chapter 4).

The problem (3.5) is solved by means of a penalty function method, that is, through the solution of a sequence of unconstrained minimization problems in which the objective function φ is augmented with a penalty term [43]. The strategy adopted for use in practice is taken from [42, 34, 43, 52, 94], although steps are taken so that an approximate solution to (3.5) is found quickly, sometimes at the expense of accuracy. Due to the local nature of the problem, however, an extremely accurate solution offers little advantage. A more detailed description of the minimization algorithm appears in Appendix A.

The presence of the logarithmic term in the definition of the barrier potential makes the choice of a control seem akin to interior-point solution methods from optimization. One possible interpretation in this context is that the solution of (3.5) is a particular iteration of an interior-point method to minimize the objective function Σ_i (c_i/c_safe,i - 1), where the sum is taken over all i such that c_i < c_safe,i, subject to c(q_n(u)) ≥ 0. The particular iteration is the one with barrier coefficient having the value unity. In other words, the objective function to be minimized is in fact the (scaled) one-norm of the differences between c_i and c_safe,i for i such that c_i < c_safe,i. Again we note that, due to the local nature of the solution and the time-varying feasibility region, there is no advantage offered by a full solution of the classical interior-point algorithm, in which μ → 0⁺.
The full computation, in which μ is allowed to approach zero, is too computationally intensive for the job of an on-line controller. Moreover, because K(t) is evolving in time, such a complete optimization procedure cannot be justified for a fixed t.

If a solution u* to (3.5) could be found, the system (1.1a) is advanced one time step, starting from (t_{n-1}, q_{n-1}). In the absence of discretization or modelling errors, the existence of a u* guarantees that the system will be viable at the next step. We then return to the 'natural' control u_nat (at least initially) as the control to attempt for the next time step.

3.4.1 Buffer Zone and Sensitivity Parameter Maintenance

As another measure taken to improve the overall efficiency, the boundary of the buffer zone, c_safe, and the components of U_frac are updated as the system evolves. The quantity U_frac can be considered a sensitivity parameter in the following sense: reducing the values of the components u_frac,j implies that the control step will be invoked more readily. Hence, the algorithm does not rely as much on the accuracy of predictions or on a relatively large value of the control to maintain viability. For this reason, we do not recommend the use of large values of U_frac in practice. The setting U_frac = 0 essentially invokes a control from (3.5) at every step. When the system is forced to choose u_j = u_max,j for some or all components j, the stability of finding a viable solution can be undermined. For example, it may be a sign that (perhaps due to a lack of adequate planning) the intersection of the K-reachable set and the viability region is becoming very small, possibly making it harder to find a viable solution.

Starting from an initial value U_frac provided by the user, prediction and integration are carried out as described in the previous modules. There are three situations in which we update the components of U_frac. These situations, and our respective strategies for their treatment, are the following:

• If there is an unexpected constraint violation, U_frac ← ½ U_frac. An unexpected constraint violation occurs when the prediction method fails to warn the control system that the next step (taken with u = u_nat) would yield an infeasible state. This could be due to a problem with the model or the prediction strategy, or due to the prediction that an application of a control of the form u_nat,i ± u_frac,i u_max,i yields an impact in less than one time step from a safe state.

• The choice of a control yields |u_j| = u_max,j for any j = 1, 2, ..., n_cntrl. For such j's, u_frac,j ← ½ u_frac,j. This is in fact a step to err on the side of prudence. We wish to avoid using components of size u_max,j for the purposes of stability of finding a solution and to reduce computational expense, because, in general, obtaining optimal solutions on the boundary of |u| ≤ u_max is the most expensive optimization calculation to determine u. In a practical partial minimization strategy used to solve (3.5), components u_j of iterates whose magnitudes exceed u_max,j are truncated at u_max,j. This is taken as the final answer instead of iterating toward formal convergence to this quantity.
This has produced a dramatic speedup, because needless iterations of successive minimization problems, in which the penalty on the violation u_max,j - |u_j| > 0 would be increased, are averted.

• The choice of a control is u ≈ u_nat (to within a small percentage of u_max); then we set U_frac ← 2 U_frac. Here we deem that the constraints have combined themselves in such a way that the barrier function is locally flat. Hence the system is (at least temporarily) in a state where natural motion is acceptable, so the sensitivity can be relaxed. Buffer zones can also be renormalized at this point in order to deter the system from 'bouncing off' constraint boundaries. No discontinuous interaction with the barrier is caused, because the barrier function and its derivative are continuous at the location of the system when the barrier is established.

Example 3.2 (Effect of Sensitivity Parameter) Consider the mobile robot with state (10, 50, 0, 2)ᵀ at time t = 5. Let γ = 0 and u_nat = 0. Then

    ċ_1 = 2,  ċ_2 = -2,  ċ_3 = ċ_4 = 0.

In particular,

    Δ_2 := 2 c_2 c̈_2 - ċ_2² = 0 - (-2)² = -4 < 0.

Now with U_frac = 0.5 I, we have

    ċ_2 = -v cos θ = -2,
    c̈_2 = a_frac a_max |cos θ| + σ_frac σ_max v² |sin θ| = 75,
    Δ_2 = 2(222)(75) - (-2)² > 0.

In this case, the integration would continue with the control u = u_nat. However, for a sufficiently small choice of U_frac, we would instead have

    Δ_2 = 2(222) c̈_2 - (-2)² < 0,

in which case a buffer zone would be created and an appropriate u found from solving the minimization problem (3.5). △

3.5 Summary

The various components of the algorithm are combined as follows. Recall that the basic strategy is to proceed with the default motion until a buffer zone has been established. The control function u(t) is taken to be piecewise constant on each interval [t_{n-1}, t_n). For definiteness but without loss of generality, we assume that the buffer zone is not activated. Starting from a viable state q_{n-1} at time t_{n-1}, and a sensitivity parameter U_frac (initialized by the user):

1. Predict the safety of proceeding with u_{n-1} = u_nat(t_{n-1}).
2. If weak approximate safety holds, then
   (a) Attempt integration of (1.1a) over the time interval [t_{n-1}, t_n) using the default control u_{n-1} = u_nat(t_{n-1}).
   (b) If the resulting state q_n is viable, then (step is successful)
       i. Accept u_{n-1}, q_n;
       ii. Increment n;
       iii. Break to Step 7;
     else (unexpected constraint violation)
       i. Reduce U_frac;
       ii. Continue to Step 3.
3. If the buffer zone is not activated, then activate it by setting c_safe = c(t_{n-1}, q_{n-1}).
4. Solve the barrier minimization problem (3.5) to obtain a control u*(t_{n-1}).
5. Attempt integration of (1.1a) over the time interval [t_{n-1}, t_n) using the updated control u_{n-1} = u*(t_{n-1}).
6. If q_n is viable, then
   (a) Accept u_{n-1}, q_n;
   (b) Increment n;
   (c) For each j such that u_j = u_max,j, reduce u_frac,j;
   (d) If u_{n-1} = u_nat(t_{n-1}), increase U_frac;
   (e) Continue to Step 7;
   else (failure) the system dies.
7. If u_{n-1} = u_nat(t_{n-1}), then deactivate the buffer zone.

Chapter 4

Theoretical Considerations

Trial and error is convincing, but mathematics is conclusive.
- Gil Strang

We now present some theoretical results for the algorithm, focusing on the control obtained from the barrier function in certain identifiable situations. We show explicitly that the control produced maintains the relative position of the system within the viability set from one time step to the next.
These results are given for model systems with velocity and acceleration controls in various viability regions, such as n-dimensional rectangular or spherical spotlights and shrinking regions. In this chapter, we sometimes suppress the additional dependence on q_n in the notation of (3.5b) and write simply c_i = c_i(u). We note that q_n(u) is linear in u if, for example, a forward Euler discretization is applied to (1.1a):

    q_n = q_{n-1} + Δt [f(t_{n-1}, q_{n-1}) + B(t_{n-1}, q_{n-1}) u],

or if a composite Euler discretization (cf. [109] and Chapter 5) is applied to a system such as (5.1) with f_1(t, q_2) being linear in q_2.

It is useful to recall a few definitions and standard results from convex analysis [113, 97].

Definition 4.0.1 (Convex Function) A function f : Dom(f) ⊂ ℝ^{n_cntrl} → ℝ is said to be convex if Dom(f) is convex and

    f(λu + (1-λ)v) ≤ λ f(u) + (1-λ) f(v)

for all u, v ∈ Dom(f) and λ ∈ [0, 1]. A function f is said to be strictly convex if the above inequality is strict for all u ≠ v and λ ∈ (0, 1).

Lemma 4.1 Let f : I → ℝ be differentiable, where I ⊂ Dom(f) is an interval in ℝ. If f′ is nondecreasing, then f is convex.

Lemma 4.2 (Finite Sum of Convex Functions) The sum of a finite number of convex functions is convex.

Theorem 4.1 (Uniqueness of Minimum) If f is strictly convex, then f has at most one global minimum point.

Definition 4.0.2 (Convex Program) The problem of minimizing f(u) subject to the linear equality constraints Au = b and the inequality constraints c(u) ≥ 0 is a convex program if f(u) and -c(u) are convex.

We will not need to incorporate the use of equality constraints in our analysis, because we are concerned with solutions of ODEs (cf. (1.2)) subject only to inequality constraints. However, we will make use of the following property of convex programs:

Theorem 4.2 (Local Constrained Minimizer of Convex Program) If u* is a local constrained minimizer of a convex programming problem, then it is also a global constrained minimizer.

This theorem is useful because it allows us to categorize situations where more than one control solves (3.5) as multiple, as opposed to spurious, which would leave the impression that the minima are undesirable. More details on this idea are given in the theoretical analysis of the spherical spotlight problem.

4.1 Convexity of Barrier

In this section, we show that the barrier function (3.5d) is convex if the constraints are linear functions of the control u. For simplicity of notation, let c_safe = 1 and consider for now only scalar-valued u and c. The controls u belong to a set I of the form [-u_max, u_max]. However, the barrier function φ is only defined for u ∈ U ⊂ I, where U = {u ∈ I | c(u) > 0}. We note that U is convex if c is linear. We will call U the set of admissible controls, also denoted by Dom(φ). With these simplifications, φ takes the form

    φ(u) = φ(c(u)) = c(u) - 1 - log c(u),   0 < c(u) ≤ 1,
    φ(u) = 0,   c(u) > 1.    (4.1)

We will call this less complicated version of φ the reduced barrier function.

Theorem 4.3 (Convexity of Reduced Barrier Function) Let the control u be admissible. If the constraint c(u) is linear, then the reduced barrier function φ(u) = φ(c(u)) is convex.

Proof: The proof consists of two parts. We begin by showing that φ(c) is convex. Let

    φ(c) = c - 1 - log c,   0 < c ≤ 1,
    φ(c) = 0,   c > 1.

Then

    φ′(c) = 1 - 1/c,   0 < c ≤ 1,
    φ′(c) = 0,   c > 1.

Clearly, φ′(c) is nondecreasing for all c > 0. Thus, by Lemma 4.1, φ(c) is convex.
The proof is now made complete by noting that, for all 0 ≤ λ ≤ 1 and u_1, u_2 ∈ U (so that c(λu_1 + (1-λ)u_2) > 0), we have

    φ(c(λu_1 + (1-λ)u_2)) = φ(λc(u_1) + (1-λ)c(u_2)) ≤ λφ(c(u_1)) + (1-λ)φ(c(u_2)).

Hence, φ(u) is convex. □

This result can be extended to the case of multivariate input. In the following, let u = (u_1, u_2, ..., u_{n_cntrl}).

Lemma 4.3 (Convexity of Reduced Barrier with Multivariate Input) Let the control u be admissible and the constraint c(u) be linear. Then the reduced barrier function φ(u) is convex.

Proof: The proof is a straightforward extension of Theorem 4.3. □

Theorem 4.4 (Convexity of Barrier) If the constraints c_i(u) are linear in u for i = 1, 2, ..., n_cnstr, then the barrier function (3.5d) is convex.

Proof: The proof follows immediately from Lemmas 4.2 and 4.3. □

4.2 Simple Examples

In order to better understand the behaviour of the algorithm proposed in Chapter 3, we examine what control is computed upon minimization of (3.5) for several theoretical examples. We begin with a few examples for systems with n degrees of freedom in situations involving typical programmed constraints, such as moving or shrinking rectangular 'spotlights'. We then proceed to give results for systems with more general dynamics and constraints, including that for motion within a spherical spotlight.

To simplify the exposition, we first remove the effect of the prediction strategy. Note that this is only of theoretical interest, because in order for this setting to work in practice, it would also require continuous observational data and control (and perhaps an idealized computer). Hence, its scope is limited in practice, but it is a useful exercise in aiding our understanding of the control selection.

Example 4.1 (Rectangular 'Spotlight' with Velocity Controls) Consider the control system

    ẋ = u,    (4.2a)
    u ∈ U = {u | ‖u‖_∞ ≤ u_max},    (4.2b)
    x(0) = x_0,    (4.2c)

where x, u ∈ ℝⁿ. Let the viability set K(t) be defined by (1.1c), with constraint functions c(t) ∈ ℝ^{2n},

    c(t) = ( c⁻(t) ; c⁺(t) ) = ( r - (x(t) - x_c(t)) ; r + (x(t) - x_c(t)) ),

where c⁻, c⁺, r ∈ ℝⁿ. This is the n-dimensional analogue of a moving rectangular spotlight. The objective is to choose a control u such that each component of the system x_i(t) remains within a predetermined distance r_i of a moving point x_{c,i}(t).

We know the exact solution of (4.2) on any interval [0, t] where u is constant:

    x(t) = x_0 + ut.

As we did in Chapter 3, we also regard this solution as a function of the control,

    x(t; u) = x_0 + ut,

and correspondingly the constraints as c = c(t, x(t; u)). We now explicitly compute the control obtained from solving (3.5) on such an interval and interpret it in the limit t → 0⁺. This is an idealized situation that represents the continuous limit of the observation rate of x_c(t), and also a continuous control strategy u(t).

We begin by delineating the buffer zone:

    c_safe = ( r - (x_0 - x_c(0)) ; r + (x_0 - x_c(0)) ).

The barrier function to be minimized is

    φ(u) = Σ_{i=1}^{n_cnstr} φ_i,   with   φ_i = c_i/c_safe,i - 1 - log(c_i/c_safe,i).

We note that only one of c_i⁻, c_i⁺ can be active at any given time for fixed i. For simplicity but without loss of generality, we can assume that the active constraints correspond to c⁻.
Now, from (3.6), the necessary (and, because φ is now in fact strictly convex, sufficient) condition for an optimum yields

    r - (x_0 - x_c(0)) = r - (x_0 + ut - x_c(t)).

Hence, we have

    u = (x_c(t) - x_c(0)) / t.

In the limit t → 0⁺, we can generalize this for all t to yield the control strategy

    u_i(t) = ẋ_{c,i}(t),   |ẋ_{c,i}(t)| ≤ u_max,
    u_i(t) = sgn(ẋ_{c,i}(t)) u_max,   otherwise.    (4.3)

Thus, the control obtained from the minimization (3.5) matches the velocity of the system with that of the centre of the spotlight as closely as possible. It is optimal in the following sense: if ‖ẋ_c‖_∞ ≤ u_max, the strategy always yields a globally viable solution. We note that, because this is not a differential game where the motion of the n-dimensional spotlight may be governed by an evil adversary [60, 111, 73], a condition such as ‖ẋ_c‖_∞ ≤ u_max can be enforced by the programmer, and hence represents a guide in designing the program. When |x_i(t) - x_{c,i}(t)| = r_i, it is possible to interpret this as a heavy solution, because it is the control of smallest size that will preserve viability.

Example 4.2 (Rectangular 'Spotlight' with Acceleration Controls) In a similar fashion to the previous example, consider now

    ẍ = u,    (4.4a)
    u ∈ U,    (4.4b)
    x(0) = x_0,    (4.4c)
    ẋ(0) = v_0.    (4.4d)

Let U and the viability set K(t) be as in Example 4.1. Here, the exact solution over a time interval [0, t] with constant u is

    x(t; u) = x_0 + v_0 t + ½ u t².

Again, the barrier function φ is strictly convex in this situation, so (3.6) implies that the necessary and sufficient condition for (3.5) to be minimized is

    u = 2 (x_c(t) - x_c(0) - v_0 t) / t².

Expanding x_c(t) about t = 0 yields

    u(0) = ẍ_c(0) + 2 (ẋ_c(0) - v_0)/t + O(t).    (4.5)

We note that as t → 0⁺ for v_0 ≠ ẋ_c(0), the norm of u(0) grows without bound. Hence, some care needs to be exercised in its interpretation. The interpretation of (4.5) as a continuous control strategy breaks down into two parts, because there are two situations in which to consider (4.5). In the first situation, we consider the case v_0 ≠ ẋ_c(0). For t sufficiently small, the components of the feasible control will reach one of the limits imposed by (4.2b). At this point, the components of the control will satisfy u_i = sgn(ẋ_{c,i}(0) - v_{0,i}) u_max. Hence, the control uses maximum allowable accelerations in such a way as to match the velocity of the spotlight. Now let us assume that the parameters x_0, v_0, x_c(0), ẋ_c(0), u_max are such that the velocities of the system and the spotlight can be matched after a time τ. For example, in the case n = 1, this would imply that τ satisfies

    ẋ_c(0) + ∫_0^τ ẍ_c(t) dt = v_0 + sgn(ẋ_c(0) - v_0) u_max τ.    (4.6)

Without loss of generality, we can now identify the control strategy for t ≥ τ with the second possible situation at t = 0, namely v_0 = ẋ_c(0). Now, u(t) = ẍ_c(t). Thus, if |ẍ_c(t)| ≤ u_max, the continuous control strategy can be summarized as follows:

    u(t) = sgn(ẋ_c(t) - v(t)) u_max,   0 ≤ t < τ,
    u(t) = ẍ_c(t),   t ≥ τ,    (4.7)

where τ satisfies (4.6). The control obtained from the minimization (3.5) uses a maximal acceleration to match the velocity of the system to that of the spotlight centre. After this is achieved, it matches the spotlight acceleration.
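A quick numerical check of (4.7) in one dimension is sketched below; the spotlight trajectory x_c(t) = sin t, the discrete velocity-matching criterion, and all parameter values are illustrative assumptions.

```python
import numpy as np

def accel_spotlight_demo(T=10.0, dt=1e-3, u_max=2.0, v0=1.5):
    """Forward-Euler simulation of x'' = u under strategy (4.7) for a
    spotlight centre x_c(t) = sin(t): apply +/- u_max until the system
    velocity matches the spotlight velocity, then copy the spotlight
    acceleration; returns the largest tracking error |x - x_c|."""
    x, v, matched, err = 0.0, v0, False, 0.0
    for k in range(int(T/dt)):
        t = k*dt
        xc_dot, xc_ddot = np.cos(t), -np.sin(t)
        if not matched and abs(xc_dot - v) <= u_max*dt:
            matched = True                       # t is now past tau
        u = xc_ddot if matched else np.sign(xc_dot - v)*u_max
        x, v = x + dt*v, v + dt*u                # one Euler step
        err = max(err, abs(x - np.sin(t + dt)))
    return err   # stays bounded: the offset accrued before tau persists

print(accel_spotlight_demo())
```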
We now show that, for every finite interval of length t on which the spotlight acceleration ẍ_c(t) is constant and satisfies ‖ẍ_c(t)‖_∞ ≤ u_max, the relative position of the system within the spotlight remains the same. The change in the position of the centre of the spotlight in time t is simply

    x_c(t) - x_c(0) = ẋ_c(0) t + ½ ẍ_c(0) t².

Using (4.5), the corresponding change in the robot position is

    x(t) - x_0 = v_0 t + ½ (2(ẋ_c(0) - v_0)/t + ẍ_c(0)) t² = ẋ_c(0) t + ½ ẍ_c(0) t².

In a practical setting where the sampling time cannot be made arbitrarily small, this means that the relative position of the system within the spotlight is preserved from mesh point to mesh point while the velocity matching takes place. Thus, we note that, as is common with numerical optimization codes that solve discretizations of continuous problems subject to state-dependent inequality constraints, viability can only be guaranteed at grid points. The possibility of infeasibility exists in between. However, this can generally be mitigated by a suitable mesh refinement.

Similar results can be proved for the damped system

    ẍ = u - γẋ,

where the control to be applied on the subinterval [0, t] is obtained as

    u = (x_c(t) - x_c(0) - (v_0/γ)(1 - e^{-γt})) / ((1/γ)(t - (1/γ)(1 - e^{-γt}))).

Expanding in powers of γt, we obtain

    u = ẍ_c(0) + 2(ẋ_c(0) - v_0)/t + γv_0 + O(t)

as γt → 0⁺, and we can see explicitly how the damping is accounted for when compared with the undamped case (4.5).

As a particularly important example, we now consider the problem of the n-dimensional rectangular spotlight that is moving and contracting. The ability to write a program by constructing regions like this one is a very useful aspect of the programmed-constraint environment. The ability to compute a control that produces a viable solution is a very useful aspect of a DAI solver.

Example 4.3 (Shrinking Rectangular 'Spotlight') Consider now the control system (4.2) subject to the constraints

    c(t) = ( r - g(t) - (x(t) - x_c(t)) ; r - g(t) + (x(t) - x_c(t)) ),

where the function g(t) ∈ ℝⁿ determines the shrinking properties of the spotlight. For clarity, we assume that g(0) = 0 and g′(t) ≥ 0. Typically, the program will end after some time T with g(T) = r - ε, where ε represents a vector of desired final tolerances for each component of the system from the spotlight centre. In this case, we note that all the constraints will be active after a finite time. The necessary conditions (3.6) now imply that the control satisfies

    (r_i - g_i(t) + (x_{c,i}(t) - x_i(t; u))) / (r_i - g_i(t) - (x_{c,i}(t) - x_i(t; u))) = (r_i + (x_{c,i}(0) - x_{0,i})) / (r_i - (x_{c,i}(0) - x_{0,i})).

Hence, the control is chosen so that the overall ratios of the constraint values are preserved by the end of the integration step.

Although these models may seem simple, they have been used in more sophisticated applications. For example, the control system described in (4.7) has been used in [54] to describe the open-chain manipulator dynamics of a SIGMA robot (a simple two-degree-of-freedom Cartesian robot made by Olivetti) for control in a time-optimal contour-following task. In addition, the forms of some models presented here with velocity controls are special cases of the canonical form of nonholonomic control systems [81, 54, 37]:

    q̇ = Σ_{i=1}^{n_cntrl} g_i(q) u_i.

We note also that the right-hand side of this equation is linear in the control variables u.
Hence, the hypotheses of the convexity theorems in Section 4.1 are satisfied for constraints c(t, q) that are linear functions of the state.

Thus far, we have dealt with examples where the constraints c were linear functions of the control u. In this case, we often made use of the convexity, and sometimes strict convexity, properties of the barrier to find a solution u. Although the algorithm is based on discrete observations, we have given interpretations of corresponding continuous control strategies in the limit of infinitesimal sampling intervals. Our final test case for this section involves an n-dimensional 'spherical' spotlight. In this case, the control defined will not be unique. However, we will see that the minima are not spurious, but rather multiple, because each of them would yield an equally acceptable solution. This corresponds to the nonuniqueness property of the solution. We will see that in practice the particular choice of solution can be influenced by the default control u_nat(t) specified by the user.

Example 4.4 (Spherical Spotlight) Consider the control system (4.2) required to move within an n-dimensional spherical spotlight with centre x_c(t) and radius r, the two-dimensional version of which is described in (1.6). As usual, we begin by defining

    c_safe = r² - Σ_{i=1}^{n} (x_{0,i} - x_{c,i}(0))²,

and we write

    c(t; u) = r² - Σ_{i=1}^{n} (x_i(t; u) - x_{c,i}(t))².

The necessary conditions (3.6) for a minimum of (3.5d) lead to n nonlinear equations for the components of u:

    -2t (1/c_safe - 1/c(t; u)) (u_i t + x_{0,i} - x_{c,i}(t)) = 0,   i = 1, 2, ..., n.

The solution that yields a minimum of (3.5) is where u satisfies c(t; u) = c_safe, that is,

    Σ_{i=1}^{n} (u_i t + x_{0,i} - x_{c,i}(t))² = Σ_{i=1}^{n} (x_{0,i} - x_{c,i}(0))².    (4.8)

This can be rewritten as

    Σ_{i=1}^{n} (x_i(t; u) - x_{c,i}(t))² = Σ_{i=1}^{n} (x_{0,i} - x_{c,i}(0))²,    (4.9)

and hence we see again that the control attempts to preserve the relative distance from the centre of the spotlight from one time step to the next.

To obtain some geometric intuition, consider the case n = 2. Let the (circular) spotlight have unit radius and initial position (x_{c,1}(0), x_{c,2}(0)) = (0.9, 0), and assume its position at the next sampling time Δt is (x_{c,1}(Δt), x_{c,2}(Δt)) = (1, 0). Let the initial position of the system be (x_1(0), x_2(0)) = (0, 0) and let u_max = 1. Figure 4.1 gives a surface plot of the barrier (3.5d) to be minimized in this case. From this plot, we can see how the set of minimizers satisfies a relation such as equation (4.9). The particular selection made by a numerical implementation of the controller will depend on the initial guess given to the minimization routine. This is chosen to be the current value of u_nat(t). Hence, in the case of possibly multiple extrema, the control produced will depend on the programmer's selection of u_nat(t). All solutions produce acceptable behaviour. However, it may not be desirable for solutions to 'jump' between points inside the spotlight corresponding to different values of the control satisfying (4.8). Hence, the use of u_nat = 0 will tend to yield 'heavy-looking' solutions that mimic an active-set solution.

4.3 Interpretation of Local Control

When ċ and c̈ are reliable predictors of the constraint behaviour over the medium to long term, our overall strategy allows the system to 'drift' within the viability set until such time that viability is at risk.
The advantage of such a strategy is mainly computational; however, it shares the concept of an inertial principle from viability theory [13, 12]. It is also in agreement with our philosophy that all points in the interior of K(t) are equally acceptable.

[Figure 4.1: Barrier function φ(u_1, u_2) for the two-dimensional circular spotlight.]

The characteristic of the computed control is that it attempts to maintain the relative distances to the constraint boundaries at the beginning and at the end of a control step. The result of this behaviour is especially apparent in Example 4.3. In the case of nonlinear constraints, the control may not be uniquely defined. This nonuniqueness is a notable drawback with other potential-function methods, due to their stronger specification of the control. In such cases, the extra possibilities were spurious because they did not ultimately lead to the desired behaviour. In our case, the minima are better referred to as multiple, because they are indeed all equally acceptable a priori. The particular solution chosen will then depend on the initial guess provided to the minimization routine. Because in practice we take this to be the value of u_nat(t_{n-1}), we see that the programmer's choice of the default control is a way in which the robot behaviour can be influenced.

In some situations, there can be the illusion of an active-set solution, where the system appears to slide along the boundary of the viability region. Figure 4.2 illustrates this effect in the case of the motion of a mobile robot in a spotlight. The position of the centroid of the car within the spotlight is depicted as a circle for a few successive time frames. These are in fact some results from the simulations presented in Chapter 6.

[Figure 4.2: Pseudo-active-set behaviour of a nonholonomic system: relative (x, y) position of the mobile robot in the spotlight at consecutive time steps.]

Our final comment on the characterization of the control selected by our algorithm deals with its relation to the barrier trajectory (or central path) of interior-point methods applied to dynamic programming problems. Let us quickly recall a few relevant concepts.

Consider the problem of minimizing a real-valued function f(x) such that c_i(x) ≥ 0 for i = 1, 2, ..., n_cnstr. The generic barrier algorithm solves a sequence of problems

    min_x  f(x) - μ Σ_{i=1}^{n_cnstr} log c_i(x)

(where it is implicitly assumed that c_i(x) > 0 for i = 1, 2, ..., n_cnstr), for a decreasing sequence {μ_k} → 0⁺. Under suitable conditions, it can be shown that the sequence x(μ) of barrier minimizers not only converges to the optimum x*, but also defines a smooth path [41] called the barrier trajectory, or the central path as it is known in the case of linear programming. The situation is illustrated in Figure 4.3.

[Figure 4.3: The barrier trajectory, x_μ.]

We note that following such a path is often in conflict with moving toward the geometrical or analytic centre defined by μ_k → ∞. Indeed, in the case of linear programming, the optimum is guaranteed to be on the constraint boundary. We note that, in their haste to converge to the optimum x*, practical algorithms do not completely solve the unconstrained minimization problem for a given μ_k. Rather, they terminate that part of the solution process early and reduce the barrier parameter before continuing. This leads to the appropriate name of path-following methods [124, 117]. The motivation is that there is nothing special to be
gained from following the barrier trajectory exactly. We share this philosophy from the computational point of view, because we want an efficient method. Also, our strategy is to not specify the control too strongly. Solving the barrier problem for one 'smallish' value of /i also corresponds to the shortcut method of Fletcher [44] by which an approximate solution to a dynamic programming problem is solved for a single value of the parameter appearing in an augmented penalty term. In this way, it can be interpreted as a shortcut solution towards the minimization of  where the sum is taken over all i such that c,- < c f ; and subject to c(x) > 0. sa  e)  The presence of the logarithmic barrier term makes our solution procedure a path-following method, but in a different sense than is usual. Usually, iterates of a path-following method move towards the optimum in the direction of the barrier trajectory. However, the feasibility set for such problems is static. Because we consider a more general case of a viability set that varies in time, it is necessary to generalize the concept of barrier trajectory to a barrier graph, thus including time. 71  The barrier graph can be defined as Graph(a; ) = {(t, x^) £ 5? X W \ x^ is the barrier trajectory at time t}. 1  M  Our method then follows the path of .a particular point on each barrier trajectory (namely the point with fik = 1) through time. We feel that this is a natural strategy for a controller designed to solve a DAI from robot programming: It is a compromise between not getting too close to the boundary while not expending too much effort to move toward the (nonstationary) analytical center.  72  Chapter 5  Delay-Free  Discretizations  If you write some software which is half-way useful, sooner or later someone will use it on discontinuities. - A . R . Curtis T h e state equations of the D A I problem f o r m a differential system (1.1a). A discretization of (1.1a) is required for two purposes.  F i r s t , it allows for the  possibility of numerical simulation. Second, numerical integration can be used for off-line verification or on-line, super-real-time simulation.  5.1  Motivation  Consider a differential system that is separable as follows  ?i  =  ./i(*,?2)  ( 5  9.2 = f2(t,q ,q ) + B (t,q ,q )u. 1  2  2  1  2  l a  )  (5.1b)  There is intuitive appeal i n choosing a discretization of (5.1) that is also delay free. There is a discretization delay of the influence of the choice of control  73  function u propagated between stages of an explicit Runge-Kutta method applied in the standard way. In general, if the differential equation for a state variable x must be differentiated (p — 1) times before it contains a control variable, then there is a (p — l)-stage delay in the influence of a control on x. Correspondingly, constraints c(x) are said to have index p. As an illustration, consider a standard application of the forward Euler scheme to (5.1): Ol,n  =  <?l,n-l + A i / i ( / - l , ?2,n-l)  ?2,n  =  <h,n-\ + A < ( / 2 ( i n _ l , g i , „ _ l , g 2 , n - l ) + - ^ ( ^ - l , <7l,n-l, 92,n-l)) n-1  n  M  Thus, the discretization renders q\ independent of u until after the first stage (which, in this extreme case, also corresponds to an entire time step). Moreover, this effect will occur every time there is a discontinuous jump in the control. In order to have a time-continuous dependence for all components of the state on the control, the composite Euler scheme may be employed [109, 49]. 
This scheme uses forward Euler on the differential equation for q and backward Euler 2  on the differential equation for q\: 92,n  =  92,n-l + A2 (/2(*n-l)9l,n-l} </2,n-l) + -#2(^-1, 9 l , n - l , <Z2,rc-l)) n-l  9l,n  =  9l,n-l + A i / i ( i _ i ,  u  n  92,71)  It is easy to see that this modification now makes q\ dependent on u with no numerical delay. We note that delays between issuance and execution of commands are common in practice. However, the choice of a delay-free discretization does not compound this practical issue with an artificial component. The system (5.1) can be viewed as an instance of a partitioned system of ordinary differential equations. The composite Euler scheme is then a simple dis74  cretization scheme for such a system. We now generalize such composite RungeKutta schemes to higher order, in the broader context of implicit-explicit (IMEX) schemes [7] applied to more general systems than (5.1).  5.2  Implicit-Explicit  Runge-Kutta Methods  When a set of differential equations involves terms of different types, it is a natural idea to employ different discretizations for them. Such schemes are often well suited for the time integration of convection-diffusion (or hyperbolic-parabolic) equations, such as u = uu + vAu, t  x  v > 0.  (5-2)  Discretization of the spatial derivatives yields a very large ODE system in time, u = f(u)+g(u),  (5.3)  where, in this case, / corresponds to the discretization of the convection (hyperbolic) term, uu , and g corresponds to the discretization of the diffusion (parabolic) term, x  vAu. Typically, / is highly nonlinear, but nonstiff, whereas g is linear, but stiff. Hence, a fully implicit method would be complicated by questions of asymmetry and indefiniteness in the Jacobian of / , and a fully explicit treatment would suffer from a severe time-step restriction because of the stiffness of g. However, / can usually be adequately handled by an explicit scheme and the implicit system for g is often linear, symmetric, positive definite, and sparse. An IMEX scheme consists of applying an implicit discretization for g and an explicit one for / . Linear multistep IMEX schemes have been used by many researchers, especially in conjunction with spectral methods [65, 28]. Schemes of this type have been 75  studied as far back as the late 1970's [118, 36]. More recently, instances of these methods have been successfully applied to the incompressible Navier-Stokes equations [63], and in environmental modelling studies [62]. A systematic, comparative study for PDEs of convection-diffusion type was carried out in [7], and a corresponding study for reaction-diffusion problems arising in morphology is reported in [100]. An early example of an IMEX RK scheme appears in the context of a partitioned R K method [48]. IMEX RK schemes have also appeared in very specialized cases as explicit symplectic Runge-Kutta methods for separable Hamiltonian systems [98, 49]. The use of strongly damping schemes is required, for example, to reduce aliasing in poorly resolved computations. Schemes that do not possess such attenuation properties can produce qualitatively incorrect solutions, even upon refinement [100]. For diffusion-dominated problems, schemes utilizing Backward Differentiation Formulae (BDF) for g have optimal damping properties. The corresponding IMEX scheme is a semi-explicit BDF (SBDF) [118, 36, 63]. However, when diffusion is not dominant, the choice of method is less clear. 
Ideally, one would like a dissipative scheme for the hyperbolic term, so that the resulting IMEX scheme would have good stability and smoothing properties, independent of g(u). Yet, it is well known that this necessitates looking for schemes with order (and hence, backward steps) of at least three [7]. The main difficulty plaguing multistep IMEX schemes is the possibility that relatively small time steps become necessary due to stability restrictions (e.g. [116]). Such restrictions become worse when working with higher-order multistep schemes, as demonstrated in [7]. On the other hand, the stability region of explicit Runge-Kutta schemes actually increases when three- and four-stage schemes  76  are contemplated. This is another natural motivation for us to be led to investigate the development and performance of higher-order IMEX Runge-Kutta schemes. When the system (5.3) arises from a PDE in more than one spatial variable, the simplicity and efficiency of solving the algebraic equations corresponding to the implicit part of the discretization at each time step are of paramount importance. In [7], it is shown that, under appropriate circumstances, only slightly more than one multigrid W-cycle per time-step is needed for certain IMEX schemes. The goal here would be for such a performance per Runge-Kutta stage. Hence, we concentrate on diagonally implicit Runge-Kutta (DIRK) schemes for g(u), paying particular attention to their attenuation properties. Because the effective order of general DIRK schemes drops to one for very stiff ODEs [50], we do not expect the RungeKutta schemes to be competitive with SBDF in the limit where g(u) is heavily dominant and very stiff. However, the Runge-Kutta schemes developed here are shown to have excellent properties for other parameter ranges [10]. It is well known that, despite its recent rise in popularity, the use of RungeKutta schemes for the integration of time-dependent PDEs has some drawbacks. One problem is a loss of accuracy when time-dependent inflow boundary conditions are prescribed. In [29], it is shown that the conventional (and intuitive) method of imposing inflow boundary conditions at intermediate stages reduces the accuracy to first order in space near the inflow boundary, and to second order overall, independent of the order of accuracy of the spatial-difference operator. A remedy, called the simultaneous approximation term procedure, is proposed in [1]. Note also that, in the (usual) case where the accuracy of the solution at the end of the step is higher than at internal stages, the accuracy to which these internal stages must be calculated (say by an iterative method, such as multigrid) cannot be lowered.  77  The system (5.3) can be cast as a partitioned system, v =  f(v, w)  (5.4a) (5.4b)  w  with u = v + w, f = f(v + w) and g — g(v + w). The IMEX Runge-Kutta schemes derived below can thus be seen as partitioned Runge-Kutta schemes. Their orders are discussed in [49], §11.15. They can also be seen as a particular class of splitting methods. We now develop some members of the new IMEX Runge-Kutta schemes. The schemes divide naturally into two classes: those whose last internal stage is identified with the solution at the end of the next time step, and those where an additional quadrature is used at the end of the step. The first class is particularly good for stiff problems, but it is the second class that seems to yield some of the more promising variants for a wide range of parameters. 
In particular, we note the performance of the scheme identified as (3,4,3). As is described below, the labelling denotes three internal stages for the implicit formula, four (effective) stages for the explicit formula, and a combined accuracy of order three.  5.3  Construction of I M E X R K Methods  We now describe the construction of some IMEX Runge-Kutta schemes for equation (5.3), beginning with some notation. For g, we consider an implicit s-stage DIRK scheme [50] with coefficients A £ 1Z , c,b £ 1Z , in the usual Butcher tableau, SXS  S  c  A b  T  78  The Butcher tableau is a compact way of characterizing a Runge-Kutta method. When applied to y = f(t,y), the nth integration step from the point (t -i,y -i) n  n  with stepsize At is written as  Ki = f (^t + CiAt,Y^aijKj^ s  Vn = Vn-1 + &tYl J Jb  K  Let a = s + 1. For / , we consider an (s + l)-stage explicit scheme with the and coefficients A G TZ ,be  abscissae c =  axa  V  c  Vf. To cast the DIRK scheme  /  as an ( 5 + l)-stage sc  as well, we can pad the  obtaining the tableau  0  0  ...  0  Cl  0 an  0  ...  0  C2  0  «22  ...  0  C  0  « 2  0  &2  0  s  0  «21  s  &SS  S  ...  6  S  Referring to the coefficients of this padded tableau a s i 6 1Z , b G 7Z , c G Vf, we aXa  a  see that c = c. This property simplifies significantly the form of the order conditions on the method's coefficients due to coupling between the individual schemes. An algorithm to perform one step from t -\ n  to t = n  + k of the IMEX  scheme is given as follows: 1. Set K\ = /On-l)79  (5.5a)  2. For i = 1,..., s do: • Solve for K{. Ki  =  (5.5b)  5(«(t))>  where i  =  i  w _i + k ] P i,i j j=i a  K  n  +k  i=i  "-i+ijKj-  (5.5c)  • Evaluate =  /(«(,•)).  (5.5d)  3. Finally, evaluate w  =  n  +k  +k i=i  bjKj.  (5.5e)  j=i  We note that in Step 1, a term corresponding to Ko is ignored (because it is irrelevant computationally). However, it lends further support to the choice of implicit-explicit (and not vice versa) as the name for these schemes. We now consider two special cases, leading to two sub-families of methods, which satisfy one of the following conditions: 1.6 = 6 (and in particular b\ = 0). Thus, the quadrature step (5.5e) reduces to u  n  = u -i + k^ n  bj(Kj + Kj+i)-  (5.6)  3=1 •  2. b \ — 0. Hence, K +i need not be evaluated. Furthermore, if s+  s  bj = a j, s  bj = a i j , j = 1,..., s s +  80  (5.7)  (implying that the implicit scheme is stiffly accurate and c = 1), then substis  tuting (5.5e) into the expression for u^, we see that U  n  =  (5.8)  This is useful for very stiff problems. We note also that the explicit scheme can now be generated from a general explicit s-stage scheme based on the abscissae c \ c  s  _ i ,  0  0  Cl  0  0  • •• 0  0  0  • •• 0  0  • •• 0  C2  0-31  032  Cs-1  a si  dS2 '  h  as3  •  6  •  h  3  •• 0  In (5.5b), there are s nonlinear systems to solve for K{ involving g alone. The operators to be inverted in a Newton iteration are  where g = | £ . Assuming that the Jacobian g is evaluated only once per step (at u  u  i _ i ) , this operator is the same for each i if a,-,- is independent of i. In this case, n  the implicit schemes are known as singly diagonally implicit Runge-Kutta (SDIRK) schemes. However, if an iterative method like one-cycle multigrid is used for the implicit scheme, then perhaps it is less critical for the overall efficiency to insist on SDIRK, and a more general DIRK scheme can be contemplated. See [50], §IV.6, for stiffly accurate SDIRK schemes. 
In our search for particularly accurate and stable schemes, we have allowed the possibility for the diagonal elements of A to differ 81  from one another. Despite this, the schemes we present are all SDIRK, because there were sufficient excess degrees of freedom in the solution process to allow for this slightly more efficient choice. Relating back to equations (5.1), consider the case of (5.3) where u —  We apply the implicit scheme to advance q and the explicit one to advance p. Observe that if j  2  is influenced by a control function,  for instance, then at each stage i, this influence is propagated to Ki from K i . As mentioned, this does not happen through the standard application of an explicit scheme ([109]). Moreover, if / i is independent of q then  (5.9) and the scheme becomes explicit! We now proceed to construct some instances of our IMEX RK family of schemes. We will use the triplet (s,a,p) to identify a scheme (3), where s is the number of stages of the implicit scheme, a is the effective number of explicit scheme stages (so a = 5 + 1 for Case 1, i.e. (5.6), and a = s for Case 2, i.e. (5.7)-(5.8) above), and p is the combined order of the scheme.  5.3.1  Composite Euler (1,1,1)  The well-known pair of backward and forward Euler schemes  1 1  0 0  1  1  82  can be padded to read 0 0 0  0 0 0  1 0 1  1 1 0  0 1  1 0  This yields the simplest IMEX RK method:  1  + Kf(. n-l)  +  u  g(Un))-  Note that this scheme satisfies (5.7).  5.3.2  Forward-Backward Euler (1,2,1)  The above form of Euler's method does not satisfy b = b. The variant that does is 0 0 0  0 0 0  1 0 1  1 1 0  0 i.e.  u  n  = u _ i + k(g(u^)) n  +  0 1  1  /(w(i))) where  =  u _i + kg(u^) n  +  A?/(u _i). This n  scheme involves one more evaluation of / per step than the (1,1,1) scheme.  5.3.3  Implicit-Explicit Midpoint (1,2,2)  The Euler pairs are first-order accurate and it is well documented that the (1,1,1) scheme exhibits poor (and misleading!) performance when g = 0 (cf. [7, 10, 99, 100]). The following pair of implicit-explicit schemes 0 0  0  0  0 0  0  1  1  1  2  2  2  0  0  1  0  1  1 2  1  a n d the first i n the family of I M E X linear multistep methods [7].  83  correspond to applying explicit midpoint for / and implicit midpoint for g. It can be shown to be second-order accurate because the two schemes from which it is composed are each second-order accurate, and c = c [49]. The performance of this scheme is usually comparable to the unreasonably popular CNAB considered in [7], and has better symmetry properties. Applying the scheme (1,2,2) to the separable Hamiltonian system 9 = • 'P(P)> P = H  we define  We note that this scheme is explicit, and it can also be shown to be symplectic. It is identical to the leapfrog/Verlet scheme [107] Pn  5.3.4  A  third-order  =  Pn-1  -  kHq  (Qn-1  combination  +  77#p(Pn-l))  (2,3,3)  To analyze the damping properties of a (one-step) numerical scheme, it is convenient to consider the test equation u = au. We let z = ak and write one step of the scheme as u = R(z)u -i, n  n  where R(z) is called the stability function. For attenuation of the numerical solution, we must have |i?(-oo)| < 1.  84  It is shown in [50] that the stability function for a DIRK scheme takes the form P{z)  R{z)  (5.10)  where P(z) is a polynomial of degree at most s. 2  Solution of the order equations yields that there is a one-parameter family of solutions for a two-stage DIRK to be third-order accurate. We select the free parameter to minimize \R(—oo)|. 
The two-stage, third-order DIRK scheme with the best damping properties turns out to be the two-stage, third-order SDIRK scheme  wi th 7 =  7  7  0  1-7  I-27  7  1/2  1/2  and R( — 00) = 1 — y/3 ~ —0.7321. This is a marked improvement  over the midpoint scheme (1,2,2), which has no attenuation at the stiffness limit z —*• - c o . The corresponding third-order explicit Runge-Kutta scheme (ERK) is 0  0  0  0  7  7  0  0  1-7  7-1  2(1-7)  0  0  1/2  1/2  The resulting IMEX combination is third-order accurate, has some dissipation near the imaginary axis (like all three-stage third-order E R K methods) and has some attenuation of the stability function at z = —00. 2  Precise formulas for P(z) are given in [50], p.49.  85  5.3.5  L-stable, two-stage, second-order D I R K (2,3,2)  It is possible that the attenuation of the above scheme may not be sufficient for some problems. A two-stage, second-order DIRK scheme that is stiffly accurate is ([50], p. 106) 7  0  7  1 1-  7  1where 7 =  2  7  7 7  £ . The corresponding three-stage, second-order ERK is of the form 2  0  0  0  0  7  7  0  0  1 6 0  1- 6 0 1-7  7  The choice of 6 is made to get a dissipative stability region. We match the terms of the corresponding stability functions to obtain the stability region of a threestage, third-order explicit RK scheme, with 6 = —2\/2/3. The resulting IMEX combination is second-order accurate. 5.3.6  L-stable, two-stage, second-order D I R K (2,2,2)  We use the same form for schemes as in the previous IMEX scheme, retaining the implicit scheme. Thus, 7 is that as defined for the (2,3,2) scheme, but 6 is not yet specified. The value for 6 is determined from the requirement that (5.7) hold instead  86  of (5.6), i.e., b\ = 6, b = 1 — 8, b — 0. T h i s gives a second-order scheme 2  3  7  7  0  1  1- 7  7  1- 7  7  0  0  0  0  7  7  0  0  1  <5 1 -  s  <5 1 - 8  0 0  with 6 = l - ± .  5.3.7  L-stable, three-stage, third-order  DIRK  (3,4,3)  W e would now like to exploit the larger dissipativity region, which four-stage, fourthorder E R K schemes have near the imaginary axis . A three-stage, third-order D I R K scheme that is stiffly accurate is ([50], p. 106)  7  7  0  0  (1+7)  (1-7)  7  0  1  6l(7)  62(7)  7  6l(7)  62(7)  7  2  2  where 7 is the middle root of 6a; - 18x + 9x - 1 = 0, 61(7) = - § 7 + 4 3  62(7) = I7  2  2  2  7  - \ and  — 57 + I . Numerically, this evaluates to .4358665215  .4358665215  0  0  .7179332608  .2820667392  .4358665215  0  1  1.208496649  -.644363171  .4358665215  1.208496649  -.644363171  .4358665215  T h e corresponding four-stage, third-order E R K , constructed such that it has the same (desirable) stability region as all four-stage, fourth-order E R K schemes, is  87  0  0  0  0  0  7  7  0  0  0  2  a3l(7)  032(7)  0  0  1  041(7)  «42  043  0  0  6l(7)  hirt)  (1+7)  7  The third-order conditions indicate that this is a two-parameter family of schemes. Taking our degrees of freedom to be  the remaining expressions  042,^43,  are (  9  3 ,\  J  • «si=  f  9  3  «32=  2  a 4 2 +/  l l  21  U"  \ ~  /  Y  11  15  T  7 +  21  J^+i-T +Y a l = 1 4  —  #42  —  7  a  7  15  2  \ „  7  „  9 ,  J « - . 
2 + 137-27 , a  " ^ 2  \ „  , 2 5  J 43 + 4 a  y  7  9 + -  o 7  43-  Selecting particular values for 642(7) and  ^43(7)  so as to match the terms of  the stability function of fourth-order E R K schemes, we obtain the scheme 0  0  0  0  0  .4358665215  .4358665215  0  0  0  .7179332608  .3212788860  .3966543747  0  0  1  -.105858296  .5529291479  .5529291479  0  0  1,208496649 -.644363171 .4358665215  The resulting I M E X combination is third-order accurate.  88  /  ,  5.3.8  A four-stage, third-order combination  (4,4,3)  No pair consisting of a three-stage, L-stable DIRK and a four-stage E R K satisfying (5.7) with a combined third-order accuracy exists. The proof of this runs as follows: Solution of the order conditions at order three implies that there is a one-parameter family of three-stage, third-order DIRKs and four-stage, third-order ERKs that satisfy (5.7). Without loss of generality, let the free parameter be c\. A troublesome aspect of this family of solutions is that, despite the degree of freedom, 033 = 0 is a necessary condition for the solution of the order equations. Hence, the matrix A is singular, and the guarantee of L-stability for the DIRK is lost [50]. The only possible resolution then would be to use c\ to make the two leading coefficients of P(z) in (5.10) vanish, so that  lim^-co  R(z) = 0. This was  found to be impossible because the coefficient of z in P(z) does not vanish for any 2  real value of c\. Accordingly, in order to obtain a third-order formula satisfying (5.7), we are forced to go to a four-stage, third-order, L-stable DIRK, coupled with a four-stage ERK. After having satisfied the order conditions and imposed L-stability, we are left with a few families of schemes. Experimentation with the test equation (5.11) of the next section, we have selected a scheme that has rational coefficients that are not too large, and a diagonal entry that is close to that of the four-stage,  fourth-order,  L-stable DIRK. In  particular, we have chosen C\ = \, accompanied by the choices C3 = \ , 64 = \, and 643 = \. The resulting scheme is  89  1 2  1 2  0  0 0  2 3  1 6  1 2  0 0  1 2  1 2  0  1 2  1 2  1  5.4  3 2  3 2  1 2  1 2  3 2  3 2  1 2  1 2  0  0  0  0  0  0  1 2  1 2  0  0  0  0  2 3  11 18  1 18  0  0  0  1 2  5 6  5 6  1 2  0  0  1  1 4  7 4  3 4  7 4  0  1 4  7 4  3 4  7 4  0  Advantages of I M E X R K Methods  As is shown in [7], using a centered discretization scheme for the spatial derivatives of. advection-diffusion PDEs, a von Neumann analysis reveals that a simple yet appropriate test equation is the scalar ODE (5.3) with / = idu, g = au,  (5.11)  where a,j3 are real constants, usually a < 0, f3 > 0, and i = y/— 1. For a given time step k, we can define x = ka,  y = k/3, z — x + iy,  and write an IMEX step (3) using (5.6) as s  u  n  =  tt _i + z Y n  b  J (i) u  i=i  90  = R{ ) n-l, Z  U  (5.12)  where + Y^i( ij  (1 + iya i)u -i i+h  +  xa  n  iyai+i,j+i) {j) u  1 — Xdi  i = l,...,s.  (5.13)  For schemes where a = s, i.e. where (5.7) holds, an expression like (5.12) need not be used because u = u^ y n  s  Let us consider the two Euler schemes first. For the usual pair (1,1,1), equation (5.13) implies _ l + iy 1—  X  Thus, on the imaginary axis, \u \ < ( l + ?/ )|i£ _i|, and the scheme is unconditionally 2  n  n  unstable for any y / 0 (cf. [7]). As x —> —oo and |y| < oo, however, the scheme is unconditionally stable and u  n  0, i.e. attenuation compatible with that of the  exact solution is realized. 
For the other pair, (1,2,1), we get from (5.12)-(5.13) 1-  X  J  Now, along the imaginary axis, we have \u \  2  n  < (l-y  2  +  y )\u _ \ , 4  2  n  1  so stability is achieved provided that M<i-  This corresponds to the CFL condition. On the other hand, letting x —> - c o , we also obtain the restriction \y\ < 1 in the high-attenuation limit. Thus, we see that the variant (1,2,1) has stability characteristics that are preferable over the usual backward-forward Euler scheme in the hyperbolic limit, but worse in the parabolic limit.  91  Now, for a fixed ratio x/y = a//3, it is of interest to determine the maximum y > 0 for which the scheme is stable, that is,  < 1. This gives the stability  restriction on the step-size for a given problem. In Figure 5.1, we plot the resulting curves for the IMEX schemes presented earlier of the second type, i.e. those satisfying (5.7). Corresponding curves for the schemes satisfying (5.6) are plotted in Figure 5.2. Common to all the schemes in Figure 5.2 is that, as a —• —oo, there is a time-step restriction involving /3, typically k/3 < 1. In contrast, no such restriction occurs for schemes satisfying (5.7), and hence (5.8), in Figure 5.1. The behavior of these latter schemes is qualitatively similar in the high-attenuation limit to that of the SBDF schemes [7]. On the other hand, the scheme (3,4,3) in Figure 5.2 suggests a potential advantage, in terms of allowable time steps, over a large range of parameter values. Some experimentation with these schemes has been performed in [10] on convection-diffusion problems in one and two dimensions for a wide range of mesh Reynolds numbers. These include a one-dimensional, variable-coefficient problem, the Burgers equation, and a two-dimensional problem solved via time-dependent multigrid. As expected from general properties of standard (one-step) Runge-Kutta versus linear multistep schemes, results from [10] show that the Runge-Kutta-based IMEX schemes have better stability regions than the best IMEX multistep schemes known. They are also found to average just over one fine-grid iteration per implicit stage of one-cycle multigrid. This is also a marked improvement over the scheme suggested in [48].  92  Time-Step Restrictions  1  1  1  \ k(5  '(1,1,1) (2,2,2)  \  (4,4,3)  y \  -  0 -10  \ \  1  1 2  -10  -10°  1  •  —  L  -10-  1  -1(T  2  a//?  Figure 5.1: Time-step stability restrictions for various values of a and /?, for IMEX schemes satisfying (5.7). Note that all these schemes perform well in the highattenuation limit.  93  Time-Step Restrictions 12 i  -io  r—  2  -10  1  -io°  1  - i o  _  1  - I O  -  2  Figure 5.2: Time-step stability restrictions for various values of a and /?, for IMEX schemes satisfying (5.6). Note that some of these schemes, especially (3,4,3), perform well when -a is not too large in magnitude, but they all have time-step restrictions in the high-attenuation limit.  94  Chapter 6  Numerical Experiments In this chapter, numerical simulation is employed to demonstrate the general applicability of the solution procedure described and analyzed in Chapters 3-4, the types of solutions that are produced, and a measure of the computational efficiency. In the first section, we give some results from simulations of the mobile robot system (1.2) under the action of various programs. These include very general motions using plane waves (1.5) as drivers, as well as more refined motions using plane waves and spotlights (1.6). 
We then proceed to combine the programmed-constraint approach with optimal control, showing the complementarity of the two approaches and some of the potential advantages of a purely local approach in various situations. We conclude this chapter by examining a program for the robot arm described in Section 1.3.2.  95  6.1 6.1.1  Mobile Robot Simulation  The mobile robot example is described in Section 1.3.1. We will now examine different scenarios of varying difficulty and report on the performance and qualitative properties of the solutions. For the purposes of simulation, we make a few simplifications of the equations of motion (1.2). Let a = ^ tan(/?V0  a n  d a = na be the new controls. Thus, the  state equations of the mobile robot are . x =  vcos9  (6.1a)  y = vs'm9  (6.1b)  9 = va  (6.1c)  v = a — ~fv.  (6.Id)  This casts the equations (1.2) into the form (1.1). Such a redefinition of controls is possible since ip and a are in a one-to-one correspondence in the allowable range of ip. The bounds on the controls are \a\ <  150 cm s~  (6.2a)  \a\ <  — cm . 25  (6.2b)  2  - 1  The first bound represents the maximum allowable throttle setting and the second yields a minimum turning radius of 25 cm. The mobile robot is a nonholonomic system [81]. The nonholonomic constraints are found by setting the sideways velocity of the wheels to zero. For the rear wheels, this yields sin 9x  —  cos 9y = 0  96  and for the front wheels, we have sin(0 + /3ip)x - cos(0 + f3ip)y - L6 cos(j3?p) = 0. The identification of nonholonomic constraints is generally a nontrivial task [81]. Moreover, control of nonholonomic systems is especially challenging because they are locally uncontrollable (the linearized system is uncontrollable) and yet globally controllable [81, 122, 76]. 6.1.2  V e r y General Programs  One philosophy of weakly specifying a control is that simple tasks should be described in a simple manner and solved with correspondingly small computational effort. In this section, the robot program simply stated is to move to the end of the table with large x. The plane-wave constraint (1.5) is employed as driver, with 4> = f. The problem is complicated through the specification of different starting orientations and different times by which the motion should be completed. We also examine the qualitative differences in the trajectories produced, depending on the choice of natural control. Most of these results have appeared in [109]. For all simulations in this section, initial conditions of the system (1.2) are x(0) = 52.7, y(0) = 50, v(0) — 0, with 6>(0) and the other parameters specified in the case description or tables. We use the value 7 = | in the simulations. We utilize two possible choices for the default control, tt t(i)- The choices are na  classified according to an 'effective' damping factor 7 , to be described subsequently. The first default control corresponds to damped behaviour, w t(i) = 0. In this na  case, 7 = | . Because 7 ^ 0 , the robot slows while maintaining the same heading 0 when the system is safely inside K(t). is ?Ant(2) = a  7 (0U  The second choice for default control  This is equivalent to motion of the system in the limit of no 97  damping. Hence, this choice of default behaviour is denoted by 7 = 0. A quick examination of the bounds on allowable speed and acceleration reveals that it is always possible to make such a choice of default control. 
The use of this control means that the system continues in the same direction at constant speed when not interacting with the buffer zone. We remark that this strategy is somewhat more aggressive than the previous one in that the system is 'preferring' to maintain a higher speed throughout the simulation and hence may be more sensitive to the prediction strategy. However, it is no more expensive to compute with in principle, and in fact it may lead to extremely efficient solutions if it ultimately produces fewer interactions with the boundary of K(t). All simulations use an adaptive sensitivity parameter, starting from a userdefined [/f , expressed as a fraction of the identity matrix as Uf I. rac  Tac  Knowledge  of the existence of opposing constraints is assumed and the stepsize is At = ^ seconds, representing the actual sampling time of the Dynamite system [15]. A first-order integration scheme was chosen because of the low expected accuracy due to simplifications made in the model equations. Simulations are terminated when the trailing (lower) edge of the plane wave is one car-length from the right edge, i.e., ^final — (^max  %min  L)/t)w.  (6.3)  The tables display three different measures of the performance. The performance of the algorithm is partly measured by the quantity %u, which represents the number of time steps that invoked the logarithmic barrier subroutine to compute a control as a percentage of the total number of time steps. Thus, it measures the complexity of the control computation in a new way - as the percentage of time spent having to explicitly account for the constraints. The constraints, along with their velocities and accelerations, are sampled every ra  samp  98  integration steps.  However, explicit constraint avoidance by means of barrier minimization is only executed a fraction %u of the total integration steps. In addition, we record the quantity  %tt ax» m  which measures the percentage of non-default control steps spent at  maximum control for each control variable. Finally, typical execution time runs (in seconds) on an SGI Indy running an IP22 Processor at 100 MHz and 64 Mb of R A M are given under CPU. Of course, these times are only representative and should not be taken too literally: the times reported are an average of a small number of runs (usually about five), with the CPU load being monitored manually to ensure that other competing processes were not significant. We note that the numbers reported include simulation time and disk I/O, but are still considerably faster than the corresponding simulation time. In this way, we see that relatively non-specific motion can be specified weakly and solved in a correspondingly cheap fashion. Case 6.1 (<f> = f, 0(0) = 0, n  1 8 10 1 8 10  s a m p  = 2.)  7  "frac  %u  0 0 0  0.5 0.5 0.5 0.5 0.5 0.5  0.04 4.38 6.29 38.38 47.24 48.01  1 6 1 6 1 6  %"max  (a) 0.0 0.0 0.0 0.0 0.76 1.09  (a) 0.0 0.0 0.0 0.0 0.13 0.0  CPU 1.15 1.75 2.11 24.85 3.95 2.86  Table 6.1: Simulation results: Case 6.1 We note that this is effectively a one-dimensional problem: There is no incentive for the system to change its y-coordinate. Hence, this program is the discrete (damped) analogue of Example 4.3. The solution behaviour is depicted in Figures 6.1 and 6.2 by comparing the ^-coordinate of the car to that of the wave. 
For 99  Figure 6.1: Robot position and wave position with no damping  Trajectory of Mobile Robot  ( ^ = 1 0 , ^ = | , 7 = 1,0(0) = 0) 1  1  1  l  I  i  I /  70 x  -  robot i  0  1  i  2  i  3  i  i  wave i  i  4  5  6  7  Figure 6.2: Robot position and wave position with damping.  100  the undamped case, the robot builds speed (approaching v ), and stays in front w  of the wave. Thus, %u is very small because the robot satisfies the constraints mostly without using any non-default control. The behaviour is similar with damping turned on. As expected, the damping stabilizes the motion of the robot. The robot stays ahead of the wave but, due to the damping, remains much closer to the wave. The natural motion then allows the wave to catch up again, and the process is repeated. Hence, we observe that %u is higher than in the undamped case, and increases with the speed of the wave. We note that from (6.3), the number of integration steps required is inversely proportional to the wave speed v . Hence, the w  C P U times per integration step are comparable between runs for fixed 7 , and also reflect the increased problem difficulty of higher v .  A  w  Case 6.2 (<f> = f, 0(0) = \ , u  iieLC  = 0.01, v = 1.)  1  "samp  %u  0  2 1  9.98 80.28  1 6  w  %u (a) (a) 1.5 92.87 0.0 25.38 meLX  CPU 15.38 77.15  Table 6.2: Simulation results: Case 6.2 The initial configuration of the robot is now changed to \ . The choice of different n  s a m p  was used to enhance the different behaviours between the undamped  and damped cases. Diagrams of the robot trajectories in the undamped and damped cases are given in Figures 6.3 and 6.4, respectively. In the undamped case, the robot initially moves in the direction 9(0) until it comes quite close to the top wall (constraint C4), at which point it turns to 6 « 0 and moves parallel to the top wall. When compared to the damped case, the greatest difference is that the 'natural' 101  Trajectory of Mobile Robot  (v = !,¥>= f , 7 = O,0(O) = f) w  100  y  50 -  5  5  50  100  150  200  x  Figure 6.3: Undamped motion of robot (Case 6.2).  motion requires a more frequent updating of u (t). nat  The more frequent updates  then allow the required turn to be completed sooner. Because the car must perform a fairly sharp turn to avoid the top boundary and a  n a t  = 0, the quantity %u is expected be much higher than in the previous  (one-dimensional) case. We note that despite this, the algorithm still exhibits low CPU indicating the efficiency of the approach.  6.1.3  A  More Refined Motions  The program is now refined so that the robot is instructed to move to one corner of the table; for definiteness, that for  X  Iie<ir 3?max  and  y  near  ymin-  For this purpose, we  use a plane-wave driving constraint (1.5) with 4> — \. Results for various simulations are given in the Table 6.3, with a qualitative description to follow. Note that the difficulty level increases dramatically with decreasing (f>, because the final size of the feasible region is quite small.  102  Trajectory of Mobile Robot (v =  f , 7 = |,0(O) = f )  l,ip=  w  T  100  50  J  5  50  100  150  x  200  Figure 6.4: Damped motion of robot (Case 6.2).  Case 6.3 (</> = f , n  0(0) 0 0 7T 4 7T 4  s a m p  = 1, v  7  ^frac  w  =  1.)  
%u  %"max (a)  (a)  CPU  0 l fi 0  10~  2  58.44  0.17  97.37  84.59  10~ io-  2  100.0 34.88  0.0  15.02  0.09  98.02  81.03 47.64  1  10~  3  100.0  0.0  41.43  144.54  6  3  Table 6.3: Simulation results: Case 6.3 The second and third cases listed in Table 6.3 are depicted in Figures 6.5 and 6.6. In Figure 6.5, the robot starts at an orientation of 0 and proceeds to turn downward and moves towards the lower right corner. In Figure 6.6, the robot starts at an orientation of — ^, turns upward to straighten out, and angles to the corner. However, it must back up to adjust its final approach to the corner. This fine-tuning is a result of the exacting nature of the final size of the feasible region. Initial and final positions of the plane wave are included to help illustrate this (dotted lines in Figures 6.5 and 6.6).  A 103  Trajectory of Mobile Robot =  7 = 1,0(0) = 0)  100  y  50  5 5  50  150  100  200  x Figure 6.5: D a m p e d m o t i o n of robot (Case 6.3).  In these simulations, we note that  Wf  r a c  was decreased i n the tougher prob-  lems. D u e to the inherent local nature of the approach, the m e t h o d can fail i n certain cases, especially when several constraints are 'active' a n d at small angles t o each other, or when the viability region is narrow. However, the efficiency of the m e t h o d suggests that it should be effective a n d practical i n m a n y applications, such as for the D y n a m i t e system [15], where the viability region can usually be p r o g r a m m e d t o be 'wide'.  Case 6.4 (Robot in a Spotlight) In this example, we develop a p r o g r a m that is closer i n philosophy and appeal to t r a d i t i o n a l p r o g r a m m i n g methods. T h e object is t o move the robot t o the corner of the table for large x and y i n the neighbourhood of a specified p a t h . Hence, there is a geometric appeal to the programmer, who may have an idea of the desired p a t h but does not wish to be involved i n solving inverse dynamics problems. T h e ensuing D A I is easily specified a n d efficiently solved by the m e t h o d of C h a p t e r 3.  104  Trajectory of Mobile Robot  K = l,<^=f,7 = 0,0(0) = -f)  y  x Figure 6.6: Undamped motion of robot (Case 6.3).  We change the initial position of the robot to (x(0),?/(0)) = (5.27,50) and choose a parabolic spotlight path that passes through this point, and the points ( ^ m i n ,  2/(0)) and ( x  m a x  , 2 / m a x ) - The path is parameterized as x (t)  = v t + x(0),  (6.4a)  y (t)  = ax + bx + d,  (6.4b)  c  w  2  c  where v  w  is the x-component of the spotlight velocity and a, b, d are constants  uniquely determined from the geometric criteria previously described: a =  0.001183677  b =  -0.012156359  d =  50.031189879  From the parameterization (6.4) of the spotlight path, we see that the robot starts at rest in the centre of the spotlight, which is described by a constraint that is nonlinear in the control (cf. Example 4.4). The final time for the motion is now given by tf = ( x  m a x  —  X m i n ) / ^ ,  circle of radius r, centred at ( x  with the feasible region being reduced to a quarter m a x  ,y  m a x  ).  
105  For this example, it was possible to  v  w  r  1  ^samp  %u  '"frac  CPU  %"max  (a)  (a) 39.02  26.25  2  io-'  2  23.25  0.0  10  0 i  2  io-'  2  49.21  0.0  0.04  29.84  1  0  2  io-'  2  46.43  0.0  2.18  18.04  1  1  1  io-  2  99.99  0.0  10.42  74.37  1  0.1  1 6  0  1  10~  3  95.10  0.0  79.40  123.37*  1  0.1  1  0.5  100.0  0.0  0.0  45.19  5  5  1 6  0  2  0.5  47.01  0.0  11.09  5.36  5  5  1 6  1  0.5  99.85  0.07  40.95  35.11  1  10  1 1  6  Table 6.4: Simulation results: Spotlight Problem produce behaviours that were qualitatively different in the various cases, depending on the combination of choices for u (t), Uf , and n . nat  iac  samp  Examples of these are  shown in Figures 6.7, 6.8. These examples correspond to the last two entries in Table  6.4.  The two choices we consider are the same as in Section  6.1.2:  u (t) nat  = 0  and u (t) = 7v(t), again denoted by 7 = \ and 7 = 0, respectively, in Table 6.4. nai  The solution in Figure 6.7 looks qualitatively similar to solutions that are normally produced by an active-set method: The robot slides along the 'trailing' part of the boundary, as if being dragged along by the spotlight (cf. also Figure 4.2). On the other hand, in Figure 6.8, the robot stays very close to the centre for most of the time, right up until the final moments of the simulation, when it tapers off to the outer reaches of the feasible region. Because the speed of the spotlight is so modest, and the starting values of U{ are identical, we can attribute the difference vac  in behaviours to the different values of n  s a m p  : higher values allow the solution to  approach the boundary more readily, leading to an active-set look to the solution. The (anomalous) reading denoted by * was obtained with the adaptive sensitivity feature turned off. This simulation is included for the purposes of illustration. It  106  shows the potential efficiency gain of the adaptive strategy. However, we note that forcing the sensitivity to maintain low values may be required to obtain solutions to particularly sensitive problems.  A  40 20 5  50  100  150 200 x Figure 6.7: Solution exhibiting heavy properties.  50  100  40 20 5  150  200  x Figure 6.8: Solution not exhibiting heavy properties. The cases in which r = 0.1 are testing the limit of where the DAI formulation breaks down. We note that due to the nonholonomic nature of the problem, the precise path of the spotlight given in (6.4) cannot be followed exactly. This is an  107  illustration of a shortcoming of traditional (holonomic) path planners applied to nonholonomic systems [81]. Thus, the programmed-constraint approach can offer a sensible alternative that can also retain some geometric appeal.  6.2  Interaction with Approaches from Optimal Control  In this section, we discuss some uses of the programmed-constraint approach in conjunction with an optimal-control solver. We show some complementarity between the two approaches, in particular how the DAI solver presented in Chapter 3 is a very inexpensive starter for an optimal-control routine. We also demonstrate some advantages of a local approach over a global one in the case of optimal control; however, these advantages are generalizable to any purely global planning strategy. The optimal-control code used is OCPRSQP [101], which solves optimal-control problems in boundary-value DAEs with state- and control-dependent constraints of the form: mm. 
/*' L(x(t),y(t),u(t))dt,  (6.5a)  x,y,u '•JtQ t  subject to < x(t) ^  \  0  /  Umin  <4in <  <  /(*(*), y(t), «(<)),  (6.5b)  U(t) < M  (6.5c)  m  a  x  ,  c\x,y,u) < c ^ .  (6.5d)  The boundary-value problem is discretized by collocation on a mesh  n = {*o < h  < • •  • < <JV = */}  and the resulting finite-dimensional optimization problem is solved by a new class of numerical optimization techniques that Schulz in his thesis has termed partially re108  duced sequential quadratic programming methods [101]. These methods are intended to combine the convenient treatment of inequality constraints afforded by sequential quadratic programming and the smaller quadratic subproblems offered by reduced sequential quadratic programming. This allows the optimization problem (6.5) to be solved very efficiently [101]. For this thesis, we are not exploiting the full DAE capabilities of the code because we deal only with the ODEs (6.1). However, OCPRSQP is no less efficient when algebraic constraints are absent. In order to make the exact solution to the optimization problem more intuitive, only table constraints are imposed, where for simplicity, we redefine ^min  =  2/min  = 0, x  = 200, and  m a x  J/ x  100.  =  ma  The boundary conditions for the optimal-control programs are 4(0)  = (j^max,  0, 0)  T  ,  ?(*/) = (r% max, '?/max,0mod27r,0) . a;  T  2  In the remainder of this section, we will refer to the initial and final (x, ^-coordinates as A and B = (xf,yj), respectively. The programmed-constraint formulation shares the same initial conditions, but the terminal conditions can only be enforced to within some tolerance. However, this is of little concern when constructing an initial guess because an optimal trajectory is not necessarily 'close' to one that is produced from a D A I . The two tra1  jectories only need to be close enough to enable the nonlinear solver to converge. A DAI program is specified independent of any intertemporal objective function. However, it often makes use of additional user-defined constraints that are not present 1  T h i s depends on how the viability set is defined.  109  in the corresponding optimal-control problem; in particular, it can require that the DAI trajectory end up in a suitably small neighbourhood of the terminal conditions. Thus, the programmer can still compute a reasonably accurate initial guess. We consider the minimum-time problem mm / 9>  u  at,  Jo  subject to (6.1), (6.2), and (1.3). Because the final time tf in the above is unknown, we define a new variable r = JJ and re-write the problem as min / C(q(r), 5,« Jo  (6-6)  U(T))(IT  subject to  k = f, and equations.(6.2) and (1.3), where q = (q ,tf) T  T  (6-7) and / =  (tff ,0) . T  T  We now give a few technical details for the optimal-control simulations. A l l simulations in this section were run on an SGI Indigo 2 having 64 Mb of R A M and a clock speed of 200 MHz. Three-stage Gauss collocation is employed as the discretization scheme, on an adaptive mesh having 40 subintervals initially. The controls u are taken to be piecewise constant, generally on a coarser grid than the state variables. Convergence is deemed to have occurred based on an expected decrease, TOL, in the Powell merit function [101]. We pay particular attention to the effects of decreasing this tolerance from 1.0 X 10~ to 1.0 X 1 0 3  -5  on the numerical solution.  As a final comment, we note that in order to reduce the computational complexity, convergence is only to a locally optimal trajectory. 
The particular optimal trajectory achieved depends critically on the initial guess. As we will see, the addition of an objective function to a basic viability problem already adds significantly to  110  the computational effort. This effort would be compounded further if an attempt is made to also go to a global optimization routine, such as simulated annealing (see, for example, [95] or [58] and the references therein). We consider two numerical experiments. Each involves obstacles of the form (1.7), to be described subsequently . Both are prototypical applications for a robot 2  in a hazardous environment [81], such as a mine field. The robot must execute a given program in an efficient manner yet have the autonomy to make correct decisions based on information not available to the programmer. We demonstrate that DAIs provide a suitable environment for the program, with the solver described in Chapter 3 providing an effective control in a timely manner. Case 6.5 (One Central Obstacle)  To our previously uncluttered arena, we add a circular obstacle, c b ,i, of radius 0  s  10, and located at coordinates (100,50). In this case, the (locally) time-optimal solution is fairly intuitive. It involves one switching time and (close to) bang-bang control for the acceleration. The exact solution for the steering control is a little more complicated, but similarly based on switching times and almost bang-bang control. The problem is not reversible due to the nonzero damping factor, 7 , so the switching time is slightly larger than half of the optimal time. A consistent albeit 3  not overly thoughtful initial guess for the solution is generated by means of shooting (Figure 6.9); that is, specifying a control profile and integrating (6.1) to obtain the state variables. As can be seen, it is not very close to optimal. In Figure 6.10, we show the 'optimal' solution produced with the setting T h e author apologizes in advance for not giving the obstacles more colourful names, such as Barnacle B i l l , Yogi, or Scooby-Doo. B y 'consistent', we mean a trajectory g(t) obtained from a control u(t), in contrast to q(t), u(t) being postulated independent of each other. 2  3  Ill  0  50  100 x  150  200  Figure 6.9: Initial Guess from Shooting for One-Obstacle Problem  80 [  20 r 0 I 0  ,  ,  ,  1  50  100 150 200 x Figure 6.10: Solution for One-Obstacle Problem with TOL = 1 0  112  -3  100 80 60 y  40 20 0  0  50  100 150 200 x Figure 6.11: Solution for One-Obstacle Problem with TOL = 1 0  -6  TOL = 1.0 x 1 0 , with a CPU time of 1.53 seconds. It shows that we must -3  be careful in casually accepting a solution simply because it was produced by an optimal-control code! Upon refining the tolerance to 1.0 X 10~ , we obtain a better 6  approximation and this is displayed in Figure 6.11. However, the CPU time required is now 17.97 seconds. Let t  opt  be the final time obtained from the accurate solution  to the optimal-control problem. An optimal solution is very useful from a theoretical point of view. However, in practice, the principle of optimality [18, 17, 19] is very unforgiving. The principle of optimality states that any portion of an optimal trajectory between some intermediate point and the final point is itself optimal. Errors or uncertainties of some type are a reality in engineering as well as in numerical simulation. This often makes truly optimal solutions hard to come by in practice. 
However, we can make good use of the knowledge of an optimal solution to produce a sub-optimal solution that is close to the optimal one, even in the presence of errors. If we have an optimal solution, it is usually easy to construct a program that will mimic it using a DAI. The following is only one possible realization and 113  the description is a typical account of the programming process when using DAIs. We opt to use a spotlight constraint of the form (1.6). The path of the spotlight centre is not chosen to lie exactly on the optimal path so that we can maintain a wide viability region when it goes around the obstacle. Accordingly, we choose a spotlight path that consists of two cubics: The first cubic connects the starting point A with (x,y) = (100,65) and has slope ^ = 0 at each of these two points. The point (x,y) is chosen with an eye for an implementation with a spotlight of radius r = 8. The second cubic then connects (x, y) to the end point B, again with 4  a slope of zero at both ends. We must now give a time parameterization of the spotlight centre. We note that this is relatively easy to do because the spotlight is not physical and thus can be chosen to obey simple dynamics, such as motion under constant acceleration. We parameterized the centre as  cc(0) + hat  t < t*  2  x (t) c  =  {  x(t*) + x (t*)  Vc{t)  ~  - \dt  (6.8a)  t > t*  2  c  axxlit)  + bxxKt)  + ci_x {t)  + di  t<t*  a x (t)  + b x (t)  + c x (t)  + d,  t> t*  3  2  c  2  2  c  c  2  c  2  (6 8b)  where t* = ^ t , and the coefficients a i , fr^etc. are uniquely determined from op  the geometric conditions specified on the cubics. In this particular realization, the spotlight accelerates uniformly from rest at A, reaching its maximum speed at time t = t* and position ( x , y ) , then decelerates uniformly to point 5, where it is again at rest. The acceleration parameter a is chosen so that the spotlight centre travels from A to B in time i pt- Of course, the DAI solution is sub-optimal; however, it Q  ' M o r e quantitative reasons for this choice are given in Case 6.6  114  100 80 60 y  40 20 0  0  100 150 200 xFigure 6.12: DAI Solution for One-Obstacle Problem 50  captures the essence of the optimal trajectory in this case. A good candidate for u (t) can also be afforded by the optimal-control najt  solution. The solution produced by this program is displayed in Figure 6.12. For reference, we have included the optimal trajectory, the spotlight, an outline of the car, and a dot within each outline to denote the coordinates of the centre. We note that the constraints are specified with respect to these coordinates, and the fact that the outline of the car itself may sometimes fall outside of the spotlight is not relevant. The CPU time for the above calculation was 0.42 seconds. Thus, the programmed-constraint approach can provide a relatively painless method to program an implementable approximation to the optimal-control solution, and we have not yet taken full advantage of the local nature of the solution method.  A  Case 6.6 (Hidden Obstacle)  In order to illustrate the potential advantages of a local approach to robot control, we place a (small) circular obstacle,  c b ,2 0  s  m  such a way that it intersects  both the optimal-control solution (Figure 6.11) and the solution produced by the 115  DAI approach (Figure 6.12). A suitable choice for such an obstacle has its centre at (160, 54.5) and a radius of 3.5. 
By doing this, we wish to simulate the existence of an obstacle that is recognized during the (simulated) motion: From the vantage point A,  c bs,2  m  Y  a  Q  be obscured by  c bs,i0  The time-optimal solution is constructed  based only on the information available at the outset; it cannot take advantage of more information obtained during the execution. Thus, the solution of the optimalcontrol problem cannot be implemented as is, because the robot would hit the hidden obstacle. This is a limitation of the optimal-control approach in general: It is a global problem and hence requires global knowledge. On the other hand, a programmed-constraint specification of the task can still be defined without this knowledge. The presence of a local controller may make it possible to produce a trajectory that avoids this hidden obstacle. Without changing the DAI program from Case 6.5, we add the hidden obstacle, but only allow the robot to 'see' it when it has passed (x,y): the information contained in  c b 2 0  s>  15  o n i  y made available after t = t*. We have purposely de-  signed the program in Case 6.5 (particularly the choice r = 8) so that the program was likely not to need modification to avoid the hidden obstacle. The quantitative reasoning behind this is as follows: Figure 6.13 depicts a robot at the centre of a spotlight of radius r heading directly toward an obstacle of radius geometry allows us to calculate the minimum turning radius r — t  r bsG  Simple  required for  the robot to barely avoid this obstacle: r = l±^r, V t  (6.9)  where 77 = 2 ^ - . For an obstacle of size 3.5 and a turning radius of 25 cm, we find that r  10.8. Now, Figure 6.13 represents somewhat of a worse-case scenario:  the robot is headed directly towards the centre of the obstacle. Given that this is 116  Figure 6.13: Worst-case scenario of mobile robot approaching hidden obstacle. unlikely to be the case as the robot passes the obstacle, we choose a radius of 8 as a compromise to not being far from the optimal-control program while respecting equation (6.9). The result is shown in Figure 6.14, with the robot adjusting its trajectory at run time to avoid the new obstacle. The C P U time required for this run was 0.45 seconds. Clearly, there are definite advantages to be had from a local approach to robot programming. The DAI formulation with local solver avoided the hidden obstacle with absolutely no additional effort on the part of the programmer. On the other hand, the optimal-control problem would have to recompute the trajectory, thereby adding to an already expensive calculation.  A  Case 6.7 (Multiple Obstacles)  We have seen how programmed constraints can be used in conjunction with the knowledge of an optimal-control solution to produce a control that has a degree of robustness and is close to optimal. To complete the cycle of complementarity, we now demonstrate the use of DAI as an initial guess for a problem with multiple 117  200 Figure 6.14: DAI Solution for Problem with Hidden Obstacle obstacles. The arena is occupied by five obstacles of random size and position as depicted in Figure 6.15. The optimal-control problem is the same as before: Find  0  50  100 x  150  200  Figure 6.15: Location of Multiple Obstacles the minimum time solution to get from A to B while avoiding obstacles. Even in this case, the geometric solution to the optimal-control problem is still fairly clear; however, the values for the other unknowns such as control profile or minimum time  118  are less clear. 
Case 6.7 (Multiple Obstacles)

We have seen how programmed constraints can be used in conjunction with the knowledge of an optimal-control solution to produce a control that has a degree of robustness and is close to optimal. To complete the cycle of complementarity, we now demonstrate the use of a DAI as an initial guess for a problem with multiple obstacles. The arena is occupied by five obstacles of random size and position, as depicted in Figure 6.15.

Figure 6.15: Location of Multiple Obstacles

The optimal-control problem is the same as before: Find the minimum-time solution to get from A to B while avoiding obstacles. Even in this case, the geometric solution to the optimal-control problem is still fairly clear; however, the values for the other unknowns, such as the control profile or the minimum time, are less clear.

We would like a low-effort way to produce a good initial guess for our optimal-control solver. It should be inexpensive both in terms of computation time and programmer effort. The easiest guess would be to linearly interpolate between start and end points (despite parts of the path being infeasible) and choose some arbitrary control (for example, zero). This clearly satisfies both criteria of low effort. Unfortunately, the nonlinear equation solver of the optimal-control code [101] was not able to converge from such a poor initial guess.

An initial guess based on a shooting-type approach satisfies the criterion of low computational expense because the computational effort boils down to an explicit, initial-value ODE solve. It also has the advantage of being able to produce a consistent, feasible guess. However, the programmer will generally need to perform a large number of iterations on the control profile before finding a solution that comes reasonably close to satisfying the terminal boundary conditions. Moreover, if these terminal boundary conditions are changed, obstacles become nonstationary, or more obstacles are added, the programmer would find that there is essentially no return on the time invested in the initial program! The initial guess obtained via shooting that is used in the computations to follow is shown in Figure 6.16. Some effort was put into making this guess reasonably good so that it would not unduly affect the convergence of OCPRSQP.

Figure 6.16: Initial Guess from Shooting for Multi-Obstacle Problem

A DAI program that forces the robot to end up in a neighbourhood of the desired terminal conditions can be constructed as follows. In addition to the table constraints (1.3), we introduce two more pairs of constraints (both pairs are sketched in code after this list):

1. A pair of constraints in the form of two travelling waves that uniformly 'squeezes' the viability region down to a shape of size δ in the x-direction at time t = t_f. We take δ = 5 and choose t_f = 2.25 > t_opt = 2.15. The constraints themselves are then

$$c_{\text{squeeze}} = \begin{pmatrix} x - v_1 t \\ (x_{\max} - v_2 t) - x \end{pmatrix},$$

where v_1, v_2 are chosen such that the waves are located at x = x_min = 0 and x = x_max at t = 0, and at x = x_f − δ/2 and x = x_f + δ/2 at time t = t_f.

2. A pair of constraints in the form of artificial, stationary 'edges' that 'funnel' [87] the solution into a neighbourhood of B. Explicitly, these are

$$c_{\text{funnel}} = \begin{pmatrix} y - \tfrac{1}{2}x + 50 \\ -y - \tfrac{1}{2}x + 150 \end{pmatrix}.$$

We note that the program is easily modified to accommodate different boundary conditions, hence the time invested in constructing the first program is not entirely lost. A general program such as (6.8) can automatically handle different obstacles that may even be moving. Naturally, this approach produces a guess that is consistent and feasible.
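As a concrete illustration, the following sketch evaluates the two constraint pairs (feasible where all components are nonnegative). The arena width x_max and the target abscissa x_f are assumed values, since the coordinates of B are not restated in this excerpt; δ and t_f are taken from the text.

```python
import numpy as np

# Assumed problem data (illustrative only): arena width and target abscissa.
x_min, x_max = 0.0, 200.0
x_f, delta, t_f = 180.0, 5.0, 2.25

# Wave speeds chosen so the waves sit at x_min and x_max at t = 0 and at
# x_f -/+ delta/2 at t = t_f, as specified above.
v1 = (x_f - 0.5 * delta - x_min) / t_f
v2 = (x_max - (x_f + 0.5 * delta)) / t_f

def c_squeeze(x, t):
    """Travelling-wave 'squeeze' pair; feasible iff both components >= 0."""
    return np.array([x - v1 * t, (x_max - v2 * t) - x])

def c_funnel(x, y):
    """Stationary 'edge' pair funnelling the solution toward B."""
    return np.array([y - 0.5 * x + 50.0, -y - 0.5 * x + 150.0])

# At t = t_f the squeeze pair confines x to [x_f - delta/2, x_f + delta/2]:
print(c_squeeze(x_f, t_f), c_funnel(x_f, 50.0))
```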
The initial guess produced by this program is depicted in Figure 6.17. Also shown are the optimal-control solutions for two different tolerances.

Figure 6.17: DAI and Optimal Trajectories for Multi-Obstacle Problem

The following table shows the relative computation times to produce an optimal solution for different tolerances. The first column of timings uses an initial guess from the DAI program (6.8) and is listed under the column labelled CPU1. The second uses an initial guess obtained from shooting (CPU2). The times quoted are in seconds. Solving the DAI to produce the initial guess took 0.50 seconds.

TOL      CPU1     CPU2
10^-3    52.26    5.57
10^-4    122.82   126.97
10^-5    118.14   105.75
10^-6    109.70   83.69

Table 6.5: Optimal-control solutions using initial guesses obtained from shooting and DAI methods.

Comparing the results, we see that the savings in computation time are not great when starting from a reasonably accurate initial guess obtained from shooting. The savings that are significant are in terms of programmer time and program flexibility. It takes relatively little effort to construct the program and its solution. The initial guess obtained by means of a DAI was produced in 0.51 seconds of CPU time, or about 0.6% of the TOL = 10^-6 case. △

6.3 Robot Arm

As our final example, we give some results for the robot arm application described in Section 1.3.2. Simulations are carried out to a final time t_f = 40 s, representing almost 20 periods of motion in all cases. The following parameter values are used in all simulations reported in this section: m_1 = m_2 = 36 kg, l_1 = l_2 = 1 m, g = 9.81 m s^-2. In this example, equations (1.8) are solved with the initial conditions

$$\theta_1(0) = 70^\circ, \quad \theta_2(0) = -140^\circ, \quad \omega_1(0) = \omega_2(0) = 0. \tag{6.10}$$

Also, all results shown use u_frac = 0.5, n_samp = 1.

The default control is taken to be identically zero. This is done as an illustration of how the method can work despite limited knowledge on the part of the programmer. An alternative possibility for u_nat is to use the Lagrange multipliers for the corresponding DAE. If r is in fact small, this situation is analogous to a DAI implementation of optimal control (cf. Case 6.5). The Lagrange multiplier solution is in some sense optimal because the parabola (1.10) can be followed with r = 0. However, such a strategy would not be implementable in the presence of uncertainties or modelling errors. A DAI formulation of the problem with small r would circumvent this.

The qualitative comparison is rather unexciting because in all cases the robot arm simply swings in the feasible region. We note that the motions were very close to the actual parabolic constraint path given in (1.10) for all cases, even for those with relatively large r. As is typical in a DAI approach to programming, general motions are possible along with more specific ones. The general motion for larger r values may be appropriate, for example, if (moving) objects run the risk of intersecting the trajectory of the elbow.

r       0.005    0.01     0.1      0.5
        1500     1500     1500     550
CPU     97.125   97.125   97.125   99.05
(a)     0        0        0        0
(a)     0        0        0        0
        25.53    25.52    25.00    27.57

Table 6.6: Simulation results: Robot Arm

Figure 6.18: Motion of Robot Arm: r = 0.005

Figure 6.19: Motion of Robot Arm: r = 0.5

Summarizing, in this chapter we saw that the DAI solver described in Chapter 3 is an efficient method for several problem instances. We started with a simple example of a mobile robot executing very general motion and ended with the more complicated case of robot arm motion along a specific path. We have also examined possible scenarios for interaction between DAIs and optimal control, and illustrated some advantages of the local nature of our solution method. The ability of this method to deal with all of these problems clearly demonstrates its potential and applicability to problems in robotic programming.
Chapter 7

Conclusions

Like other new ideas in scientific computing, it will succeed on some problems and lose on others.
- Gil Strang

7.1 Summary

In this thesis, we have described a numerical method for solving the systems of differential-algebraic inequalities that arise from the programmed-constraint approach to robot programming. The procedure is applicable to similar problems where inequalities are a natural way to express unilateral constraints of an arbitrary nature. The programmed-constraint approach is appealing because programs are often geometrically motivated and intuitive. Moreover, programs can be developed incrementally. However, the range of possible tasks and desiderata for a robot program is so diverse that no single method can be the most appropriate in all cases.

A key motivation for a numerical method designed to solve DAIs is to reduce the amount of computation for such dynamic constraint-satisfaction problems by exploiting the fact that the feasibility region of the constraints typically has a large, non-empty interior. Our approach combines a delay-free discretization (usually composite Euler [109]) of the equations of motion with a constrained minimization to obtain a local control. Local planning is introduced, which provides a state-dependent test for future constraint violation and is used as a switching mechanism to activate new constraints. Our simulation results to date indicate that the method is efficient, often with only a small fraction of the time steps explicitly accounting for the constraints. We also report relatively small CPU times, in particular compared to CPU times for corresponding optimal-control solutions.

The local nature of the approach is new. This solution method is possible for the types of problems where a weaker specification of the control is desired, such as in the Least Constraint approach to robot programming. Because of this, a measure of computational efficiency and adaptability can be exploited. The natural extension of the above techniques to systems of DAEs subject to programmed constraints should increase the overall applicability; however, the approach is not applicable to every problem in robotics. For example, more investigation is required into problems that require finer control, such as docking or parts mating.

We have also seen how an efficient solution to the DAI formulation of more traditional robot programs can complement existing approaches, in particular optimal control, both as an effective starter and as a compensator for new information. Generally, there is no strict connection between the DAI and optimal-control solutions; however, DAIs are easy to program, and their solution adds minimal computational expense to the optimal-control calculation. Thus, they provide a flexible alternative to shooting and a more consistent framework than traditional holonomic motion planners. Also, the hidden-obstacle example (Case 6.6 in Section 6.2) illustrates a feature that is absent from motion planners: the potential for processing new information as it becomes available. Such a feature is imperative for successful autonomous robot programming.

The generalization of delay-free discretizations to higher order as implicit-explicit Runge-Kutta schemes represents a new and potentially powerful method for numerically integrating differential equations where the terms in the right-hand side are of a different nature. We have motivated their development, given examples of them, and shown that they offer significant computational advantage over their linear multistep counterparts in all the situations considered thus far.
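As a minimal illustration of the implicit-explicit idea (a first-order sketch only, not one of the higher-order IMEX Runge-Kutta schemes developed in the thesis), consider u' = Au + g(u, t) with the stiff linear part Au treated implicitly and g explicitly, so that each step solves (1 − hA) u_{n+1} = u_n + h g(u_n, t_n). The test problem below is invented for the example.

```python
import numpy as np

A = -50.0                            # stiff linear coefficient (toy value)
g = lambda u, t: np.cos(t)           # mild nonstiff forcing (toy choice)

def imex_euler(u0, t0, t1, n):
    """First-order IMEX Euler: implicit in A*u, explicit in g."""
    h = (t1 - t0) / n
    u, t = u0, t0
    for _ in range(n):
        u = (u + h * g(u, t)) / (1.0 - h * A)
        t += h
    return u

# Stable even though h = 0.05 far exceeds the explicit stability limit 2/|A|:
print(imex_euler(u0=1.0, t0=0.0, t1=1.0, n=20))
```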
The viewpoint of constructing a solution to a viability problem has led to a greater understanding of the problem definition and of what it is that a solution method should try to do. We have seen how the algorithm presented is a nonstandard selection method based on a nonstandard optimization criterion, whereby the relative position within the viability region is maintained from one time step to the next.

7.2 Directions for Future Research

There are many potential candidates for future investigation that arise out of the present dissertation. Suitable avenues emerge from approaching the method of programmed constraints from different perspectives, as well as from increasing the focus on particular aspects of the solution method discussed. A brief summary of broad classifications of future work follows.

• from the point of view of robotics: The move to more autonomous robotic systems from teleoperated ones is inevitable. There are many good reasons to want to take humans out of the feedback loop [81]. One of the more obvious reasons is that delays between action and response can be too long for meaningful human feedback. In this case, reactive systems must be able to fend for themselves and execute programs independent of human intervention.

From the point of view of simulation, additional study could be carried out into increasing the robustness of the solution method in more applications, especially when constraints are ill-conditioned (intersect at very small angles, etc.). This is a weakness of the programmed-constraint methodology [87].

A natural direction for future research is to test our algorithm in practice, with the most obvious candidates being the mobile robots [15]. At present, the success of our algorithm has a critical dependence on the existence of a reasonably accurate model for the system. Such a model is needed in Step 4 in order to elicit information on how a choice of control will influence future states. Unfortunately, no such model exists for the Dynamite system [78, 15]. We have performed some experimentation on developing such a model using standard system identification and parameter-estimation techniques [77, 83, 21, 11]. The process of matching experimental data with a differential equation (such as (1.2)) is often a nontrivial task, and the results we obtained have thus far proven unsatisfactory in that robust predictions could not be made under normal operating conditions. Hence, the next step to this end would entail a more serious effort at system identification. However, this in turn would typically require detailed analysis of the transfer functions associated with the driving motor and steering mechanism, study of the reliability of the power source, vision system, etc., all of which would be outside the scope of this thesis. (A 'cleaner' possibility may exist through the use of techniques from artificial intelligence [23]; however, results and reliable software are still forthcoming.) We note that our algorithm for solving DAIs is of course not restricted to problems associated with mobile robots. Thus, there is the distinct possibility of a future physical implementation on another system.

• applications from outside of robotics: In Section 1.3.3, we described how the programmed-constraint approach could be used with respect to a reformulation of the moving-mesh problem. In some respects, it represents a fundamentally new way of viewing this problem.
There is a diverse range of publications on this subject (see, for example, [56, 55, 92, 74, 75, 123, 57]), each one demonstrating effective behaviour in appropriate circumstances. However, a generally accepted method has yet to emerge. Many of the proposed methods also have interpretations as different stabilizations of DAEs [9, 75], such as Baumgarte stabilization [16]. Our idea for a DAI formulation of the moving-mesh problem is the following (see also [9]): For simplicity, consider a one-dimensional partial differential equation

$$v_t = f(v, v_x, v_{xx}), \qquad a < x < b, \quad t > 0.$$

A method-of-lines discretization on a spatial mesh

$$a = x_0 < x_1 < \cdots < x_N = b$$

yields a set of ODEs for the solution $v_i$ at each node $x_i$:

$$\dot{v}_i = f(v_i, D_1 v_i, D_2 v_i),$$

where $D_j$, $j = 1, 2$, is a (compact) discretization scheme for the $j$th spatial derivative. If we now allow the (internal) mesh points to move and assign control variables $u_i$ to the mesh-point velocities, we obtain the moving-mesh equations

$$\dot{v}_i = f(v_i, D_1 v_i, D_2 v_i) + (D_1 v_i)\, u_i, \qquad \dot{x}_i = u_i, \qquad i = 1, 2, \ldots, N-1.$$

In Section 1.3.3, we mentioned reasons why strict enforcement of equidistribution may not be the best criterion for mesh movement. A natural way to express a discretization of the arclength monitor function as an inequality is

$$(x_i - x_{i-1})^2 + (v_i - v_{i-1})^2 \le \theta(t),$$

where $\theta(t)$ represents some measure of the equiportion, i.e., the average value of the monitor function on each subinterval. For the purposes of numerical stability, this is combined with a local quasiuniformity condition

$$\frac{K}{K+1} \;\le\; \frac{x_i - x_{i-1}}{x_{i-1} - x_{i-2}} \;\le\; \frac{K+1}{K},$$

which limits the amount by which a mesh can change between successive time steps [40]. In practice, K is taken equal to 1 or 2. This formulation represents a novel albeit nontraditional application for DAIs and our solution algorithm; a small sketch of these mesh inequalities is given below.
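The sketch below shows how the two mesh inequalities might be monitored, assuming for simplicity that θ(t) is supplied as a single number; the test function and mesh are invented for the example.

```python
import numpy as np

def mesh_constraints(x, v, theta, K=2):
    """Residuals (>= 0 when feasible) of the arclength inequality and the
    local quasiuniformity condition for mesh x and nodal values v."""
    dx = np.diff(x)
    dv = np.diff(v)
    arclength = theta - (dx**2 + dv**2)       # theta(t) - [(dx_i)^2 + (dv_i)^2]
    ratio = dx[1:] / dx[:-1]                  # successive spacing ratios
    quasi = np.minimum(ratio - K / (K + 1.0), (K + 1.0) / K - ratio)
    return arclength, quasi

# Toy usage: a mesh graded toward x = 0 for v(x) = tanh(10 x).
x = np.linspace(-1.0, 1.0, 21)**3             # monotone, clustered near 0
v = np.tanh(10.0 * x)
theta = np.mean(np.diff(x)**2 + np.diff(v)**2)
arc, quasi = mesh_constraints(x, v, theta)
print(arc.min(), quasi.min())                 # negative entries flag violations
```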
• IMEX RK discretization schemes: Three avenues of further research present themselves:

1. The generation of higher-order 'explicit' symplectic methods as described by (5.9) for partitioned Hamiltonian systems would have immediate application to conservative systems such as those found in molecular dynamics [107, 71] or astronomy [49]. These are two of the leading research areas with a vested interest in symplectic numerical integrators.

2. The use of upwinding in numerical methods for advection problems can be interpreted as the addition of artificial diffusion to a hyperbolic, or inviscid, term. It has been argued that a different criterion than the usual von Neumann analysis is appropriate when upwinding is considered (see, for example, [106]). It seems natural that the original inviscid term be treated explicitly, whereas the artificial-diffusion contribution be treated implicitly. A successful time discretization in the presence of shocks and other discontinuous effects involves the use of total variation diminishing (TVD) schemes [105, 104]. Thus, RK-IMEX schemes with a TVD explicit component would be of great interest.

3. Naturally, as with all new methods, we will be interested in simply testing the method out in a number of diverse applications. There are many problems where a semi-implicit approach such as that offered by IMEX RK offers computational advantage. For example, the immersed-fibre problem (see [91, 110] and the references therein) naturally divides into terms that ideally should be dealt with in different ways: there are stiff, diffusion-related components and highly nonlinear forcing terms. Linear stability analysis has shown the existence of stable, but highly oscillatory, solution modes [110]. Current solution methods have concentrated on explicit time stepping; however, the time-step restrictions remain quite severe [110]. Thus, a semi-implicit method such as IMEX RK can be expected to significantly improve the time stepping. As another example, the Gierer-Meinhardt equations have been used as a model for problems in pattern formation [59, 100, 47]. These equations also divide naturally into terms requiring different treatment. Similar problems have been treated in [100] by implicit-explicit linear multistep schemes. Their numerical solution is challenging [59]. Hence, this problem is well suited for an improved treatment by IMEX RK methods due to their enhanced stability region.

• DAEs and DAIs: The addition of equality constraints to a system such as (1.1) has two compelling facets. First, including DAEs permits complicated robotic systems to be modelled in a more natural way, without transforming to a minimal coordinate system through elimination of the constraints (see, for example, [51]). Thus, control of other robotic systems like the platonic beast [88] or other walking machines [86] could be programmed by means of programmed constraints. Second, from the point of view of theory, investigation is warranted into the interaction between viability (weak invariance) and the algebraic-equation component of the DAE (strong invariance).

• more unified analysis: Typical results like proximal aiming [32] from viability theory [13, 14], stable bridges [111] from differential game theory [60, 73], and numerical methods for ODEs subject to inequalities [49, 120, 46, 103] do not offer control strategies for when the system is inside the viability set: action is only taken when the system reaches the boundary. In some situations, this is the only possible course of action. However, due to the local uncontrollability of nonholonomic systems, this is clearly unacceptable in many robotics applications [81, 76, 54]. There is certainly much scope for more analysis and development of (planning) strategies that would lead to a better understanding of the relationships between these aspects.

Bibliography

[1] S. Abarbanel, D. Gottlieb, and M. Carpenter. On the removal of boundary errors caused by Runge-Kutta integration of nonlinear partial differential equations. SIAM J. Scient. Comput., 17:777-782, 1996.

[2] U. Ascher, H. Chin, L. Petzold, and S. Reich. Stabilization of constrained mechanical systems with DAEs and invariant manifolds. J. Mech. Struct. Machines, 23:135-158, 1995.

[3] U. Ascher, H. Chin, and S. Reich. Stabilization of DAEs and invariant manifolds. Numer. Math., 67:131-149, 1994.

[4] U. Ascher and P. Lin. Sequential regularization methods for higher index DAEs with constraint singularities: I. linear index-2 case. SIAM J. Numer. Anal., to appear.

[5] U. Ascher and P. Lin. Sequential regularization methods for nonlinear higher index DAEs. SIAM J. Scient. Comput., to appear.

[6] U. Ascher and L. Petzold. Projected collocation for higher-order higher-index differential-algebraic equations. J. Comp. Appl. Math., 43:243-259, 1992.

[7] U. Ascher, S. Ruuth, and B. Wetton. Implicit-explicit methods for time-dependent PDE's. SIAM J. Numer. Anal., 32:797-823, 1995.

[8] U. Ascher and R. Spiteri. Collocation software for boundary value differential-algebraic equations. SIAM J. Scient. Comp., 15:938-952, 1994.

[9] U. Ascher and R. Spiteri. DAE models for moving mesh methods.
In preparation, 1997.

[10] U.M. Ascher, S.J. Ruuth, and R.J. Spiteri. Implicit-explicit Runge-Kutta methods for time-dependent partial differential equations. Applied Numerical Math., Special Issue on Innovative Time Integrators, 1997. To appear.

[11] K.J. Astrom and P. Eykhoff. System identification - a survey. Automatica, 7:123-167, 1971.

[12] J.P. Aubin. A survey of viability theory. SIAM J. Control and Optimization, 28(4):749-788, 1990.

[13] J.P. Aubin. Viability Theory. Birkhauser, Berlin, 1991.

[14] J.P. Aubin and A. Cellina. Differential Inclusions. Springer Verlag, New York, 1984.

[15] R. Barman, S. Kingdon, J.J. Little, A.K. Mackworth, D.K. Pai, M. Sahota, H. Wilkinson, and Y. Zhang. Dynamo: real-time experiments with multiple mobile robots. In Proceedings of the IEEE Intelligent Vehicles Conference, pages 261-266, Tokyo, Japan, July 1993.

[16] J. Baumgarte. Stabilization of constraints and integrals of motion in dynamical systems. Comp. Methods Appl. Mech., 1:1-16, 1972.

[17] R.E. Bellman. Adaptive control processes: a guided tour. Princeton University Press, Princeton, 1961.

[18] R.E. Bellman. Introduction to the mathematical theory of control processes. Academic Press, New York, 1967.

[19] R.E. Bellman and S. Dreyfus. Applied dynamic programming. Princeton University Press, Princeton, 1962.

[20] J. Blom and P. Zegeling. A moving-grid interface for systems of one-dimensional time-dependent partial differential equations. ACM TOMS, 20(2):194-214, 1994.

[21] H.G. Bock. Recent advances in parameter identification techniques for ODEs. In P. Deuflhard and E. Hairer, editors, Workshop on Numerical Treatment of Inverse Problems in Differential and Integral Equations, 1982.

[22] J. Borenstein and Y. Koren. Real-time obstacle avoidance for fast mobile robots. IEEE Trans. Syst. Man Cyber., 19(5):1179-1187, 1989.

[23] E. Bradley and R. Stolle. Automatic construction of accurate models of physical systems. Annals of Mathematics and Artificial Intelligence, 17:1-28, 1996.

[24] K. Brenan, S. Campbell, and L. Petzold. Numerical Solution of Initial-Value Problems in Differential-Algebraic Equations. North-Holland, 1989.

[25] R.W. Brockett. Asymptotic stability and feedback stabilization. In R.W. Brockett, R.S. Millman, and H.J. Sussmann, editors, Differential Geometric Control Theory, pages 181-191. Birkhauser, Boston, 1983.

[26] C. Budd, G. Koomullil, and A. Stuart. On the solution of convection-diffusion boundary value problems by grid adaptation. Technical report, Math. Res. Rep. AM95-04, Univ. of Bristol, 1995.

[27] J.F. Canny. The complexity of robot motion planning. PhD thesis, MIT, Dept. Electrical Eng. Computer Sci., May 1987.

[28] C. Canuto, M.Y. Hussaini, A. Quarteroni, and T.A. Zang. Spectral Methods in Fluid Dynamics. Springer-Verlag, 1987.

[29] M. Carpenter, D. Gottlieb, S. Abarbanel, and W. Don. The theoretical accuracy of Runge-Kutta time discretizations for the initial-boundary value problem: A study of the boundary error. SIAM J. Scient. Comput., 16:1241-1252, 1995.

[30] C.W. Carroll. The created response surface technique for optimizing nonlinear, restrained systems. Operations Res., 9:169-184, 1961.

[31] E. Charniak and D. McDermott. Introduction to Artificial Intelligence. Addison-Wesley, New York, 1986.

[32] F.H. Clarke, Yu.S. Ledyaev, R.J. Stern, and P.R. Wolenski. Qualitative properties of trajectories of control systems: A survey. Journal of Dynamical and Control Systems, 1(1):1-48, 1995.

[33] F.H. Clarke, Yu.S. Ledyaev, E.D. Sontag, and A.I. Subbotin. Asymptotic controllability implies feedback stabilization.
Preprint.

[34] J.D. Coope and R. Fletcher. Some numerical experience with globally convergent algorithms for nonlinearly constrained optimization. J. Opt. Theo. Applns, 32(4):1-16, 1979.

[35] R. Courant. Variational methods for the solution of problems of equilibrium and vibration. Bull. Amer. Math. Soc., 49:1-23, 1943.

[36] M. Crouzeix. Une methode multipas implicite-explicite pour l'approximation des equations d'evolution paraboliques. Numer. Math., 35:257-276, 1980.

[37] A.W. Divelbiss and J. Wen. Nonholonomic path planning with inequality constraints. IEEE, pages 52-57, 1994.

[38] B.R. Donald. A search algorithm for motion planning with six degrees of freedom. Artificial Intelligence, 31(3), 1987.

[39] A. Dontchev and F. Lempio. Difference methods for differential inclusions: a survey. SIAM Review, 34(2):263-294, 1992.

[40] E.A. Dorfi and L.O'c. Drury. Simple adaptive grids for 1-D initial value problems. J. Comp. Phys., 69:175-195, 1987.

[41] A.V. Fiacco and G.P. McCormick. Nonlinear Programming: Sequential Unconstrained Minimization Techniques. John Wiley and Sons, New York, 1968.

[42] R. Fletcher. An ideal penalty function for constrained optimization. J. Inst. Maths. Applns, 15:319-342, 1975.

[43] R. Fletcher. Practical Methods of Optimization, Volume I. John Wiley and Sons, Chichester, 1980.

[44] R. Fletcher. Practical Methods of Optimization, Volume II. John Wiley and Sons, Chichester, 1981.

[45] K.R. Frisch. The logarithmic potential method of convex programming, 1955. Memorandum, University Institute of Economics, Oslo.

[46] C.W. Gear. Maintaining solution invariants in the numerical solution of ODEs. SIAM J. Sci. Stat. Comp., 7:734-743, 1986.

[47] A. Gierer and H. Meinhardt. A theory of biological pattern formation. Kybernetik, 12:30-39, 1972.

[48] E. Griepentrog. Gemischte Runge-Kutta-Verfahren fur steife Systeme. In Seminarbericht Nr. 11, Sekt. Math., pages 19-29, Humboldt-Universitat Berlin, 1978.

[49] E. Hairer, S.P. Norsett, and G. Wanner. Solving Ordinary Differential Equations I: Nonstiff Problems. Springer-Verlag, 1993.

[50] E. Hairer and G. Wanner. Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems. Springer-Verlag, 1991.

[51] E.J. Haug. Computer-Aided Kinematics and Dynamics of Mechanical Systems, Vol. I: Basic Methods. Allyn and Bacon, 1989.

[52] M.R. Hestenes. Multiplier and gradient methods. J. Opt. Theo. Applns, 4:303-320, 1969.

[53] N. Hogan. Impedance control: An approach to manipulation. ASME Journal of Dynamics Systems, 107:1-7, March 1985.

[54] H.P. Huang and N.H. McClamroch. Time-optimal control for a robotic contour following problem. IEEE Journal of Robotics and Automation, 4(2):140-149, 1988.

[55] W. Huang, Y. Ren, and R. Russell. Moving mesh methods based on moving mesh partial differential equations. J. Comp. Phys., 113:279-290, 1994.

[56] W. Huang, Y. Ren, and R.D. Russell. Moving mesh partial differential equations (MMPDEs) based on the equidistribution principle. SIAM J. Numer. Anal., 31:709-730, 1994.

[57] J. Hyman and B. Larrouturou. Dynamic rezone methods for partial differential equations in one space dimension. Applied Numerical Mathematics, 5:435-450, 1989.

[58] L. Ingber. Simulated annealing: Practice versus theory. Mathematical Computer Modelling, 18(11):29-57, 1993.

[59] D. Iron. Spectral Analysis of the Gierer-Meinhardt Equations. Master's thesis, Department of Mathematics, University of British Columbia, Vancouver, Canada, 1997. In preparation.

[60] R. Isaacs.
Differential games; a mathematical theory with applications to warfare and pursuit, control and optimization. J. Wiley and Sons, New York, 1965.

[61] J. Funda, R. Taylor, K. Gruben, and D. La Rose. Optimal motion control for teleoperated surgical robots. In SPIE Symposium on Optical Tools for Manufacturing and Advanced Automation, Boston, MA, September 1993.

[62] J.G. Verwer, J.G. Blom, and W. Hundsdorfer. An implicit-explicit approach for atmospheric transport-chemistry problems. Applied Numerical Mathematics, 20:191-209, 1996.

[63] G.E. Karniadakis, M. Israeli, and S.A. Orszag. High-order splitting methods for the incompressible Navier-Stokes equations. J. Computational Phys., 97:414-443, 1991.

[64] O. Khatib. Real time obstacle avoidance for manipulators and mobile robots. The International Journal of Robotics Research, 5(1):90-99, 1986.

[65] J. Kim and P. Moin. Application of a fractional-step method to incompressible Navier-Stokes equations. J. Computational Phys., 59:308-323, 1985.

[66] D.E. Koditschek. Robot planning and control via potential functions. In The Robotics Review 1. MIT Press, 1989.

[67] D.E. Koditschek. The control of natural motion in mechanical systems. ASME Journal of Dynamics Systems, Measurement and Control, 111(4):547-551, 1991.

[68] D.E. Koditschek. Some applications of natural motion control. ASME Journal of Dynamics Systems, Measurement and Control, 113:552-557, 1991.

[69] J.C. Latombe. Robot Motion Planning. Kluwer, 1991.

[70] Yu.S. Ledyaev. Criteria for viability of trajectories of nonautonomous differential inclusions and their applications. Journal of Mathematical Analysis and Applications, 182:165-188, 1994.

[71] B. Leimkuhler and R. Skeel. Symplectic integrators in constrained hamiltonian systems. J. Comp. Phys., 112:117-125, 1994.

[72] W. Leler. Constraint Programming Languages: Their Specification and Generation. Addison-Wesley, New York, 1988.

[73] J. Lewin. Differential games: theory and methods for solving game problems with singular surfaces. Springer-Verlag, New York, 1994.

[74] S. Li and L. Petzold. Moving mesh methods with upwinding schemes for time-dependent PDEs. Dept. of Computer Science and Army High Performance Research Center, Univ. of Minnesota, 1996.

[75] S. Li, L. Petzold, and Y. Ren. Stability of moving mesh systems of partial differential equations. Dept. of Computer Science and Army High Performance Research Center, Univ. of Minnesota, 1996.

[76] Z. Li and J.F. Canny, editors. Nonholonomic Motion Planning. Kluwer, Boston, 1993.

[77] L. Ljung. System Identification: Theory for the User. Prentice-Hall, Inglewood Cliffs, New Jersey, 1987.

[78] M.K. Sahota. Real-Time Intelligent Behaviour in Dynamic Environments: Soccer-playing Robots. Master's thesis, Department of Computer Science, University of British Columbia, Vancouver, Canada, 1991.

[79] K. Miller and R.N. Miller. Moving finite elements I. SIAM J. Numer. Anal., 18:1019-1032, 1981.

[80] F. Miyazaki and S. Arimoto. Sensory feedback based on the artificial potential for robots. In 9th IFAC, Budapest, Hungary, 1984.

[81] R.M. Murray, Z. Li, and S.S. Sastry. A Mathematical Introduction to Robotic Manipulation. CRC Press, Boca Raton, Fla., 1994.

[82] W.S. Newman and N. Hogan. High speed robot control and obstacle avoidance using dynamic potential functions. In IEEE Int. Conf. Robotics Automat., pages 14-24, Raleigh, NC, 1987.

[83] J.P. Norton. Introduction to Identification. Academic Press, New York, 1986.

[84] P. Lotstedt.
Numerical simulation of time-dependent contact and friction problems in rigid body mechanics. SIAM J. Sci. Stat. Comput., 5(2):370-393, 1984.

[85] P. Lotstedt. Numerical simulation of time-dependent contact and friction problems in rigid-body mechanics. SIAM J. Sci. Stat. Comput., 5(2), 1984.

[86] D.K. Pai. Programming anthropoid walking: Control and simulation. Technical Report TR90-1178, Cornell University, 1990.

[87] D.K. Pai. Least Constraint: A framework for the control of complex mechanical systems. In Proceedings of the American Control Conference, pages 615-621, 1991.

[88] D.K. Pai, R.A. Barman, and S.K. Ralph. Platonic beasts: a new family of multilimbed robots. In Proceedings of the IEEE Conference on Robotics and Automation, 1994.

[89] W.J. Palm. Modeling, Analysis, and Control of Dynamic Systems. John Wiley and Sons, 1983.

[90] V.V. Pavlov and A.N. Voronin. The method of potential functions for coding constraints of the external space in an intelligent mobile robot. Soviet Automatic Control, 6, 1984.

[91] C.S. Peskin. Numerical analysis of blood flow in the heart. J. Comp. Phys., 25:220-252, 1977.

[92] L. Petzold. Observations on an adaptive moving grid method for one-dimensional systems of partial differential equations. Applied Numerical Mathematics, 3:347-360, 1987.

[93] L.R. Petzold. A description of DASSL: a differential/algebraic system solver. Transactions on Scientific Computation, 1, 1982.

[94] M.J.D. Powell. A method for nonlinear constraints in minimization problems. In R. Fletcher, editor, Optimization. Academic Press, London, 1969.

[95] W.H. Press, S.A. Teukolsky, W.T. Vetterling, and B.P. Flannery. Numerical Recipes: the Art of Scientific Computing. Cambridge University Press, second edition, 1992.

[96] E. Rimon and D.E. Koditschek. Exact robot navigation using artificial potential functions. IEEE Transactions on Robotics and Automation, 8(5):501-518, 1992.

[97] R.T. Rockafellar. Convex Analysis. Princeton University Press, Princeton, 1970.

[98] R.D. Ruth. A canonical integration technique. IEEE Trans. Nuclear Science, NS-30:2669-2671, 1983.

[99] S. Ruuth. Implicit-explicit methods for time-dependent PDE's. Master's thesis, Inst. Appl. Math., Univ. of British Columbia, Vancouver, 1993.

[100] S. Ruuth. Implicit-explicit methods for reaction-diffusion problems in pattern formation. J. Math. Biology, 34(2):148-176, 1995.

[101] V. Schulz. Reduced SQP Methods for Large-Scale Optimal Control Problems in DAE with Application to Path Planning Problems for Satellite Mounted Robots. PhD thesis, Interdisziplinares Zentrum fur Wissenschaftliches Rechnen, Universitat Heidelberg, 1996.

[102] J.T. Schwartz and M. Sharir. On the "piano movers" problem. II. General techniques for computing topological properties of real algebraic manifolds. Advances Appl. Math., 4(1):298-351, 1983.

[103] L.F. Shampine. Conservation laws and the numerical solution of ODEs. Comp. and Maths. with Appls., Part B, 12:1287-1296, 1986.

[104] C. Shu. Total-variation-diminishing time discretizations. SIAM J. Sci. Stat. Comput., 9(6):1073-1084, 1988.

[105] C. Shu and S. Osher. Efficient implementation of essentially nonoscillatory shock-capturing schemes. J. Comp. Phys., 77:439-471, 1988.

[106] D. Sidilkover and U. Ascher. A multigrid solver for the steady state Navier-Stokes equations using the pressure-Poisson formulation. Comp. Appl. Math. (SBMAC), 14:21-35, 1995.

[107] R.D. Skeel, G. Zhang, and T. Schlick. A family of symplectic integrators: stability, accuracy, and molecular dynamics applications.
SIAM J. Sci. Comput., 18(1):203-222, 1997.

[108] E.D. Sontag and H.J. Sussmann. Remarks on continuous feedback. In IEEE Conference on Decision and Control, pages 916-921, Piscataway, December 1980. IEEE Publications.

[109] R. Spiteri, U. Ascher, and D. Pai. Numerical solution of differential systems with algebraic inequalities arising in robot programming. In Proceedings of the IEEE Conference on Robotics and Automation, pages 2373-2380, Nagoya, 1995.

[110] J.M. Stockie and B.R. Wetton. Stability analysis for the immersed fiber problem. SIAM Journal Applied Math, 55:1577-1591, 1995.

[111] A. Subbotin. Generalized solutions of first-order PDEs. Birkhauser, Boston, 1995.

[112] K. Taubert. Differenzenverfahren fur Schwingungen mit trockener und zaher Reibung und fur Regelungssysteme. Num. Math., 26:379-395, 1976.

[113] J. Van Tiel. Convex Analysis. John Wiley and Sons, New York, 1984.

[114] R.B. Tilove. Local obstacle avoidance for mobile robots based on the method of artificial potentials. In IEEE Int. Conf. Robotics Automat., pages 566-571, Cincinnati, OH, May 1990.

[115] T. Lozano-Perez. Spatial planning: A configuration space approach. IEEE Transactions on Computers, C-32(2):108-120, 1983.

[116] S. Turek. A comparative study of time stepping techniques for the incompressible Navier-Stokes equations: from fully implicit non-linear schemes to semi-explicit projection methods. Int. J. Num. Meth. in Fluids, 22:987-1011, 1996.

[117] R.J. Vanderbei. An algorithmic and numerical comparison of several interior-point methods. Technical Report SOR-94-05, Princeton University, March 1994.

[118] J.M. Varah. Stability restrictions on second order, three level finite difference schemes for parabolic equations. SIAM J. Numerical Analysis, 17(2):300-309, 1980.

[119] J. Verwer, J. Blom, and J. Sanz-Serna. An adaptive moving grid method for one-dimensional systems of partial differential equations. J. Comp. Phys., 82:454-486, 1989.

[120] R. von Schwerin and M. Winckler. A guide to the integrator library MBSSIM, Version 1.00. Technical report, Interdisziplinares Zentrum fur Wissenschaftliches Rechnen, Universitat Heidelberg, 1994.

[121] B.O. Watkins. Introduction to Control Systems. The MacMillan Company, New York, 1969.

[122] J.T. Wen. Control Systems Handbook (ed. W. Levine), pages 1359-1368. CRC Press, 1995.

[123] A.B. White. On the numerical solution of initial/boundary-value problems in one space dimension. SIAM J. Numer. Anal., 19:683-697, 1982.

[124] S. Wright. A path-following infeasible-interior-point algorithm for linear complementarity problems. Preprint MCS-P334-1192, November 1992.

[125] Y. Zhang and A. Mackworth. Constraint programming in constraint nets. In First Workshop in Principles and Practice of Constraint Programming, 1993.

Appendix A

Minimization Algorithm

Consider the following nonlinear programming problem:

$$\min_u F(u), \qquad u \in \Re^{n}, \tag{A.1a}$$

subject to

$$c(u) \ge 0. \tag{A.1b}$$

We consider the solution of (A.1) by a penalty function method; that is, we solve a sequence of unconstrained minimization problems in which the objective function F is augmented with terms that penalize the violation of the constraints (A.1b). From [94], a suitable penalty function for (A.1) is

$$\phi(u, \theta, \Xi) = F(u) + \tfrac{1}{2}\,\big(c(u) - \theta\big)_-^T\, \Xi\, \big(c(u) - \theta\big)_-, \tag{A.2}$$

where $\theta \in \Re^{n_{\mathrm{cnstr}}}$, $\Xi \in \Re^{n_{\mathrm{cnstr}} \times n_{\mathrm{cnstr}}}$, and $(w)_-$ denotes the componentwise nonpositive part $\min(w, 0)$. The matrix $\Xi$ is diagonal and consists of weights $\xi_i > 0$, $i = 1, 2, \ldots, n_{\mathrm{cnstr}}$.
Hence, (A.2) may be equivalently written as

$$\phi(u, \theta, \xi) = F(u) + \frac{1}{2}\sum_{i=1}^{n_{\mathrm{cnstr}}} \xi_i\,\big(c_i(u) - \theta_i\big)_-^2. \tag{A.3}$$

For notational convenience, we often combine these diagonal entries into a vector ξ.

The earliest penalty function was used by Courant [35] in the context of equality-constrained programming problems; in this case, θ = 0. We refer to [41, 44] for discussion of the theoretical results for this method. We also note that equation (A.2) represents a generalization of the ideas originally put forth in [35] to the inequality-constrained programming problem.

The standard penalty function algorithm is the following: for fixed values of θ and ξ, an intermediate solution u^(s)(θ, ξ) is found as an unconstrained minimizer of (A.3). Then θ and ξ are iterated over in such a way that the sequence of unconstrained minimizers {u^(s)} converges to the constrained solution u* of (A.1).

Algorithm A.1 (Standard Penalty Function)

1. Choose a sequence ξ^(s) → ∞ (typically, {1, 10, 10², ...}).

2. For each ξ^(s), solve (A.3) for a local minimizer u^(s) = u(ξ^(s)).

3. Terminate under a suitable convergence criterion, such as ||u^(s) − u^(s−1)|| < ε, for ε sufficiently small. △

Common drawbacks of this method include:

• the inefficiency associated with solving many minimization subproblems,

• the fact that the minimization subproblems are not exactly solvable, and

• the increased ill-conditioning of the minimization subproblems as ξ_i^(s) → ∞.

The typical sequence for the ξ^(s) suggested in Step 1 is meant as a compromise: {ξ^(s)} increases in such a way as to balance the difficulty of finding accurate solutions for large values against ensuring that enough progress is made between increments toward the solution of the original problem (A.1). The ill-conditioning associated with large ξ tends to have the largest effect on the performance of the penalty method [44]. A more well-conditioned method avoids this problem by iterating toward the optimal value of θ for fixed and finite ξ. We begin to describe such a method by defining the vector $\lambda \in \Re^{n_{\mathrm{cnstr}}}$ with components

$$\lambda_i := \xi_i\,\theta_i. \tag{A.4}$$

Substituting (A.4) into (A.3) and ignoring terms independent of u, we have

$$\phi(u, \lambda) = F(u) + \frac{1}{2}\sum_{i=1}^{n_{\mathrm{cnstr}}} \xi_i\left(c_i(u) - \frac{\lambda_i}{\xi_i}\right)_-^2. \tag{A.5}$$

We now have the following theorem (see, for example, [44] for a proof):

Theorem A.1 If second-order conditions hold at (u*, λ*), then there exists a ξ' > 0 such that for all ξ > ξ', u* is an isolated local minimizer of (A.5), i.e., u* = u(λ*).

The value λ* is also the Lagrange multiplier at the solution to the original problem (A.1). Because this result holds independent of ξ, we suppress this argument henceforth and write φ = φ(u, λ). An algorithm for a well-conditioned penalty function is as follows:

Algorithm A.2 (Well-Conditioned Penalty Function)

1. Determine a sequence {λ^(s)} → λ*.

2. For each λ^(s), compute u^(s) := u(λ^(s)) = min_u φ(u, λ^(s)). (The continuity of φ allows us to choose the more advantageous placement of the equality condition in the following.)
Hence, methods for generating a sequence { A ^ } —>• A* can be derived from unconstrained minimization algorithms applied to —ij}{X). Using assumptions of strict complementarity at the solution and a regularity condition, it can be shown that V tp is negative definite. This leads to a natural 2  choice for the sequence { A ^ } : the sequence generated by Newton's method [43, 44]. Given an initial estimate A' ), the Newton iteration is 0  W =  -  W-^Lt^y  The appeal of this choice stems mainly from two convenient results [44, 42] that allow us to circumvent the usual drawbacks associated with the implementation of Newton's method: V^=-min^ («(A)),^ , C i  and, for large £,  This leads to the particularly simple iteration A(') = A* " * - min 8  1  146  fac^,  A ^ ) .  (A.6)  We now describe an ideal penalty function algorithm that is is used to solve the barrier minimization problem (3.5). The algorithm is designed to achieve linear convergence at no worse than rate (5. Algorithm A.3 (Ideal Penalty Function) 1. Initialize A = A^ ', £ =  iteration counter k = 0, and  0  HVVHIoo = °°-  (TypicaUy, we use A* ) = 0 and f (°) = 1.) 0  2. Solve  «( )(A,0 = min<X«,A,0 s  and let  c = c(«(*>(A,0). 3. Terminate if || V^> < | x> has converged sufficiently to 0. 4. Check for sufficient rate of convergence: If  I I V^lloo > / 5 | | V ^ - ) | | , 1  0 O  then set & <— 10^ for all i such that  |V^-| > 9||V^ - ||oo (  1)  )  and return to Step 2. 5. Step accepted: increment counter and update parameters, k*-k + l, \  { s )  <- A,  c.  6. Determine the next A in the sequence from (A.6) and return to Step 2.  147  A  In practice, f3 is chosen to be 0.25. We note that in Step 4, if any component Cj- is not reduced by this rate, the corresponding  is increased in a suitable fash-  ion, similar to Step 1 of Algorithm A . l . It can be shown that termination of this algorithm is guaranteed (see [44] for more details). As a final remark on Algorithm A.3, we note that it can still suffer somewhat from the ill-conditioning produced with large £ as discussed previously, but not to the extent of the standard penalty function method [42, 44]. Also in general, if the possibility of there being no solution goes undetected, £ can increase without bound, leading to floating-point exceptions. In our experiences using Algorithm A.3 to solve the barrier minimization problem (3.5), we have not found these considerations to be significant.  148  

