UBC Theses and Dissertations

Planning dynamic vehicle motion using move trees Adam, Andrew 2007



Planning Dynamic Vehicle Motion using Move Trees

by

Andrew Adam

B.Sc., The University of Essex, 2002

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Science

The Faculty of Graduate Studies (Computer Science)

The University of British Columbia

August 22, 2007

© Andrew Adam 2007

Abstract

Generating skilled and well-planned behaviours for autonomous agents is a challenging problem common to both computer animation and robotics. This thesis presents a system that uses motion graphs for online motion planning, resulting in skilled driving behaviours for a dynamic model of a car in a constrained environment. It is a particular challenge to get the cars to produce skidding-into-turn behaviours when approaching sharp corners, which can achieve the fastest speeds around a track. The techniques explored in this thesis are potentially generalizable to other dynamic vehicle behaviours, in computer games or simulations. We demonstrate that a well-formed move tree or motion graph, created from the output of a physics-based simulation, can be used to produce realistic steering behaviours on a variety of tracks. We show that a finite-horizon A* search algorithm is well suited to this task. We have produced a number of smooth animations that demonstrate considerable anticipation and agility, be it through acceleration/deceleration around tricky obstacles, or a hard skidding turn into a corner after approaching at high speed. Finally, we offer a number of ways that we could speed up the algorithms for future work in this area.
Contents

Abstract  ii
Contents  iii
List of Tables  vi
List of Figures  vii
Acknowledgements  viii

1 Introduction  1
  1.1 Motivation  1
    1.1.1 Motion Control  1
  1.2 Autonomous Vehicle Control  3
    1.2.1 Real World Systems  3
    1.2.2 Simulated Systems  5
  1.3 Goals and Scope  6
  1.4 Contributions  7
  1.5 Thesis Outline  7

2 Related Work  8
  2.1 Motion Graphs  8
    2.1.1 Finding Transitions  10
  2.2 More Data Driven Animation Techniques  11
  2.3 Autonomous Behaviours and Planning Algorithms  13

3 Vehicle Steering  18
  3.1 Vehicle Model  18
  3.2 Pseudocode Planning Algorithm  19
  3.3 Data Generation  20
  3.4 Finding Similar Frames  20
  3.5 Creating and Expanding Routes  21
    3.5.1 A* Search  22
  3.6 Route Selection and Clip Alignment  23
  3.7 Collision Detection  24

4 Results  26
  4.1 Move Tree  26
    4.1.1 Long Straight Track with Other Cars  27
    4.1.2 Cornered Track  28
    4.1.3 Cornered Track with Other Cars  29
  4.2 Large Move Tree  30
    4.2.1 Long Straight Track with Other Cars  31
    4.2.2 Cornered Track  32
    4.2.3 Cornered Track with Other Cars  33
    4.2.4 Sharp Cornered Track  33
    4.2.5 Final Track  34
  4.3 Discussion  35

5 Discussion  38
  5.1 Graph Transition Selection  38
  5.2 Graph Pruning  39
  5.3 Improving A*  40
  5.4 Simplistic Goal Selection  40
  5.5 Track Generation  40
  5.6 Collision Detection  41
  5.7 Additional Mobility  41

6 Conclusions  42

Appendix A Physics Implementation  44

Bibliography  48

List of Tables

4.1 Small move tree, long track with cars  29
4.2 5/7 branch move tree, cornered track  29
4.3 Small move tree, cornered track with cars  31
4.4 Large move tree, long track with cars  31
4.5 Large move tree, cornered track  32
4.6 Large move tree, cornered track with cars  33
4.7 Large move tree, sharp cornered track  34
4.8 Large move tree, final track  34

List of Figures

1.1 Motion Capture example  2
1.2 Mars Rover artist's impression  4
1.3 An image from GTR2, a game for the PC, that demonstrates simulated vehicles  5
2.1 Motion Graph example  9
3.1 The car  18
3.2 The car velocity comparison  21
3.3 Route tree expansion  22
3.4 Connecting Motion Clips  24
3.5 Collision detection  25
4.1 Small Move Tree, 5 and 7 branches  27
4.2 Long straight track  28
4.3 Cornered track  30
4.4 Large Move Tree  32
4.5 Sharp cornered track  33
4.6 Final track  35
4.7 Final track skid  36
5.1 Dense Motion Graph example  39
5.2 Dense Motion Graph example 2  39

Acknowledgements

I would like to thank Michiel van de Panne for considerable help when constructing this work, and a continual outpouring of ideas. It would not have been possible to undertake nor complete this work without his guidance. I would also like to thank Robert Bridson for being my second reader.

I would also like to thank Marco Monster of the now unfindable Monstrous Software for providing the code that implements the underlying car physics, as it was incredibly useful, and it is a shame that I cannot cite him directly. I also thank everyone who I did cite for allowing me to get up to speed in this area of research.

Finally I would like to thank my friends and family who have supported me throughout even when it was making me miserable, in particular Katayoon Kasaian who had to face the brunt of my difficulties.
Chapter 1

Introduction

The animation and simulation of autonomous vehicles is a challenging problem, and the demand for this to be done realistically has increased in recent years. The topic has foundations in animation, artificial intelligence and robotics. In particular, computer games have pushed the need for real-time synthesized animation. Autonomous vehicles will not look or act realistically unless they are based on some sort of physics model, which directly impacts the motions observed for high velocity systems. Self-driving vehicles are of interest to generate realistic simulations of any kind of traffic, and for safer automated roads where the reduction of human error could save lives.

In this introduction, we discuss why we would study vehicle motion and present the major goals of our work, along with the contributions that we have made. An outline of the remainder of this thesis concludes the chapter.

1.1 Motivation

1.1.1 Motion Control

Any system that is controlled, either simulated or real, to produce a desired motion can broadly be described as performing motion control. If we look at human and animal motion, we see incredibly complex systems that work with a natural grace that is very difficult to reproduce mechanically. We have barely reached the point where we can get a two-legged robot to walk in a graceful fashion, or place an item on a shelf, let alone reproduce the motion of a cat as it jumps from wall to wall, or something equally complex. The neuro-muscular systems in humans and animals are researched in the field of biomechanics, and we can try to apply these to our robotic designs. However, we might be more interested in the results than the implementation when studying in the field of animation, and so we can attempt to reproduce motions without necessarily modeling the complex underlying details.
There are many applications of motion control. We see robot arms in industries such as car manufacture. We need robots that can navigate dangerous environments such as deep underwater, or traversing the surface of other planets for research. In simulated systems, we have motion capture technology that has allowed us to reproduce human motion by attaching markers to various points on the body and recording their position in 3D space, as shown in Figure 1.1.

Figure 1.1: An actor has markers attached to his body, so that his movements can be recorded and then reproduced for animation. (Courtesy of Wikipedia.)

Motion capture data has allowed extremely lifelike animations to be produced in games, although it is more difficult to use in dynamic environments, such as when two players collide in a sports game. Dynamic collisions, such as the motion of bodies collapsing after death in a first-person shooter, are better modelled with ragdoll physics systems. Some recent systems have attempted to combine the data-driven approaches with physical simulation, as seen in Mandel [38].

1.2 Autonomous Vehicle Control

In this thesis our work is directly related to the field of autonomous vehicle control, for both real and simulated agents. The following two subsections look at vehicle research in both of these areas.

1.2.1 Real World Systems

Kanakia [26] writes that autonomous vehicles are being researched in the 2007 Defense Advanced Research Projects Agency (DARPA) Grand Challenge, where a car is sought that can navigate in a simulated urban environment for sixty miles in less than six hours, without human guidance. This has a $2 million top prize. For a previous challenge, a 132 mile route across the desert had to be navigated, with 5 finishers.
This new challenge poses new problems, as the car will now have to act reasonably in the face of other traffic. This could include a variety of new obstacles, such as curbs, holes, cars, bicycles and pedestrians, and entrants must also obey the traffic laws. A car like this could save lives, as it would have better reactions and senses than human drivers, and it would never need to stop due to fatigue. It could drop you off at school or work and then park itself. It could also affect the speed of traffic, as cars could communicate with each other to maintain a faster group speed. This vision is a long way from reality at present; however, cars are slowly becoming more autonomous, with systems such as anti-lock brakes, and we already have positioning systems such as GPS that can guide the driver.

Other recent automated vehicles include Roomba, a disc based robotic vacuum cleaner, and PackBot, a small tank-like battlefield robot that can climb stairs and recover when it falls over, as written by Clark [10]. PackBots have been used to explore enemy caves and to detect explosives. Clark [10] also writes about a current project called r-Gator, an unmanned jeep that can shuttle supplies to and from areas of combat.

It is not only on land where robots can perform tasks. They may even prove to be more useful for undersea operations, where humans find it difficult, and at high depths impossible, to penetrate and explore. Clark [10] tells us about the Autonomous Underwater Vehicles Laboratory (AUV Lab), which has a vision of filling the void of the ocean with robots. They developed the Odyssey class of submarine-like vessels into AUVs that survey offshore oil fields and assist the U.S. Navy with mine warfare and battlespace preparation.
The biggest challenge in this area is the provision of power, as most systems run on batteries. In the air we have aircraft that are intelligent, and that can perform automated take-off and landing. These can be more useful in areas where it is difficult for a land robot to enter, such as dense urban areas, or even simply over the sea. For many years guided missiles have effectively acted as autonomous airborne agents.

Figure 1.2: An artist's impression of what one of the Mars Rovers could look like on the surface of the planet. (Image courtesy of NASA.)

An impressive ongoing project involves two robots on Mars (see Figure 1.2). The two rovers, named 'Spirit' and 'Opportunity', touched down in 2004 after having travelled 311 million miles. They have been functioning well for over 2 years, moving slowly across the difficult Martian terrain with directions sent from Earth. Interesting discoveries have been made as a result, such as the suggestion from chlorine and bromine deposits that Mars at one time had a salty sea.

1.2.2 Simulated Systems

Simulated systems can be seen in many different areas. Many computer games now exhibit considerable realism of human and vehicle motion, as seen in the recent racing simulation GTR2 (Figure 1.3), created by Simbin Development [1]. It is a realistic simulation of the FIA GT Series Championship and features many different cars and tracks. It can even be attached to MoTec telemetry software, which records real life car performances on race tracks, or Track-IR, which can be attached to the head for panning around the cockpit. Game developers have been constantly trying to improve the physics model to make it as realistic as possible, and this version has improved tire physics at low speeds and better damage modelling.
It also has AI routines that mean that your opponents can be set to be tough or weak, conservative or aggressive, to tailor the game to your needs or ability level.

Figure 1.3: An image from GTR2, a game for the PC, that demonstrates simulated vehicles. (Permission of image use gained from www.strategyinformer.com)

Crash testing can be an extremely expensive process, and it need not be the only method used: crashes can also be simulated. Kirkpatrick [28] developed a crash simulation model for the Ford Crown Victoria. He conducted different simulated tests such as the front bumper rigid pole impact test, front door rigid pole impact test and the vehicle frame rigid wall impact test. The vehicle's structural geometry was measured and non-structural components were removed from the vehicle. The measured surfaces were then fed into a vehicle mesh generation program called TrueGrid. Simulations of the crashes (and video of real crash tests), such as a frontal impact into a wall, can be seen, and other similar work can be seen at Pennsylvania State University [55]. Simulation of this type is extremely useful, as the expense needed to run every type of test would be enormous, but this is offset by the fact that the simulation must be perfect for it to succeed.

Other simulations include training simulations for flying or driving. Combat situations for soldiers are also being simulated as a cheaper and easier alternative to combat training. The U.S. Army has funded development of its own 'game' using the Unreal game engine. It is described in Hachman [21] as 'an innovative, realistic computer game providing civilians with an inside perspective and a virtual role in today's premiere land force, the U.S. Army'.
Human motion can be seen in many games, movies and simulations, and there is every reason to believe that animated human motion will be in greater demand in the future, because it is an essential component of storytelling and of simulating real life situations.

1.3 Goals and Scope

The long-term goal of this work is to create a system, based on motion graphs, for online motion planning, that produces skilled driving behaviours for a dynamic model of a car in a constrained environment. It is a particular challenge to get the cars to produce highly dynamic behaviours such as skidding-into-turn behaviours when approaching sharp corners, which can help achieve the fastest speeds around a track. The techniques developed could be applied to cars but also to other dynamic vehicle behaviours, in computer games or for intelligent autonomous vehicles in the future. We wish to gain insight into the key issues in the creation of such behaviours, and to determine where we may need to improve upon and expand this work in the future. The success of our work will be measured on whether or not we can produce good solutions that can exploit dynamic behaviours such as skidding, while attempting to keep the computation time at a reasonable level.

1.4 Contributions

We demonstrate that a well-formed move tree or, equivalently, a motion graph, can be used to produce realistic steering behaviours on a variety of tracks. We show that an A* search algorithm is well suited to this task, and offer a number of ways that we could speed up the generation and tree searching for future work in this area. We have produced a number of smooth animations where our agent chooses the fastest way to reach its goal, be it through acceleration/deceleration around tricky obstacles, or a hard skidding turn into a corner after approaching at speed.
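The finite-horizon A* planning mentioned above can be illustrated with a minimal sketch. This is not the thesis implementation: the clip table, its displacements and durations, and the time-based heuristic are illustrative assumptions, and collision checking against track boundaries and other cars is omitted.

```python
import heapq
import math

# Hypothetical move-tree branches: each clip displaces the car by (dx, dy)
# and consumes some time. The values are illustrative, not the thesis data.
CLIPS = {
    "straight": (0.0, 10.0, 1.0),
    "left":     (-3.0, 9.0, 1.1),
    "right":    (3.0, 9.0, 1.1),
}
MAX_SPEED = 10.0  # top speed used by the admissible time-to-goal heuristic

def heuristic(pos, goal):
    # Optimistic remaining time: straight-line distance at top speed.
    return math.hypot(goal[0] - pos[0], goal[1] - pos[1]) / MAX_SPEED

def plan_first_clip(start, goal, horizon=4):
    """Expand clip sequences up to `horizon` deep with A*, then return the
    first clip of the route with the lowest estimated total time."""
    # Frontier entries: (f = g + h, g = elapsed time, position, route so far).
    frontier = [(heuristic(start, goal), 0.0, start, ())]
    best_f, best_route = float("inf"), ()
    while frontier:
        f, g, pos, route = heapq.heappop(frontier)
        if len(route) == horizon:      # reached the planning horizon
            if f < best_f:
                best_f, best_route = f, route
            continue
        for name, (dx, dy, dt) in CLIPS.items():
            npos = (pos[0] + dx, pos[1] + dy)
            ng = g + dt
            heapq.heappush(
                frontier,
                (ng + heuristic(npos, goal), ng, npos, route + (name,)),
            )
    return best_route[0] if best_route else None
```

In an online planner of this kind, only the returned first clip would be played before replanning from the car's new state; the heuristic never overestimates the remaining time, which keeps the search admissible within the horizon.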
1.5 Thesis Outline

In Chapter 2, we present related work in animation and motion planning. Chapter 3 formulates the problem that we wish to solve, and the mechanisms that we apply to solve it. Chapter 4 presents the results of this work along with a discussion of those results. Chapter 5 discusses how this work could be improved upon and extended in the future, including the automation of motion graph generation. Chapter 6 gives the conclusions that we can take from the work, and is followed by the bibliography and appendices.

Chapter 2

Related Work

This chapter reviews work on animation using motion graphs, and related topics.

2.1 Motion Graphs

Most human motions that we make can be described in terms of successive sequences of actions, many of which are similar in nature. Finding our way around a building could be seen as a list of different motions, such as walking forward, opening a door, climbing the stairs, etc. If we could collect each one of the motions needed to navigate and join them together, then we could create a stream of motion to navigate any building. Motion graphs were formalised by Kovar et al. [29].

A commonly used solution to acquire motions, particularly human motion, is motion capture. Motion capture technologies include mechanical armatures, magnetic sensors and multi-camera systems. These systems can accurately reproduce any motion that a real actor can perform. This is a good way to reproduce motion; however, it can be difficult to modify unless only minor changes are made, such as by motion warping in Witkin and Popović [59]. If there is no captured motion that is similar enough to a needed new motion, then the only choice is to collect more data, which can be expensive and time consuming.
The motion graph approach attempts to synthesize new streams of motion while retaining the quality of the original data. The motion graph itself is a data structure that shows where transitions can be made between various motion clips. It can be constructed from unstructured motion data by detecting where these transitions can safely be made. We can then select which paths we want to follow through the graph to produce a new animation, which is called a graph walk.

A motion graph is a directed graph where all edges correspond to clips of motion. Each node can be considered to be a choice point connecting these clips.

Figure 2.1: Two views of a motion graph.

In the top half of Figure 2.1 we can see 3 motion clips where each circle represents a frame in the clip. We can start with a trivial graph containing 2n nodes, one for the start and end of each motion clip. After computing where allowable transitions are, we can add nodes and arcs to the graph representing the points where we can transition from one clip to another. The transitions
can occur within the same starting clip, as shown by the curved arc at the top of Figure 2.1. The resulting graph, with the remaining non-transition frames abstracted away into its graph edges, is shown in the bottom half of Figure 2.1.

A motion graph requires a reasonable amount of connectivity between different nodes to provide compelling animation. It is also important to find transitions that will result in the highest quality motion. Smooth transitions between two motions typically require the use of motion blending, as seen in Park et al. [39]. Motion graphs have become popular because they can automatically process a large motion database into a useful graph structure. However, the kinematic nature of motion graphs means they may lead to unrealistic behaviour, because this type of motion model does not take forces into account in dynamic environments.

2.1.1 Finding Transitions

Building a good motion graph requires finding an appropriate measure for when a transition can be reasonably made. This distance metric must ensure that both the position and velocity of any relevant DOF are close to each other. For human motion this would include the position of one of the joints, while in our vehicle model it would be the car angle, position and velocities. Velocities must be considered, as illustrated by the example of a man walking forwards and backwards. There will be positions in both walks where they have a similar pose, but we would not want to transition between them, as we would have an abrupt and unrealistic change in direction. We also have to reorient the data to local frame coordinates. The data is recorded in global coordinates, but we need to capture the fact that a left turn while facing north is the same motion as a left turn while facing east, for example.

If we look at the methodology of Kovar et al. [29], if two frames Ai and Bj meet the threshold requirements for transition, then they can be blended by
2.2  M o r e D a t a D r i v e n A n i m a t i o n Techniques  Motion editing is the process of m a k i n g changes to motion data. Gleicher [18] showed i n d i v i d u a l clips of motion can be edited through retargeting, w h i c h maps motions between characters of different proportions on to a m o t i o n clip while retaining the core constraints. B r u d e r l i n and Williams[8] use signal processing to reshape the data.  .  <  M o t i o n synthesis can be achieved by taking a set of example motions and performing multi-target blends which results i n a continuous space of parametized motion. T h i s can be seen i n W i l e y and Hahn[57] w h i c h uses linear interpolat i o n and Rose et al.[47] which uses radial basis functions. These methods are concerned w i t h generating parameterizations of clips whereas our work is about creating a sequence of clips that can be used to control a vehicle w i t h known environmental constraints. Gleicher et al.[17] created a system w h i c h b u i l t motion graphs w i t h a simpler structure by automatically detecting frequently occurring poses, w i t h transitions appearing at those points, reducing the number of nodes. K i m et al.[22] took r h y t h m i c d a t a and separated it by using beat analysis, then uses k-means to cluster and a M a r k o v model to represent transition probabilities between clusters, meaning new motions could be synthesized that upheld the r h y t h m i c structure.  Lee and Lee[36] controlled motion synthesis i n real time  w i t h d y n a m i c programming, precomputing the o p t i m a l transition to take from any node given one of a finite set of goal configurations. We can also construct statistical models. P u l l e n and Bregler[42] use kernelbased probability distributions to synthesize new motion based on the properties of the example motion.  Bowden[5], G a l a t a et al.[16] and B r a n d and  12  Chapter 2. 
Related Work  Hertzmann[6] a l l construct abstract states which represent sets of poses. T r a n sition probabilities between states are used to drive m o t i o n synthesis. However these techniques can lose important details a n d end up w i t h a n unrealistic average of different motions. T h e problem w i t h using statistical models i n this case is that it can be very difficult to t r u l y map the statistics of h u m a n motion. Tanco and Hilton[53] use a H i d d e n M a r k o v M o d e l ( H M M ) and k-means to • clusters of m o t i o n segments. A m a x i m u m likelihood algorithm is used to find a sequence of connecting motion segments between a start and end pose. P u l l e n and Bregler[43] break m o t i o n capture animations into smaller pieces and use these to create new " f i l l - i n " motions. Other graph based work includes Lamouret and van de Panne[31], which discussed characters w i t h a low number of degrees of freedom ( D O F ) . Schodl and Essa[52] apply m o t i o n graphs to video images. James and Fatahalian[25] use physical simulation w i t h i n deformable objects.  Perlin[40] and P e r l i n and  Goldberg[41] uses rules and blending to connect procedurally generated' pieces of m o t i o n into.usable streams. Faloutsos et al.[13] use support vector machines to create m o t i o n sequences as compositions of actions generated from a set of physically based controllers.  These were m a i n l y concerned w i t h creating  i n d i v i d u a l transitions whereas we are planning long sequences of m o t i o n . In the context of games, m o t i o n graphs are often referred to as  move trees.  T h e p r i m a r y difference is that these are generated manually rather t h a n automatically, which is also seen i n Perlin[40] and Rose et al.[46]. T h i s thesis uses manually constructed move trees. 
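The move-tree and motion-graph structure described in this section can be sketched as a small data structure. The node names, clip names and frame labels below are hypothetical placeholders, not the graphs used in this thesis.

```python
# Edges carry motion clips; nodes are choice points between them.
# A graph walk picks an outgoing edge at each node and concatenates
# the frames of the chosen clips into one stream of motion.
graph = {
    "A": [("clip1", "B")],                  # from A we can only play clip1
    "B": [("clip2", "C"), ("clip3", "A")],  # B is a choice point: two edges
    "C": [("clip1", "B")],
}
clips = {
    "clip1": ["f1", "f2", "f3"],  # each frame would hold pose and velocity
    "clip2": ["f4", "f5"],
    "clip3": ["f6", "f7", "f8"],
}

def graph_walk(start, choices):
    """Follow a sequence of edge indices through the graph and return the
    concatenated frames of the traversed clips."""
    frames, node = [], start
    for i in choices:
        clip, node = graph[node][i]
        frames.extend(clips[clip])
    return frames
```

Taking edge 0 from A and then edge 1 from B, for instance, plays clip1 followed by clip3 and ends back at node A.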
Their construction could theoretically be automated, but our focus was their use in efficiently planning constrained motions rather than automatic graph construction.

We need to create smooth transitions between motions at transition points. Perlin [40] interpolates between clips to create a blend. Lee and Shin [37] define orientation filters that allow blending operations on rotational data. Rose et al. [47] preserve kinematic constraints and dynamic properties when creating transitions. Arikan and Forsyth [4] and Lee et al. [35] identify transition locations based on a distance metric. Perlin and Goldberg [41] use a linear blending method. Arikan and Forsyth [4], Pullen and Bregler [42] and Lee et al. [35] use displacement maps to achieve smooth transitions. Rose et al. [47] use a torque minimization algorithm. Smooth animations are created in this work by only using frames that are extremely close for comparison purposes, and they do not require any complex blending procedures.

2.3 Autonomous Behaviours and Planning Algorithms

The behaviour of flocks, herds and schools is discussed in Reynolds [45], as the BOIDS model. This allows the programming of large animal groups to act in support of a group goal, while also demonstrating realistic individual variability. Scripting the path of each individual member of the group is time consuming and in many cases downright tedious. The work of Reynolds [45] applies three local behavioural rules to each member of the group: collision avoidance, velocity matching and flock centering (staying close to nearby flockmates). Some of the most interesting motion occurs when the flock is forced to interact with and avoid objects in its environment.

As an extension of BOIDS, Go et al. [19] tackle the problem of animating the behaviour of vehicles within interactive simulations. However, individual vehicles generally do not have the same mentalities as flocks, and their dynamics mean that they have complicated control models and high velocities. They combine online search with steering behaviours, seen in [44], to guide the search function. For interactive games, speed is generally preferred to high accuracy, and so the motion must be generated rapidly. They focus on extending the steering behaviour control model and combining it with online path planning techniques, in an attempt to create more goal-driven synthesized vehicle animations. It is claimed to be suitable for not only space, water and land based vehicles, but also birds, bicycles or any other simulated entity with significant dynamics. Behaviours are designed in a manner that means they can be applied to the motions of multiple entities. The key components of this work are the interface for steering behaviours, a visually plausible control model, the pre-integration of dynamic trajectories to enable real time performance, and a search algorithm designed to operate with limited time and information.

Steering behaviours are denoted by the type of motion they produce, such as seeking or wandering. A major advantage is that no matter what input is given, the output is always a vector of desired velocity. These behaviours can be mathematically combined, which means that there is a weak dependence between the control model and the behaviour. However, combining behaviours can result in nonsense behaviour. They are also not well suited to nonholonomic (wheeled) vehicles with inherent drift in their system dynamics, as computed paths may not be representative of the actual path taken. Most planning methods currently assume full knowledge of the environment.
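The desired-velocity convention makes the combination step concrete. The behaviours and weights below are illustrative assumptions in the style of Reynolds-type steering behaviours, not code from the cited systems.

```python
import math

def seek(pos, target, max_speed=10.0):
    # Desired velocity points straight at the target at full speed.
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    d = math.hypot(dx, dy) or 1.0
    return (dx / d * max_speed, dy / d * max_speed)

def avoid(pos, obstacle, radius=5.0, max_speed=10.0):
    # Desired velocity pushes away from the obstacle, fading with distance.
    dx, dy = pos[0] - obstacle[0], pos[1] - obstacle[1]
    d = math.hypot(dx, dy) or 1.0
    strength = max(0.0, 1.0 - d / radius)
    return (dx / d * max_speed * strength, dy / d * max_speed * strength)

def combine(*weighted):
    # Weighted sum of desired-velocity vectors from several behaviours,
    # e.g. combine((1.0, v_seek), (2.0, v_avoid)).
    return (sum(w * v[0] for w, v in weighted),
            sum(w * v[1] for w, v in weighted))
```

Because every behaviour returns a plain velocity vector, combination is just vector arithmetic, which is exactly the weak coupling between behaviour and control model noted above; it is also why an ill-chosen weighting can sum to a nonsensical direction.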
A popular path planning method was developed by LaValle[33][34], called Rapidly-exploring Random Trees ( R R T s ) , that can be used for many types of d y n a m i c system. T h i s technique is successfully applied by Y a m a n e et al.[60], who synthesized animations of human m a n i p u l a t i o n tasks, such as p l a c i n g a box on a shelf, and also by B r u c e and Veloso[7] who implement a real-time robot replanning system. Reynolds [44] uses a different approach that uses local information. Steering behaviours use a time-local a p p r o x i m a t i o n of steering forces that depend on desired behaviors. Steering forces are efficient to compute, but behaviours can get stuck i n local m i n i m a or behaviour loops because they only consider local information. W i t k i n and Kass[58] globally optimize a set of control inputs subject to specified constraints'and defined o p t i m i z a t i o n criteria, van de P a n n e et al.[56] compute an o p t i m a l control policy for small environments and for objects w i t h simple dynamics using state space controllers. Grzeszczuk and Terzopoulos[20] use simulated d y n a m i c entities control spaces to create local behaviour controllers. Frazzoli et al.[15] and Feron et al.[14] use trim trajectories to control a he-  Chapter 2. Related Work licopter.  15  T h i s is a similar problem to our work i n terms of vehicle control,  but has further complexity from being i n a d y n a m i c environment, so we w i l l discuss this work i n more depth. T r i m trajectories are predefined manoeuvres that are invariant w i t h respect to some state variables. F o r example, they are useful because they allow a high level planner to efficiently reason about local state reachability.  T h e operation of a n autonomous vehicle i n a n unknown,  d y n a m i c a n d hostile environment is an extremely complex problem when the vehicle is required to use its full maneuvering capabilities. 
To deal with such a system, they decompose activities into a hierarchy and introduce control and decision layers. The vehicle dynamics are continuous while the decision-making systems are discrete, so the combined system is referred to as a hybrid system. Such a controller is responsible for both the generation and execution of a flight plan. To reduce complexity they base planning upon a quantization of the system dynamics, restricting the feasible system trajectories to a family of time-parametrized curves, which as a whole constitute a maneuver library. This reduces the search space for a solution, which is no longer a continuous space. Their system tries to capture the relevant characteristics of the vehicle dynamics and allow the creation of complex behaviours through the interconnection of primitives, to produce good approximations of optimal solutions. This piecewise assembly of trajectories is similar in essence to the system we propose. However, they track the actual trajectories that occur given the input trajectory primitives, to ensure that they remain close to each other in the presence of outside factors such as noise and unmodelled dynamics, whereas we directly assemble and play our stored animations. Our work is directly related to the field of vehicle control, for both real and simulated agents. Latombe[32] describes motion planning methods that use a trajectory-planning stage and a trajectory-tracking scheme. The scheme is meant to avoid obstacles and enforce the non-holonomic constraints of the vehicle. Bullo and Murray[9] evaluate various linear and non-linear designs for trajectory tracking. Divelbiss and Wen[12] looked at the problem of parallel parking non-holonomic vehicles that had between zero and two trailers. Trajectories are planned offline and a linear quadratic regulator tracks the trajectory.
Saffiotti[50] focuses on the control of autonomous robots and the presence of uncertainty in real-world environments. Rosenblatt[48] proposes a distributed architecture which combines information from different sources, each voting on the best way to complete their personal goals. Santana[51] aims to ensure safe local navigation for a robot. Kelly and Nagy[27] determine a control input in real time to bring a vehicle from a start point to a goal. Most of these are dependent upon knowing a world model. Other work is based upon policy search methods. For the truck backing-up problem, Koza[30] assumes knowledge of the global state of the truck and uses genetic programming. Hougen et al.[23] learn a controller on a real tractor-trailer robot, and give the self-organising neural network controller access to the angle of the last trailer with respect to the goal state. Hougen et al.[24] continue from this to look at the more difficult problem of backing up a double-trailer truck in simulation. Temporal Difference (TD) learning can be seen in Coulom[11], who created a policy in the now defunct Robot Automobile Racing Simulation (RARS), which was a race between programmers to produce the best car controller. Actions are learned based upon the car's position and velocity in relation to the local track features, and also the curvilinear distance from the start line. A newer version of the RARS championship can be seen in The Open Racing Car Simulator (TORCS)[54]. In Alton[2] and Alton and van de Panne[3], the motion of the vehicle is controlled by reinforcement learning and policy search. Two non-holonomic vehicle types are tested, a car and a truck with a trailer, with different control problems involving going forwards and backwards. They do not use a physics system to model car sliding as in our work.
A policy search framework is used to solve the motion control problem. The control policies use a nearest-neighbour function approximator (NNFA) to determine which policy will be chosen at any particular time point. A particular challenge is the design of the reward function, where it is hard to work out what weights to give to various factors such as energy efficiency, time efficiency, and closeness to goal. Other notions of optimality, such as the smoothness of trajectories, are rewarded implicitly, as those trajectories will tend to be the ones that make maximum progress.

Chapter 3

Vehicle Steering

In this chapter, we develop a technique for efficiently planning the dynamic motion of a physics-based car simulation around highly constrained dynamic environments.

3.1 Vehicle Model

Figure 3.1: The car.

The main problem we are trying to solve is that of driving a car around a track at speed using clips from a motion graph. This agent has a 6-dimensional state space:

S = [x, y, θ, ẋ, ẏ, θ̇]

where x, y are coordinates and θ is the angle the agent is facing, and ẋ, ẏ and θ̇ are their velocities. The agent has a steering angle as input, represented graphically by the direction the tires are pointing, which we will call s. The agent itself has a physics-based model for movement, which is affected by the weight of the agent, the size of the agent, etc. This takes into account a large number of variables, and constants such as drag, resistance and friction, in order to work out the position of the agent for the next frame. The agent itself is described by several constant parameters including front/rear axle distance, and also mass (in kg) and inertia (in kg·m²). In order to compute the position of the agent for the next frame, the velocity in world coordinates is transformed to the agent reference frame.
Next, the lateral force on the wheels is calculated. Forces and torque are applied to compute the current acceleration. Finally, the accelerations are integrated to obtain an updated velocity and position. The full implementation of the sliding physics is given in Appendix A.

3.2 Pseudocode Planning Algorithm

Our overall algorithm, which runs for each frame, is shown below. We discuss it in detail in the remainder of the chapter.

for (each frame) {
    if (current frame is a transition point) {
        initialize route tree and add unexpanded start node to list
        while (there are unexpanded nodes and node limit has not been reached) {
            select next node
            if (path between current node and next node does not collide with wall)
                expand node and add new unexpanded nodes to list
        }
        create a list of all possible paths from end points
        find the best path by heuristic
        if (best path is not current path) {
            set current path to best path
        }
    }
}

3.3 Data Generation

In order to develop a motion graph (or move tree), we first need to collect relevant motion data. A 'Logitech Dual Action' (i.e. PS2-style) control pad is used to interactively control turns of different curvature and speeds. The steering angle is mapped to the left analog control stick, and acceleration and braking are mapped to buttons.¹ These turns are all made using the physics model but there is no interaction with scene objects, i.e. no bouncing off walls, apart from direct collisions with walls which reduce velocities to zero. Many clips of animation were recorded of a user driving around tracks or with no track, to experiment with making soft and hard turns, and the best of these were selected to represent as many of the most useful types of movement and cornering as possible. Only clips that did not contain collisions were selected.
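The per-frame update described in Section 3.1 — transform world velocity into the car's frame, compute lateral wheel forces, apply forces and torque, then integrate — could be sketched as below. This is a minimal illustrative model, not the thesis's Appendix A implementation; the function name, parameter values (mass, inertia, cornering stiffness `c_lat`, drag) and the slip-angle force law are all assumptions.

```python
import math

def step(state, steer, throttle, dt=0.04,
         mass=1500.0, inertia=2000.0, lf=1.25, lr=1.25,
         c_lat=40000.0, drag=5.0):
    """One Euler integration step of a simplified planar car model.

    state = (x, y, theta, vx_world, vy_world, omega).
    All numeric parameters are illustrative defaults, not thesis values.
    """
    x, y, th, vxw, vyw, om = state
    # 1. Transform world-frame velocity into the agent reference frame.
    cos_t, sin_t = math.cos(th), math.sin(th)
    vx = cos_t * vxw + sin_t * vyw      # longitudinal
    vy = -sin_t * vxw + cos_t * vyw     # lateral
    # 2. Lateral tyre forces from front/rear slip angles.
    denom = max(abs(vx), 1.0)
    slip_f = math.atan2(vy + lf * om, denom) - steer
    slip_r = math.atan2(vy - lr * om, denom)
    f_lat_f = -c_lat * slip_f
    f_lat_r = -c_lat * slip_r
    # 3. Forces and torque give body-frame accelerations.
    ax = (throttle - drag * vx) / mass
    ay = (f_lat_f + f_lat_r) / mass
    alpha = (lf * f_lat_f - lr * f_lat_r) / inertia
    # 4. Integrate, then rotate velocity back to world coordinates.
    vx += ax * dt
    vy += ay * dt
    om += alpha * dt
    vxw = cos_t * vx - sin_t * vy
    vyw = sin_t * vx + cos_t * vy
    return (x + vxw * dt, y + vyw * dt, th + om * dt, vxw, vyw, om)
```

With zero steering the lateral forces vanish and the car coasts straight while drag bleeds off speed; a non-zero steering angle produces a front slip angle and hence a yaw torque.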
Clips were not doubled up using symmetry; each clip in use is of an actual left or right turn.

¹The right analog stick, or the up/down axis of the left analog stick, would have been viable alternatives.

3.4 Finding Similar Frames

In order to construct a motion graph from the data, we need to identify potential transition points, either manually or automatically. The automatic identification of transition points requires finding similar frames. Before we can compare two frames f1 and f2, we must rotate them so they are in the same coordinate system. We compare the velocities in the local coordinate frame of the car rather than in world coordinates, because the global position and global orientation are irrelevant. Next, we need to find a metric for frame comparison. There are various ways we could go about this, but the final goal is to come up with a single value that measures the similarity of the state or 'situation' of the car (Figure 3.2). We use v_x^f to describe the velocity of frame f in x, v_y^f to describe the velocity of frame f in y, and ω^f to describe the angular velocity of frame f, which gives the following function:

d = |v_x^1 − v_x^2| + |v_y^1 − v_y^2| + |ω^1 − ω^2|

Figure 3.2: The car velocity comparison.

We then use a threshold of d < ε to detect frames that we can mark as being sufficiently similar to support a motion transition. Using similar frames we can now begin construction of a route tree that gives possible paths for the agent to take. A route tree is a map of routes that the car can choose from based on its current location. It can be expanded to various search depths to give a richer selection of paths. A route tree represents a specific instantiation of the move tree or motion graph that enumerates all the paths forward from the current state.
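The metric above reduces each frame to its local-frame velocities (v_x, v_y, ω) and sums their absolute differences. A hedged sketch of how transition detection might be implemented follows; the function names and the `k_skip` guard against trivial matches between neighbouring frames are our own, not the thesis's code.

```python
def frame_distance(f1, f2):
    """Distance between two frames, each given as (vx, vy, omega) in the
    car's local coordinate frame; smaller means more similar."""
    return (abs(f1[0] - f2[0]) +
            abs(f1[1] - f2[1]) +
            abs(f1[2] - f2[2]))

def find_transitions(frames, eps, k_skip=25):
    """Return pairs (i, j) of frame indices whose distance is below eps.
    Frames within k_skip of each other are not compared, so nearly
    identical consecutive frames do not flood the graph with edges."""
    pairs = []
    for i in range(len(frames)):
        for j in range(i + k_skip, len(frames)):
            if frame_distance(frames[i], frames[j]) < eps:
                pairs.append((i, j))
    return pairs
```

Each returned pair marks a point where one clip could plausibly splice into another without a visible velocity discontinuity.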
3.5 Creating and Expanding Routes

This section explains how the route tree in this work is created and expanded. A route tree is created when there are multiple choices available for the next motion clip at the current frame. The next node to be expanded is chosen via an algorithm, which after some experimentation was chosen to be A* search. We use a finite search horizon expressed as a fixed planning time duration. A memory bound is also given to limit the total number of nodes searched for larger expansions.

Figure 3.3: A route tree enumerates possible future paths from a particular state. In this example, there are three choices at each decision point. The magenta line at the top right is a line from the selected best node to the current goal. The best path is shown in red.

3.5.1 A* Search

A* search is a widely used search algorithm based upon best-first search, as seen in Russell and Norvig[49]. Best-first search incorporates strategies that choose to expand the current 'best' node, where the metric used to define 'best' is given by a heuristic. A common example of a cost-to-go heuristic for motion planning is the straight-line distance, where the next node expanded is chosen to be the one that is currently closest to the goal. Just using cost-to-go can be inefficient, given that it ignores the ability to cull solution paths based on the cost-to-date. A better approach combines the straight-line distance h(n) with the cost to reach the node, g(n):

f(n) = g(n) + h(n)

This sum gives us an estimate of the cheapest solution that goes through node n. The straight-line distance-to-goal never overestimates the cost to reach the goal and therefore defines an admissible heuristic. The A* search therefore finds the optimal solution if run to completion.
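The bounded A* described in this section might be sketched as follows, using a min-heap for the open list ordered by f(n) = g(n) + h(n). The function names, the tiebreaker counter, and the fallback of returning the best partial path when the node limit is reached are illustrative assumptions, not the thesis's exact implementation.

```python
import heapq
import itertools

def a_star(start, neighbors, h, is_goal, node_limit=15000):
    """Best-first search ordered by f(n) = g(n) + h(n).

    neighbors(n) yields (cost, successor) pairs; h(n) is an admissible
    cost-to-go estimate such as straight-line distance to the goal.
    Returns the path to the first goal reached, or the most promising
    partial path if node_limit expansions occur first."""
    tie = itertools.count()  # tiebreaker so the heap never compares nodes
    open_heap = [(h(start), next(tie), 0.0, start, [start])]
    best = None              # (f, path) of the lowest-f node seen so far
    expanded = 0
    while open_heap and expanded < node_limit:
        f, _, g, node, path = heapq.heappop(open_heap)
        expanded += 1
        if is_goal(node):
            return path
        if best is None or f < best[0]:
            best = (f, path)
        for cost, succ in neighbors(node):
            g2 = g + cost
            heapq.heappush(
                open_heap,
                (g2 + h(succ), next(tie), g2, succ, path + [succ]))
    return best[1] if best else [start]
```

In the route-tree setting, a node would be a (clip, end-state) pair, neighbors would enumerate the clips playable from that state after a collision check, and h would estimate remaining travel time to the goal.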
In order to optionally constrain the search further, we can bound each search to a maximum number of nodes, meaning that for larger searches we may not achieve an optimal solution, but smaller searches can be performed in real time. The number of nodes expanded for each search is discussed further in the results section. In order to implement A* search efficiently, a data structure must be chosen that allows items to be placed in and removed from a priority queue quickly. A heap is ideally suited to this task, in this case a min-heap. A min-heap is partially ordered with the lowest value at the top. This allows us to store values in order and release the highest-priority (lowest-value) node when needed.

3.6 Route Selection and Clip Alignment

When the agent reaches a decision point in the route tree, it finds which one of the corresponding branches is on the best path to the goal, as determined by the A* search, and sets its new start frame accordingly. This frame must be translated and rotated into the agent's current coordinate frame. Given the current agent coordinate frame A, the agent coordinate frame at the beginning of the current clip A′, the new playback clip coordinate frame B′, and the current frame in that playback clip B, we can compute the agent's position as driven by the transformed motion clip. This is shown in Figure 3.4.

Figure 3.4: Motion clip B′ is rotated into place in order to provide the next frames of motion for car A.

3.7 Collision Detection

The agent needs knowledge of its environment in order to plan its motion. This is done by intersecting the potential paths, as defined by the route tree, with the environment.
Collision detection with 2D track walls is performed by comparing the agent's sides to every track or car obstacle line at multiple points within each motion clip. For a straight clip, only the lines between the start frame and the final frame are tested, but for a sharp turn up to three separate in-between frames are tested, as shown in Figure 3.5.

Figure 3.5: Collision detection tests, which are much more accurate given more frames. (a) shows detection between the two end points of a curve, which is less accurate than (b), which is given 2 extra frames.

Chapter 4

Results

This chapter describes the results of our work. First, a move tree with a 5-way branching factor is constructed that guarantees continuity of position. This move tree does not necessarily have smooth transitions in terms of forward velocity. Second, we develop a large move tree with smoother and more realistic transitions. Finally, we discuss the implications of these results. We use a goal location of x = 0, y = 100000, as measured in metres, where the xy plane represents the driving surface. It would also be possible to implement a dynamic goal to allow the agent to drive around circular-style tracks. We use the cost metric developed in Chapter 3.

4.1 Move Tree

Two small hand-constructed move trees are created from data recorded using the joypad. Figure 4.1 shows schematic views of the two trees. Roughly speaking, the trees consist of varying degrees of left and right turns. For simplicity of construction, velocities are not enforced to be perfectly continuous between successive animation clips. Lateral components of the velocities are very close by design and because of the car physics. The agent will thus not suddenly lurch sideways when entering a new animation.
However, it may speed up or slow down more quickly than true physics would allow. The larger move tree is the same as the first, except that it has two more branches allowing for slightly sharper turns. Each one of the clips could be used after the end of any previous clip, and they were all 25 frames long (1 second of the final movie).

Figure 4.1: This is a figure of the small move trees, with a) 5 and b) 7 branches, showing the possible transitions and approximate trajectories from the only possible state. Each clip is 25 frames in length.

The 5 and 7 branch move trees were tested against each other to see how they performed on a variety of tracks. We now describe the results on a variety of environments.

4.1.1 Long Straight Track with Other Cars

The straight track shown in Figure 4.2 is designed to test the ability to avoid other moving cars on a long straight road, using various search depths. The test is conducted with the 5 branch tree, which does not offer many choices for movement, but is useful on this style of track where most movements will be straight. The performance for various levels of 'look ahead' or depth was evaluated. The results of these tests, shown in Table 4.1, show the benefit of longer time-horizon planning.¹ Looking ahead one or two clips results in the car avoiding obstacles only at the last possible moment, and in both cases it collides quickly. Looking three or four clips ahead fared better, but the car still crashed, although it showed a greater degree of anticipation of future car movements. Looking ahead five clips (124 frames) allowed the car to avoid crashing during two minutes of driving.

¹See www.cs.ubc.ca/~andyadam for videos.

Figure 4.2: Long straight track.
The equally-spaced tilted rectangles on the track represent cars moving at constant speed that act as dynamic obstacles.

Whenever the agent does decide to turn, it makes only slight movements that allow it to get through the gaps among the cars at maximum speed. This does not prove that this planning depth is sufficient for all possible tracks and obstacle placements.

4.1.2 Cornered Track

The cornered track, as shown in Figure 4.3, is designed to test the ability to travel around a more interesting track with a variety of corner angles. This produced a number of interesting behaviours. Firstly, the test that plans only 2 clips ahead moves forward quickly, but only reacts when it reaches a wall. This is caused by only having a few different options for movement, with no fast yet slight turn available. The animation using a 4 clip planning horizon performs better. However, the search may result in a slower time to the goal despite searching further ahead, as a result of our choice to bound the node search to the first 15000 nodes encountered. This phenomenon repeats itself with the two deeper searches using the 5 branch tree, where the search of 6 clips and 15000 nodes performs much better than the search of 12 clips and 45000 nodes.

  depth (frames/clips)   time (secs)
  24/1                   10 sec crash
  49/2                   17 sec crash
  74/3                   57 sec crash
  99/4                   110 sec crash
  124/5                  no crash after 2 mins

Table 4.1: The results for the small 5 branch move tree on the long track with other cars.

  depth (frames/clips)   branch   node limit   time to finish line (secs)
  149/6                  5        15000        33
  299/12                 5        45000        39
  49/2                   7        15000        35.5
  99/4                   7        15000        31.5
  149/6                  7        15000        35

Table 4.2: The results for the 5 and 7 branch move trees on the cornered track.
4.1.3 Cornered Track with Other Cars

The same track as above is used, but with a collection of other cars that serve as moving obstacles. The majority of tests are conducted here, as this problem is the most interesting and difficult. A variety of tests were run with the 5 branch tree, but all of them end in collision, most of them in the same tricky area of track. The 7 branch tree with a planning horizon of 2 clips cannot avoid the cars in time and make it around the corner, and collides with a wall. As the planning depth increases, the agent manages to improve its performance, i.e. to get to the goal faster.

Figure 4.3: Cornered track. The car begins at the bottom and its goal is to drive towards the top.

Two tests are run at a depth of 6 clips, one with each planning step bounded to 15000 nodes and another bounded to 45000 nodes. Surprisingly, the 45000 node version was slower. The only reason for this can be that a more optimal (faster) decision earlier in the track results in it having to follow a slightly slower path later, perhaps getting held up in 'traffic'.

4.2 Large Move Tree

A more expansive move tree is constructed in order to produce animations with more realistic transitions while avoiding the need for blending. This is shown in Figure 4.4. The tree was given three main states from which a number of different turns could be performed - a Stop state, a Medium state and a Fast state. In order to make it easier to maintain a constant speed and make decisions about going fast or slow at any point, two in-between states were added between each main state. This allows the agent to speed up and slow down without having to commit to going all the way up to Fast and then back down to Medium, needing a
large section of straight track. The car can speed up to mf1 and then slow down to Medium again, gaining speed but not being stuck in one long acceleration or deceleration animation.

  depth (frames/clips)   branch   node limit   time (secs)
  149/6                  5        45000        crash
  199/8                  5        45000        crash
  299/12                 5        15000        crash
  299/12                 5        45000        crash
  49/2                   7        15000        crash
  99/4                   7        15000        41
  149/6                  7        15000        38
  299/12                 7        15000        33.5
  299/12                 7        45000        35

Table 4.3: The results for the small move trees on the cornered track with other cars.

Figure 4.4: A large move tree showing a total of seven speeds.

4.2.1 Long Straight Track with Other Cars

A test was run on the long track with other cars moving as obstacles. The agent tends to hug the wall, making slight speed adjustments and very slight turns to keep itself aligned with the gaps between the cars. This also shows that the agent is always attempting to go straight and does not randomly choose to turn for an unnecessary purpose.

  depth (frames)   node limit   time
  299              45000        still going after 2 min

Table 4.4: Results for the large move tree on the long track with other cars.

4.2.2 Cornered Track

One test was run on the cornered track to ensure it produced a good animation. The animation, though expectedly slower than those with the smaller tree, which was ignoring physics, was much smoother than those animations and also intelligently anticipated upcoming corners.

  depth (frames)   node limit   time to finish line (secs)
  49               45000        44

Table 4.5: Results for the large move tree on the cornered track.
4.2.3 Cornered Track with Other Cars

  depth (frames)   node limit   time to finish line (secs)
  400              15000        45.5
  400              45000        50
  400              250000       46
  700              250000       45.5

Table 4.6: Results for the large move tree on the cornered track with other cars.

Tests were run on the cornered track with other cars. With a 15000 node planning limit, a reasonable time and a smooth animation are produced, but at 45000 we see a slower-performing result. To show that this is not the result of a potential coding error, an additional test was run with a 250000 node planning limit. This produced a similar time to the first test. It seemed that the agent had to sit behind other cars occasionally, probably because it had deduced that it could not speed up and slow down in time to get around both the car and the following corner. All of the animations were realistic and smooth.

4.2.4 Sharp Cornered Track

Figure 4.5: Sharp cornered track.

Despite producing smooth animations on the track above, we have not given the system any need to use the hard slides provided within the move tree. In order to do this, a track (Figure 4.5) was created with long straight sections and sharp hard turns, which should ensure that the fastest (if not the only) way to get around the corner would be to skid into it, as shown in Figure 4.7.

  depth (frames)   node limit
  200              15000
  400              15000

Table 4.7: The results for the large move tree on the sharp cornered track.

The results proved successful. In all cases the agent used the skid to get around the corner. For the depth 200 version, the particular instantiation of the skid causes it to leave the corner slowly. This could perhaps be rectified by having hard skids that leave the agent at different angles at their conclusion. However, all of the skids allow the agent to reach high speed going into the corner and escape effectively.
4.2.5 Final Track

The final track (Figure 4.6) was produced to show all of the above elements. It contains normal turns, a sharp turn, and many obstacle cars driving in the large center area. Figure 4.7 shows the skid. For a better representation of the results, see the online videos.

  depth   node limit   time to finish line (secs)
  200     5000         48.5
  400     250000       51

Table 4.8: The results for the large move tree on the final track.

A small lookahead was used with a low node limit, and the animation was created in around 3-4 minutes, which results in around 5 seconds of computation per frame. Looking further ahead and taking much longer to process did not produce significantly better results in this case, but the number and closeness of opposition cars was then increased. The low node limit could not find a path through, but the larger search produced a smooth animation and path through the track, making it look easy. These final movies show the culmination of all of the above elements, demonstrating that they all work as one.

Figure 4.6: Final track.

Figure 4.7: A resulting skid on the final track.

4.3 Discussion

We have learned various things of interest from these tests. First, interesting local minima exist. Simply increasing the number of nodes we search does not automatically mean that we are going to achieve a better-performing result. This is because a slightly faster earlier route may yet slow the agent down at a later point. Despite this, we can still see from other examples that longer planning horizons are more likely to produce a good animation and in all cases reduce the possibility of a collision. There is also clear evidence that searching too far ahead while not searching enough nodes gives us poor results.
The way we structure our motion tree greatly affects the quality of animation we can produce, and it can be tailored to individual tracks. The 5 branch graph worked well on the straight track at avoiding other agents; however, it performs poorly on the curved track, which forced it to negotiate tight corners at exact angles. Adding just two more turns to the tree allows it to get around the track, and planning further ahead allows it to negotiate the track in a much faster time. While the 5 and 7 branch trees prove useful for testing aspects of move trees, the animations they produced are not smooth and do not conform to the physics constraints. Producing a move tree that ensures transitions are very close to each other in terms of velocities allows us to produce smoother animations and come closer to meeting the goals of this work, namely to produce animations conforming to a physical system. However, the computation time for these smaller trees is real time in some cases, as opposed to hours for the largest tests conducted. Most mid-range tests, which produce almost as good or occasionally better results than the really large tests, take around 10-20 minutes when using the large move tree, depending upon the number of track obstacles and whether the whole tree could be expanded at each step. Tight corridors resulted in much shorter computation times, as large sections of the motion become highly constrained. These computation times could be improved significantly by the implementation of some of the suggestions made in the next chapter.
When comparing the results produced to those produced by a human using a joypad, with practice it was possible for a human to get around all tracks, but it is very difficult to reproduce the dynamic skidding behaviours for tight corners, with virtually every attempt at a high-speed skid resulting in a crash into the wall, or ending up short of the corner. Similar problems occur when trying to avoid car obstacles, resulting in crashes or slow routes taken after twitch-based movements to avoid cars at the last second. In particular, humans could not reproduce the long sections of straight driving seen when the agent is driving on the straight track against other cars. The agent can be much more exact about its movements, whereas a person may get lucky on a tricky track but will usually require multiple attempts to even get through tricky areas at a decent speed. Overall, human control produced motions that were less smooth and were often unsuccessful, although perhaps a human could come close with a refined control system and sufficient practice. Two instances can be given where human control resulted in better animations. Firstly, this occurs when very simple tracks were given and much practice was made at cornering exactly to limits; secondly, when the agent had a lot of scope for a path to take and ended up in a suboptimal area later in the track, it is possible that a human could realise this mistake and choose a better route based upon their past experience. In the first case, the planning result depends on how rich the move tree is, or how well suited it is to that particular track and set of obstacles. The overall conclusion is that the agent-based animations are much more likely to succeed and produce nice smooth driving animations.
By using the methods in this thesis we can get the car to produce highly dynamic manoeuvres such as skidding behaviours in order to achieve the goal of navigating a track in minimal time.

Chapter 5

Discussion

In this chapter we discuss various limitations of our work and how the methods and results might be improved.

5.1 Graph Transition Selection

The move trees used to obtain our results were hand constructed. Developing good move trees is a time-consuming and painstaking process. Motion graphs can be used to automate this process. However, the use of motion graphs is potentially problematic for our problem in several ways. If we identify suitable transitions by comparing every frame with every other frame, we can end up with an extremely dense motion graph. This can happen whenever two similar motions occur in a clip. Figure 5.1 shows where connections are made for two frames n and m, but connections are also made at n+1 and m+1, n+2 and m+2, etc. Many of these connections can be removed without any significant loss of functionality. One choice is to not compare the frame with the next k_c frames that occur after it. This value can be adjusted by the user for each set of motion graphs to give optimal results. This problem can also occur when we have sections of very similar data. For example, if there are portions of the motion data that have the agent driving straight at a constant speed, we can end up with a huge increase in the number of connections between all the similar frames of the agent going straight (Figure 5.2). This can be tackled by not comparing against the next k_s frames that occur after the last similar frame that we have found. Once again, this value can be adjusted by the user to give the best results for the data they are
Motion clip 1 with K  Discussion  (g|-^§HQM§H^  c  K =5 c  •(C) = 1 frame  F i g u r e 5.1:  A dense, degenerate motion graph can arise from the occurrence of  highly similar motions.  using.  Motion clip 1  Motion clip 1 with K (b s  K =9 s  (ft) = 1 frame  F i g u r e 5.2:  5.2  A n o t h e r type of degeneracy i n m o t i o n graphs.  Graph Pruning  To avoid creating a motion graph that is overly dense, we need t o able t o strip away motions that provide us w i t h m i n i m a l or repeated animations.  A good  Chapter 5. Discussion  40  example of this would be where we end up many repeated connections where the agent is s i m p l y d r i v i n g straight,  ft may be possible to m a t c h a l l of the  closest frames to each other, but apply some sort of a l g o r i t h m that removes any a n i m a t i o n clips that add little to the overall possibilities of movement. Edges can also be eliminated when they are not part of the strongly connected components of the graph.  5.3  Improving A *  T h e heuristic for expanding nodes using A * could be improved by weighting in factors such as the agents current speed and angle. It w o u l d be possible to work out an approximate figure for the time the agent needs to t u r n , face the goal, and reach m a x i m u m velocity from any orientation/velocity c o m b i n a t i o n . T h i s may improve the order that nodes are expanded, and reduce the number of nodes that we need to expand in order to find an o p t i m a l or close to o p t i m a l solution.  5.4  Simplistic Goal Selection  A t the moment we employ a goal which s i m p l y tells the agent to m a x i m i z e its y value at any point. T h i s does not allow us to traverse looped tracks. It would be easy to implement a system that updates the goal based on the current corner the agent is at, but a more fluid system would allow the agent to set its own goal at any point.  
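The two pruning ideas above can be illustrated concretely: skipping the k_c frames that neighbour each frame when selecting transitions (Section 5.1), and keeping only edges inside the largest strongly connected component (Section 5.2). The sketch below is illustrative rather than the code used in this work; the frame representation, the distance function and the threshold are all placeholder assumptions:

```python
from collections import Counter

def find_transitions(frames, distance, threshold, k_c=5):
    """Connect frame i to frame j when they are similar, but never to
    the k_c frames on either side of i, which would otherwise create a
    dense, degenerate graph (Figure 5.1)."""
    edges = []
    for i in range(len(frames)):
        for j in range(len(frames)):
            if abs(i - j) <= k_c:          # skip the temporal neighbourhood
                continue
            if distance(frames[i], frames[j]) < threshold:
                edges.append((i, j))
    return edges

def largest_scc(nodes, edges):
    """Kosaraju's two-pass DFS; returns the node set of the largest
    strongly connected component.  Edges not internal to an SCC can be
    pruned, since following them risks stranding the agent."""
    adj = {n: [] for n in nodes}
    radj = {n: [] for n in nodes}
    for a, b in edges:
        adj[a].append(b)
        radj[b].append(a)
    seen, order = set(), []
    def forward(n):                        # first pass: record finish order
        seen.add(n)
        for m in adj[n]:
            if m not in seen:
                forward(m)
        order.append(n)
    for n in nodes:
        if n not in seen:
            forward(n)
    comp = {}
    def backward(n, root):                 # second pass on reversed edges
        comp[n] = root
        for m in radj[n]:
            if m not in comp:
                backward(m, root)
    for n in reversed(order):
        if n not in comp:
            backward(n, n)
    biggest, _ = Counter(comp.values()).most_common(1)[0]
    return {n for n in nodes if comp[n] == biggest}
```

For instance, with one-dimensional "frames" and absolute difference as the distance, find_transitions([0, 0, 0, 10, 10], lambda a, b: abs(a - b), 0.5, k_c=1) yields only the edges between frames 0 and 2; the adjacent duplicate pairs are skipped.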
5.5  Track Generation

All of the tracks used in this project were hand constructed, which was time consuming. An automation of this process would be preferable, as long as it could be done in a way that gave the user a reasonable level of control over what is produced, in terms of length, number of corners, sharpness of corners, etc.

5.6  Collision Detection

The collision detection algorithm could be significantly improved. The trajectory of the car is represented using piecewise line segments, and every such segment is tested against every single line segment representing the environment. This can be made significantly more efficient.

5.7  Additional Mobility

An interesting further application of this work would be to provide the vehicle with the ability to reverse. This would allow the agent to move through extremely tight sections of track and should greatly reduce the chance of the agent becoming trapped.

Chapter 6

Conclusions

We have demonstrated that a well-formed move tree based on motion clips derived from a physics-based system can be used to perform motion planning that exhibits considerable anticipation. The move tree is expanded and searched using A* search to produce realistic steering behaviours. We have provided results on various tracks using different search trees, node limits and planning horizons. We have produced a number of smooth animations where our agent demonstrates skilled dynamic behaviour in reaching its goal, be it through acceleration/deceleration around tricky obstacles, or a hard skidding turn into a corner after approaching at speed. The work can be repeated on general tracks and obstacles other than those tested upon here, and with any simple or complex, hand-constructed or automatically derived move tree.
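Returning to the collision-detection limitation noted in Section 5.6, the all-pairs segment test can be preceded by a broad phase that hashes wall segments into a uniform grid, so that each trajectory segment is tested only against walls in nearby cells. This is a hypothetical sketch, not the implementation used here; the point representation, the cell size, and the proper-intersection test (which ignores exactly touching endpoints) are all assumptions:

```python
from collections import defaultdict

def _cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def segments_intersect(p1, p2, p3, p4):
    """True when segment p1-p2 properly crosses segment p3-p4."""
    d1, d2 = _cross(p3, p4, p1), _cross(p3, p4, p2)
    d3, d4 = _cross(p1, p2, p3), _cross(p1, p2, p4)
    return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)

def _cells(seg, size):
    """Grid cells covered by the segment's bounding box (a superset)."""
    (x1, y1), (x2, y2) = seg
    for cx in range(int(min(x1, x2) // size), int(max(x1, x2) // size) + 1):
        for cy in range(int(min(y1, y2) // size), int(max(y1, y2) // size) + 1):
            yield (cx, cy)

class WallGrid:
    def __init__(self, walls, cell_size=10.0):
        self.size = cell_size
        self.buckets = defaultdict(list)
        for wall in walls:                    # broad phase: hash walls once
            for cell in _cells(wall, cell_size):
                self.buckets[cell].append(wall)

    def collides(self, seg):
        for cell in _cells(seg, self.size):   # test only walls in nearby cells
            for wall in self.buckets.get(cell, ()):
                if segments_intersect(seg[0], seg[1], wall[0], wall[1]):
                    return True
        return False
```

A wall spanning several cells is stored in each of them and may be tested more than once; for a sketch this redundancy is harmless, and the narrow-phase cost drops from all walls to only those sharing a cell with the query segment.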
It may also be useful for other physical systems that could base their movements upon motion graphs. It is not as suitable for environments with rugged terrain, where movement may be affected by the surface being traversed, but in a smooth open environment, including space, air or water, the ideas here should be just as applicable, and could be extended into three dimensions. As long as the move tree given to the agent is suited to the track, or rich in content, then on average the animations produced will be more skilled and better performing than those produced by human control. The most notable success of this work is the production of highly dynamic planned motions such as skidding behaviours, which to our knowledge have not been demonstrated in equivalent path-finding procedures. Planning and control of highly dynamic motions very much remains an open area of research in animation and robotics.

Appendix A

Physics Implementation

This is the implementation of the sliding physics.¹

// global constants
// These constants are arbitrary values, not realistic ones.
DRAG = 4.0 // factor for air resistance (drag)
RESISTANCE = 20.0 // factor for rolling resistance
CAR = -4.20 // cornering stiffness (rear)
CAF = -4.0 // cornering stiffness (front)
MAXGRIP = 3.0 // maximum (normalised) friction force, = diameter of friction circle
carb = 1 // distance from CG to front axle in metres
carc = 1 // idem to rear axle in metres
carwb = carb + carc // car wheelbase in metres
carm = 1500 // car mass in kg
cari = 1500 // car inertia in kg per metre

// global variables
gX, gY, gA // car position and angle
gvX, gvY, gvA // velocities of car
cars // steering angle
carthr, carbra // car throttle and brakes (throttle used but brakes not)
dt // delta t, time interval

// local variables
float vX, vY // velocity in car coordinates
float awcX, awcY // acceleration in world coordinates
float fX, fY // force
float rsX, rsY // resistance
float aX, aY // acceleration
float ftX, ftY // traction
float flatfX, flatfY, flatrX, flatrY // lateral forces
float ra // rotation angle
float ss // sideslip
float saf, sar // slip angle front and rear
float t // torque
float aa // angular acceleration
float sn, cs // sine and cosine of car angle
float ys // yawspeed
float w // weight

sn = sin( gA )
cs = cos( gA )

// x is to the front of the car, y is to the right, z is down

// transform velocity in world reference frame to velocity in car reference frame
vX = cs * gvY + sn * gvX
vY = -sn * gvY + cs * gvX

// Lateral force on wheels
// Resulting velocity of the wheels as result of the yaw rate of the car body
// v = yawrate * r, where r is the distance of the wheel to the CG (approx. half the wheelbase)
// yawrate (angular velocity) must be in rad/s
ys = carwb * 0.5 * gvA

if( vX == 0 ) ra = 0 else ra = atan2( ys, vX )

// Calculate the side slip angle of the car (a.k.a. beta)
if( vX == 0 ) ss = 0 else ss = atan2( vY, vX )

// Calculate slip angles for front and rear wheels (a.k.a. alpha)
saf = ss + ra - cars
sar = ss - ra

// weight per axle = half car mass times 1G (= 9.8 m/s^2)
w = carm * 9.8 * 0.5

// lateral force on front wheels = (Ca * slip angle), capped to friction circle, times load
flatfX = 0
flatfY = CAF * saf
flatfY = min( MAXGRIP, flatfY )
flatfY = max( -MAXGRIP, flatfY )
flatfY = flatfY * w

// lateral force on rear wheels
flatrX = 0
flatrY = CAR * sar
flatrY = min( MAXGRIP, flatrY )
flatrY = max( -MAXGRIP, flatrY )
flatrY = flatrY * w

// longitudinal force on rear wheels - very simple traction model
// sign( vX ) is vX / |vX|, taken as zero at rest
ftX = 100 * ( carthr - carbra * sign( vX ) )
ftY = 0

// Forces and torque on body
// drag and rolling resistance
rsX = -( RESISTANCE * vX + DRAG * vX * abs( vX ) )
rsY = -( RESISTANCE * vY + DRAG * vY * abs( vY ) )

// sum forces
fX = ftX + sin( cars ) * flatfX + flatrX + rsX
fY = ftY + cos( cars ) * flatfY + flatrY + rsY

// torque on body from lateral forces
t = carb * flatfY - carc * flatrY

// Acceleration
// Newton: F = m.a, therefore a = F / m
aX = fX / carm
aY = fY / carm
aa = t / cari

// Velocity and position
// transform acceleration from car reference frame to world reference frame
awcX = cs * aY + sn * aX
awcY = -sn * aY + cs * aX

// velocity is integrated acceleration
gvX = gvX + ( dt * awcX )
gvY = gvY + ( dt * awcY )

// position is integrated velocity
gX = gX + ( dt * gvX )
gY = gY + ( dt * gvY )

// Angular velocity and heading
// integrate angular acceleration to get angular velocity
gvA = gvA + ( dt * aa )

// integrate angular velocity to get angular orientation
gA = gA + ( dt * gvA )

¹ Originally developed by Marco Monster of Monstrous Software; unfortunately, since it was found, all references to it have disappeared and the website has shut down.
49  [8] A r m i n B r u d e r l i n a n d ' L a n c e W i l l i a m s . M o t i o n signal processing. In SIG-  GRAPH '95: Proceedings of the 22nd annual conference on Computer graphics and interactive techniques, pages 97-104, N e w Y o r k , N Y , U S A , 1995. A C M Press. [9] F . B u l l o and R . M u r r a y . E x p e r i m e n t a l comparison of trajectory trackers  • for a car w i t h trailers. In IFAC World Conference, pages J^.01-1,.12, 1996., 1996. [10] L a u r e n J . C l a r k .  Robots  serve humans on  land,  i n sea  and  air.  http://web.mit.edu/newsoffice/2005/roboticvehicles-0302.html.  [11] R e m i C o u l o m . Reinforcement Learning Using Neural Networks, with Applications to Motor Control. P h D thesis, Institut N a t i o n a l Polytechnique de Grenoble, 2002. [12] A . Divelbiss and J . T . W e n . Trajectory tracking control of a car trailer  system. In IEEE Transactions on Control Systems Technology, 5(3):269278, 1997. [13] Petros Faloutsos, M i c h i e l van de Panne, and D e m e t r i Terzopoulos. C o m posable controllers for physics-based character animation. In SIGGRAPH  '01: Proceedings of the 28th annual conference on Computer graphics and interactive techniques, pages 251-260, N e w Y o r k , N Y , U S A , 2001. A C M Press. [14] E . Feron, E . Frazzoli, and M . Dahleh. Real-time motion p l a n n i n g for agile  autonomous vehicles. In AIAA Conf. on Guidance, Navigation and Con-  trol,Denver, August 2000., 2000. [15] E . F r a z z o l i , , M . D a h l e h , and E . Feron.  Robust h y b r i d control for au-  tonomous vehicles motion planning: In Technical report, Laboratory for  Information and Decision Systems, Massachusetts Institute of Technology, Cambridge, MA, 1999. Technical report LIDS-P-2468., 1999.  50  [16] A p h r o d i t e G a l a t a , N e i l Johnson, and D a v i d Hogg. L e a r n i n g variable length markov models of behaviour. In Computer Vision and Image Understanding: CVIU, volume 81, pages 398-413, 2001. 
[17] M . Gleicher, H . Shin, L . K o v a r , and A . Jepsen.  Snap-together  motion:  Assembling run-time animation. In Proceedings of the 2003 Symposium on Interactive 3D Graphics, April 2003., 2003. [18] M i c h a e l Gleicher. Retargetting motion to new characters. In SIGGRAPH '98: Proceedings of the 25th annual conference on Computer graphics and interactive techniques, pages 33-42, N e w Y o r k , N Y , U S A , 1998. A C M Press. [19] Jared G o , T h u c D . V u , and James J . Kuffher. A u t o n o m o u s behaviors for interactive vehicle animations. Graph. Models, 68(2):90-112, 2006. [20] Radek Grzeszczuk and D e m e t r i Terzopoulos.  A u t o m a t e d learning of  muscle-actuated locomotion through control abstraction. Computer Graphics, 2 9 ( A n n u a l Conference Series):63-70, 1995. [21], M a r k  Hachman.  U . S . army  develops  free  combat  simulation.  http://findarticles.eom/p/articles/mi-zdext/is.200205/ai_n9517793. [22] Tae hoon K i m , Sang II P a r k , and Sung Y o n g Shin. R h y t h m i c - m o t i o n synthesis based on motion-beat analysis. ACM Trans. Graph., 22(3):392-401, 2003. [23] D . Hougen, J . Fischer, M . G i n i , and J . Slagle. Fast connectionist learning for trailer backing using a real robot. In IEEE Int'l Conf. on Robotics and Automation, pages 1917-1922, 1996. [24] D . Hougen, M . G i n i , and J . Slagle.  R a p i d , unsupervised  connectionist  learning for backing a robot w i t h two trailers. In Proceedings of the IEEE International Conference on Robotics and Automation, pp: 2950-2955. [25] D o u g L . James and K a y v o n Fatahalian. P r e c o m p u t i n g interactive d y n a m i c deformable scenes. ACM Trans. Graph., 22(3):879-887, 2003.  51  [26] R a h u l  Kanakia.  Robot  car  to  tackle  http://daily.stanford.edu/article/2006/10/11/robot  city  streets.  CarToTackleCityStreets  [27] A l o n z o K e l l y and B r y a n Nagy. 
Reactive nonholonomic trajectory generat i o n v i a parametric o p t i m a l control. In International Journal of Robotics Research, 22(7):583-601, 2003. [28] D r .  Steven  Kirkpatrick.  Ford  crown  victoria  crash  simulation.  http://www.arasvo.com/crown.victoria/crown_vic.htm. [29] L . K o v a r , M . Gleicher, and F . P i g h i n . M o t i o n graphs. In Proceedings of ACM SIGGRAPH 02, 2002., 2002. [30] J o h n R . K o z a .  A genetic approach to finding a controller to back up a  tractor-trailer truck.  In Proceedings of the 1992 American Control Con-  ference, volume III, pages 2307-2311, Evanston, I L , U S A , 1992. A m e r i c a n Automatic Control Council. [31] A l e x i s Lamouret and M i c h i e l van de Panne. M o t i o n synthesis by example. In Proceedings of the Eurographics workshop on Computer animation and simulation '96, pages 199-212, N e w Y o r k , N Y , U S A , 1996. Springer-Verlag New Y o r k , Inc. [32] Jean-Claude L a t o m b e . Robot Motion Planning. K l u w e r A c a d e m i c P u b l i s h ers, Norwell, M A , U S A , 1991. [33] S. L a V a l l e . R a p i d l y - e x p l o r i n g random trees: A new t o o l for p a t h planning. . In Computer Science Dept., Iowa State University., Oct. 1998., 1998. [34] S. L a V a l l e and J . K u . R a n d o m i z e d kinodynamic planning. In Proc. IEEE Int'l Conf. on Robotics and Automation, 1999. [35] J . Lee, J . C h a i , P. Reitsma, J . Hodgins, and N . P o l l a r d . Interactive control of avatars animated w i t h human motion data. In Proc. SIGGRAPH 2002., 2002.  52 [36] Jehee Lee and K a n g H o o n Lee. P r e c o m p u t i n g avatar behavior from human motion data.  In SCA  '04:  Proceedings of the  2004  ACM  GRAPH/'Eurographics symposium on Computer animation, pages 79-87, N e w Y o r k , N Y , U S A , 2004. A C M Press. [37] Jehee Lee and Sung Y o n g Shin. A hierarchical approach to interactive motion editing for human-like figures. 
In SIGGRAPH '99: Proceedings of the 26th annual conference on Computer graphics and interactive techniques, pages 39-48, N e w Y o r k , N Y , U S A , 1999. A C M P r e s s / A d d i s o n - W e s l e y P u b lishing C o . [38] M i c h a e l J . M a n d e l . Versatile and interactive v i r t u a l humans: H y b r i d use of data-driven and dynamics-based motion synthesis. M a s t e r ' s thesis, 2004. [39] Sang II P a r k , H y u n J o o n Shin, Tae H o o n K i m , and S u n g Y o n g S h i n . O n line motion blending for real-time locomotion generation: Research articles. Comput. Animat. Virtual Worlds, 15(3-4):125-138, 2004. [40] K e n P e r l i n . R e a l time responsive a n i m a t i o n w i t h personality. IEEE Transactions on Visualization and Computer Graphics, 1(1):5—15, 1995. [41] K e n P e r l i n and A t h o m a s Goldberg. Improv: A system for scripting interactive actors i n v i r t u a l worlds. Computer Graphics, 3 0 ( A n n u a l Conference Series) :205-216, 1996. [42] K a t h e r i n e P u l l e n and C h r i s t o p h Bregler. A n i m a t i n g by multi-level sampling. In CA '00: Proceedings of the Computer Animation, page 36, W a s h ington, D C , U S A , 2000. I E E E C o m p u t e r Society, [43] K a t h e r i n e P u l l e n and C h r i s t o p h Bregler. M o t i o n capture assisted animation: t e x t u r i n g and synthesis. In SIGGRAPH '02: Proceedings of the 29th annual conference on Computer graphics and interactive techniques, pages 501-508, N e w Y o r k , N Y , U S A , 2002. A C M Press. [44] C r a i g Reynolds. Steering behaviors for autonomous characters. Developers Conference 1999, 1999.  In Game  SIG-  53  [45] C r a i g W . Reynolds. Flocks, herds, and schools: A distributed behavioral model. Computer Graphics, 21(4):25-34, 1987. [46] Charles Rose, M i c h a e l F . Cohen, and B o b b y Bodenheimer.  Verbs and  adverbs: M u l t i d i m e n s i o n a l motion interpolation. 
IEEE Computer Graphics and Applications, 18(5):32-41, /1998. I [47] Charles Rose, B r i a n Guenter, B o b b y Bodenheimer, and M i c h a e l F . C o h e n . Efficient generation of motion transitions using spacetime constraints.  In  SIGGRAPH '96: Proceedings of the 23rd annual conference on Computer graphics and interactive techniques, pages 147-154, N e w Y o r k , N Y , U S A , 1996. A C M Press. [48] J . K . Rosenblatt. D A M N : A distributed architecture for mobile navigation. In Proc. of the AAAI Spring Symp. on Lessons Learned from Implememted Software Architectures for Physical Agents, Stanford, C A , f995. [49] Stuart J . Russell and Peter N o r v i g . Artificial Intelligence: A Modern Approach. Pearson E d u c a t i o n , 2003. [50] Alessandro Saffiotti. H a n d l i n g uncertainty i n control of autonomous robots. Lecture Notes in Computer Science, 1600:381, 1999. [51] Pedro Santana.  Survival kit: A b o t t o m layer for robot navigation, cite-  seer.ist.psu.edu/article/santana05survival.html. [52] A r n o Schodl and Irfan A . Essa. C o n t r o l l e d animation of video sprites, fn SCA '02: Proceedings ofthe 2002 ACM SIGGRAPH/Eurographics symposium on Computer animation, pages 121-127, N e w Y o r k , N Y , U S A , 2002. A C M Press. [53] L . M . Tanco and A . H i l t o n . Realistic synthesis of novel h u m a n movements from a database of m o t i o n capture examples. In HUMO '00: Proceedings of the Workshop on Human Motion (HUMO'00), page 137, W a s h i n g t o n , D C , U S A , 2000. f E E E C o m p u t e r Society.  54  [54] T O R C S .  http://torcs.sourceforge.net.  [55] Pennsylvania State University. V i r t u a l crash and tire simulation laboratory. http://www.arl.psu.edu/capabilities/mm_vehicledynmcs_crashlab.html. [56] M i c h i e l van de Panne, Eugene F i u m e , and Zvonko Vranesic. Reusable motion synthesis using state-space controllers. 
In SIGGRAPH '90: Proceedings of the 17th annual conference on Computer graphics and interactive techniques, pages 225-234, N e w Y o r k , N Y , U S A , 1990. A C M Press. [57] Douglas J . W i l e y and James K . H a h n . Interpolation synthesis of articulated figure motion. IEEE Computer Graphics and Applications, 17(6):39-45, / ' 1997. [58] A n d r e w W i t k i n and M i c h a e l Kass.  Spacetime constraints.  Computer  Graphics, 22(4): 159-168, 1988. [59] A n d r e w W i t k i n and Zoran P o p o v i c . M o t i o n warping. Computer Graphics, 2 9 ( A n n u a l Conference Series): 105-108, 1995. [60] K a t s u Yamane, James J . Kuffner, and Jessica K . Hodgins. Synthesizing animations of human m a n i p u l a t i o n tasks. ACM Trans. Graph., 23(3):532539, 2004.  
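The pseudocode of Appendix A condenses into a short runnable Python transcription, given here as a sanity check rather than as the original implementation: variable names are shortened, and the vX/|vX| term in the traction model is replaced by a sign function so the step is well defined at rest.

```python
import math

# Constants from the Appendix A listing (arbitrary values, not realistic ones)
DRAG, RESISTANCE = 4.0, 20.0
CA_R, CA_F, MAX_GRIP = -4.20, -4.0, 3.0
B = C = 1.0                      # CG-to-axle distances in metres; wheelbase = B + C
MASS, INERTIA = 1500.0, 1500.0

def physics_step(state, steer, throttle, brake, dt):
    """One integration step; state = (x, y, angle, vx, vy, vangular),
    with positions and velocities in world coordinates as in the listing."""
    gx, gy, ga, gvx, gvy, gva = state
    sn, cs = math.sin(ga), math.cos(ga)
    vx = cs * gvy + sn * gvx                 # world -> car frame (x is forward)
    vy = -sn * gvy + cs * gvx
    ys = (B + C) * 0.5 * gva                 # wheel velocity due to yaw rate
    ra = math.atan2(ys, vx) if vx != 0 else 0.0
    ss = math.atan2(vy, vx) if vx != 0 else 0.0   # side-slip angle (beta)
    saf, sar = ss + ra - steer, ss - ra           # slip angles (alpha)
    w = MASS * 9.8 * 0.5                          # weight per axle
    flatf_y = max(-MAX_GRIP, min(MAX_GRIP, CA_F * saf)) * w
    flatr_y = max(-MAX_GRIP, min(MAX_GRIP, CA_R * sar)) * w
    sgn = (vx > 0) - (vx < 0)
    ft_x = 100.0 * (throttle - brake * sgn)       # very simple traction model
    rs_x = -(RESISTANCE * vx + DRAG * vx * abs(vx))
    rs_y = -(RESISTANCE * vy + DRAG * vy * abs(vy))
    fx = ft_x + rs_x                              # lateral x-components are zero
    fy = math.cos(steer) * flatf_y + flatr_y + rs_y
    aa = (B * flatf_y - C * flatr_y) / INERTIA    # torque / inertia
    ax, ay = fx / MASS, fy / MASS
    awc_x = cs * ay + sn * ax                     # car -> world frame
    awc_y = -sn * ay + cs * ax
    gvx, gvy = gvx + dt * awc_x, gvy + dt * awc_y
    gx, gy = gx + dt * gvx, gy + dt * gvy
    gva = gva + dt * aa
    ga = ga + dt * gva
    return (gx, gy, ga, gvx, gvy, gva)
```

With this frame convention, full throttle from rest with zero steering accelerates the car along the world y axis (the car's forward axis at heading zero), while applying a steering angle when moving produces a yaw moment through the front-wheel lateral force.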
