UBC Theses and Dissertations

Style exploration and generalization for character animation Agrawal, Shailen 2015

Style Exploration and Generalization for Character Animation

by

Shailen Agrawal

B.Tech., Indian Institute of Technology Delhi, 2008
M.Tech., Indian Institute of Technology Delhi, 2009

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in The Faculty of Graduate and Postdoctoral Studies (Computer Science)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

December 2015

© Shailen Agrawal 2015

Abstract

Believable character animation arises from a well-orchestrated performance by a digital character. Various techniques have been developed to help drive this performance in an effort to create believable character animations. However, automatic style exploration and generalization from motion data are still open problems. We tackle several different aspects of the motion generation problem which aim to advance the state of the art in the areas of style exploration and generalization.

First, we describe a novel optimization framework that produces a diverse range of motions for physics-based characters for tasks such as jumps, flips, and walks. This stands in contrast to the more common use of optimization to produce a single optimal motion. The solutions can be optimized to achieve motion diversity or diversity in the proportions of the simulated characters. Exploration of the style of task achievement for physics-based character animation can be performed automatically by exploiting "null spaces" defined by the task.

Second, we perform automatic style generalization by generalizing a controller for varying degrees of task achievement for a specified task. We describe an exploratory approach which explores trade-offs between competing objectives for a specified task. Pareto-optimality can be used to explore various degrees of task achievement for a given style of physics-based character animation.
We describe our algorithms for computing a set of controllers that span the pareto-optimal front for jumping motions, which explore the trade-off between effort and jump height. We also develop supernatural jump controllers through the optimized introduction of external forces.

Third, we develop a data-driven approach to model sub-steps, such as sliding foot pivots and foot shuffling. These sub-steps are often an integral component of the style observed in task-specific locomotion. We present a model for generating these sub-steps via a foot step planning algorithm which is then used to generate full body motion. The system is able to generalize the style observed in task-specific locomotion to novel scenarios.

Preface

Versions of Chapter 3 have been published in the following:

• Proceedings of the 12th ACM SIGGRAPH/Eurographics Symposium on Computer Animation, SCA '13 [3] (Best paper honorable mention).
• IEEE Transactions on Visualization and Computer Graphics [4] (Extended journal version).

The ideas originated in discussions between myself and Michiel van de Panne. I conducted most of the implementation and testing. Shuo Shen helped perform tests for a 2D toy example and helped run optimizations. Shuo Shen also helped to write scripts for better organizing the optimization runs. Michiel van de Panne and I contributed to the writing of the manuscript.

A version of Chapter 4 has been published in the following:

• Proceedings of Motion on Games, MIG '13 [5] (Best paper award).

The ideas originated in discussions between myself and Michiel van de Panne. I conducted the implementation and testing and contributed to the writing of the manuscript. Michiel van de Panne provided guidance and insights during the implementation phase, and also contributed to the writing of the manuscript.

A version of Chapter 5 is in submission [in preparation]. The ideas originated in discussions between myself and Michiel van de Panne. I implemented the system and worked on the writing for the manuscript.
Michiel van de Panne contributed with insights and ideas during the implementation, and helped with the writing of the manuscript.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
Acknowledgements
1 Introduction
  1.1 Overview of Contributions
  1.2 Style Exploration using Physics-based Character Animation
  1.3 Style Generalization using Pareto-Optimal Control for Physics-based Characters
  1.4 Style Generalization from Motion Capture Data
2 Related Work
  2.1 Physics-based Character Control
  2.2 Kinematic Character Control
3 Diverse Motions and Character Shapes for Simulated Skills
  3.1 Introduction
  3.2 Related Work
  3.3 Diversity Optimization
  3.4 Distance Metrics
    3.4.1 Motion Distance Metric
    3.4.2 Shape Distance Metric
  3.5 Character Models
  3.6 Results for Motion Variations
    3.6.1 Motion Tasks
    3.6.2 Control over Magnitude of Motion Variation
  3.7 Results for Shape Variations
  3.8 Discussion and Conclusions
4 Pareto Optimal Control for Natural and Supernatural Motions
  4.1 Introduction
  4.2 Related Work
  4.3 Controller and Task Representation
    4.3.1 Controller phases
    4.3.2 Task Representation
    4.3.3 Natural and Supernatural Motion
  4.4 Optimization Framework
    4.4.1 Binned Single Objective Optimization (BSOO)
    4.4.2 Multi-Objective Optimization for Discovering the Pareto-Optimal Front
    4.4.3 Re-sampled Multi Objective Optimization (RMOO)
  4.5 Results
  4.6 Conclusion
5 Task Based Locomotion
  5.1 Introduction
  5.2 Overview
  5.3 Template-based Footstep Plans
    5.3.1 Phases, Steps, and Templates
    5.3.2 Template Retrieval
  5.4 Optimization
    5.4.1 Data Prior Objective
    5.4.2 Smooth Step Objective
    5.4.3 Average Feet Orientation Objective
    5.4.4 Distance From Goal Objective
    5.4.5 Average Feet Location Objective
  5.5 Full Body Motion Generation
  5.6 Results
    5.6.1 Task Categories
    5.6.2 Task Effort
  5.7 Discussion
  5.8 Conclusion
6 Conclusion
  6.1 Summary and Contributions
  6.2 General Discussion and Future Directions
    6.2.1 Crowd Sourcing Controllers
    6.2.2 Optimization
    6.2.3 Discrete and Continuous Planning
    6.2.4 Physics-based Characters in Games and Visualizations
Bibliography

List of Tables

3.1 Optimization allowing for changes in character shape. The results provide the mean and standard deviation for each type of metric when optimized for motion diversity (MD, row 1) and shape diversity (SD, row 2).
4.1 Weights used for the internal joint effort metric
4.2 Results using RMOO
4.3 Results using BSOO
5.1 Foot Step Strategies
5.2 Weights for objective functions.
5.3 Description of task categories. The motion capture setups are described in Figure 5.13.

List of Figures

1.1 Elements shared between contributions.
1.2 Input-Output Diagram for Chapter 3.
1.3 Input-Output Diagram for Chapter 4.
1.4 Input-Output Diagram for Chapter 5.
2.1 Physics-based character animation examples [20] (used with permission).
2.2 Closed loop motion control.
2.3 Illustration of an actual optimization run with covariance matrix adaptation on a simple two-dimensional problem. The spherical optimization landscape is depicted with solid lines of equal f-values. The population (dots) is much larger than necessary, but clearly shows how the distribution of the population (dotted line) changes during the optimization. On this simple problem, the population concentrates over the global optimum within a few generations. Source: Wikipedia (public domain image).
2.4 Motion capture suit and cameras used for capturing motion data.
2.5 Foot step planning algorithm being tested on Honda ASIMO [1] (used with permission).
2.6 A generative model for identity and style [50] (used with permission).
2.7 Probabilistic data-driven model for animation generation [12] (used with permission).
3.1 Diverse Motion Variations for a Forward Jump.
3.2 Abstract view of diversity optimization.
3.3 Effect of K on diversity optimization in a 2D domain with a sum-of-squared-distances metric. All 4 synthesized solution points begin at the reference point shown in red.
3.4 Sampled Character Shapes
3.5 Results for 2D motions (a,b,c,d,e) and 3D motions (f,g,h,i). For each task we show the input motion, M0, on the first row and the four synthesized variations, M1–M4, on the remaining rows.
3.6 Two methods for controlling the degree of diversity. (a) The figures in light blue, orange, pink, green, and blue show results at points that are 0, 25, 50, 75, and 100% along the optimization path. (b) The orange figures are significantly stronger than the blue figure.
3.7 Optimization results that allow for changing character proportions. Motions can be optimized for motion diversity (a) or shape diversity (b). For each task we show the input motion, M0, on the first row and the four synthesized variations, M1–M4, on the remaining rows.
3.8 Optimization results that allow for changing character proportions. Motions can be optimized for motion diversity (a) or shape diversity (b). For each task we show the input motion, M0, on the first row and the four synthesized variations, M1–M4, on the remaining rows.
4.1 Pareto-Optimal Controllers for Standing Jump. Each character shown represents the pose at the peak height for respective controllers lying on the pareto-optimal front. Brown characters represent natural jumps whereas blue controllers represent supernatural jumps.
4.2 Motion control phases
4.3 (a) A plot of COM velocity vs time. The red rectangle indicates the region where external forces are active. (b) The sudden jump in gravity occurs at take-off. The red rectangle highlights the supernatural region.
4.4 Binned single objective optimizations (BSOO). Here, f1 is the peak height of the jump and f2 is f_so, which is defined in Eq. 4.5.
4.5 Hypervolume sorting. The blue point is a reference point chosen to be worse off than all possible samples in all objectives. The red regions show the area (Lebesgue measure in 2D) of rectangles (hypercuboids in 2D) specified by a sample on the pareto-optimal front and the reference point. The green region denotes the area contribution (hypervolume contribution in 2D) of the green non-dominated sample on the pareto-optimal front.
4.6 Comparison between re-sampling strategies
4.7 The effect of a large penalty weight w_SE but no scaling on the pareto-optimal front distribution. Only two controllers are produced in the natural regime of the pareto-optimal front. The controllers on the pareto-optimal front for a height > 2.4 m use an external force assist.
4.8 Plots of samples generated from BSOO and plots of pareto-optimal fronts obtained from RMOO
4.9 Natural and supernatural motions. The snapshots of the motions are created by capturing the pose at the beginning of each phase. The snapshots for each of the standing in-place jumps are offset horizontally for visualization purposes.
5.1 Task-specific locomotion involving writing on a whiteboard, moving a box, and sitting on a box. The motion exhibits side-stepping, heel pivots, foot pivots, turns, and steps.
5.2 System Overview.
5.3 Footstep plans for various task entries and exits.
5.4 Motion phases and stepping strategies for several instances of writing-task transitions.
5.5 A state diagram showing possible transitions between various footstep styles. The color coding used here for each footstep style is used in the results in the rest of this chapter and in the video.
5.6 Reconstructing task-specific footstep templates. The most suitable template is selected for the given task phase based on similarity of the transition.
5.7 Template Selection Mechanism. A writing task exit and entry template selection scenario is shown above. Task categories can be different for entry and exit template selection in a more general case.
5.8 Choice of the best pair of templates for entry and exit from the k most suitable footstep templates.
5.9 Comparison of an unoptimized task-specific footstep plan with an optimized plan. The character is directed to exit from a writing task on the left, represented by the sphere, and sit on a box, located on the right.
5.10 Optimization phases.
5.11 Feet trajectory by warping the closest motion segment from data. Grey footsteps and curves represent the original swing foot trajectory. Black footsteps and curves represent the warped swing foot trajectory. Root motion is also warped using a transformation of the swing foot warp along with a transformation to match the incoming root trajectory.
5.12 The IK system takes feet, root and task related transforms as end effectors. The algorithm is warm started with a data-driven pose.
5.13 Representative setups for recording various entry and exit strategies.
5.14 Task Categories.
5.15 A comparison of some of the different strategies for entering a low effort task depending on the relative location of the next task.
5.16 Effect of character anthropometry on the synthesized task-specific foot step plan.

Acknowledgements

I would like to thank those who have walked with me on this amazing journey.
My special appreciation goes to my respected supervisory committee, namely Dr. Michiel van de Panne, Dr. Dinesh Pai, and Dr. Robert J. Woodham, for their continuous guidance and intellectual inspiration. It has been my honor to work with these experts in the field of modeling character motion. I cannot imagine a better mentor than Dr. Michiel van de Panne, my research supervisor. His intellectual rigor is inspiring, as is his passion and energy for doing research. His mentorship has been multifaceted; it included not only providing thought-provoking comments on my work and teaching me about research process and methodology, but also introducing me to the various applications of virtual characters in a broader sense. I would like to thank Dr. Taku Komura, the external examiner, for his insightful comments, questions and feedback.

Special thanks go to my friends Vinita Banthia, Ajith & Monika Chandran, Prateek Chopra, Tom Hazelton, Eason Hu, Mohit Law, Li Li, Carmen Liang, Libin Liu, Henry Li, Owen Lo, Yuki Ohsawa, Cihan Okay, Rohit Pandey, Aaron Posehn, I-Chao Shen, Shuo Shen, Jyoti Singh, Ying-Li Ingrid Tsai and Shirish Upadhyay for providing both company and valuable insights. I would like to thank Libin Liu and Shuo Shen for fun discussions about physics-based and kinematic character animation and their applications. I would also like to thank Matthew Arnold and Vanessa Kong for taking the time to help during various motion capture sessions. I hope the joys, laughs, surprises, and much more will continue in our professional and personal interactions.
I am grateful for all the help and support I received from Sheldon Andrews with my work with the Vortex physics engine. Also, many thanks to Kimberly Voll for getting me excited about the world of video game development and for organizing many game programming events.

Last but not least, I would especially like to thank my parents Brahmanand and Renu Aggarwal, brother Saurabh Agrawal, sister-in-law Neha Agrawal, partner Vanessa Kong, relatives and friends for their unconditional love, support, and everything they have done for me. I cannot imagine a life without their love.

Chapter 1

Introduction

Humans have an intrinsic desire for expression, whether through verbalization, gestures, or the use of expressive media such as music and painting. Movies and games have also risen in popularity as a means of entertainment and expression. Visualization is another form of expression, and it is being extensively used for medical and engineering purposes.

Animation is an integral component of games, movies and visualizations. Flip-book and stop-motion animations were some of the first experiments with making imagery move. These traditional animation techniques still continue to be widely used for both training and artistic purposes. As technology advanced, the use of computers to create animation became more prevalent. This has led to the development of software tools to assist animators in their pursuit of the art of making images come to life.

Although the use of digital tools for creating animations has become more prevalent over time, many of the traditional animation principles from the pen and paper paradigm have carried over to this digital domain. However, this transition has not come without a cost. A significant effort is now needed to learn to use these new tools in order to create natural looking and believable animations. Currently, creating digital animations in a production setting is still a labor intensive process.
It usually requires teams of highly trained artists to each spend hundreds of hours creating animation shorts, movie clips, and animations for video games and visualizations. Hence, more than ever, there is now a need for automated and intuitive techniques for creating animation. We hope to assist animators in their endeavor of creating believable animations through the use of advanced but intuitive animation synthesis tools and algorithms. In this thesis, we describe our contributions to style generalization and exploration techniques for character animation, which aim to help animators achieve the right look for their animations while alleviating some of their burden by automating such synthesis.

1.1 Overview of Contributions

Motion has an inherent associated style. Differences in style while performing a task can arise from both anthropometrical differences and the psychological states of the performing actors. For example, a person with a heavier build might need to put in extra effort to perform a jump of the same height as a person with a lighter build. This extra effort can produce visual differences in the style of the jump. On the other hand, when a child sneaks into the kitchen to swipe a cookie from a jar, their style of gait could be quite different from when the same child is casually strolling in a park. In other words, juxtaposing styles with task oriented motions creates a rich space of motion expression. We explore this idea of style exploration and generalization in multiple domains:

• Physics-based approaches can be used to explore a wide array of diverse styles for a specified skill for simulated characters, as described in Section 1.2 and Chapter 3.

• We propose a multi-objective optimization technique for generating a pareto-optimal front of controllers in order to explore the trade-offs between competing objectives such as jump height vs energy expenditure, jump distance vs energy expenditure, etc.
This in turn leads to style generalization for physics-based characters, as described in Section 1.3 and Chapter 4.

• Finally, in Section 1.4 and Chapter 5 we describe a method to generalize stepping styles from motion capture data to novel scenarios. This technique allows for real-time generation of novel but style-preserving animations while interacting with a synthetic environment.

We show the overlaps between the various components of each of our contributions in Figure 1.1.

Figure 1.1: Elements shared between contributions.

1.2 Style Exploration using Physics-based Character Animation

Artists often play with subtle changes in body postures, foot placements, and step timings to create a sense of emotional expression emanating from the character. Creation of such animations often entails exploration of various possibilities, from which the desired outcome is then selected. Often this exploration is performed in the vicinity of an original animation which roughly represents the intended artistic direction.

Optimization has proven to be a powerful tool in designing and exploring the motion of physics-based characters. Given a set of objective functions and an initial guess at a motion or its controls, a variety of numerical optimization methods can be used to refine the motion control (or the motion itself) until the process converges to a result that maximizes (or minimizes) the objective function. In this context, a well-constructed optimization problem should have a well-defined maximum (or minimum) and an objective function that accurately captures the user's intent. Objective functions that minimize control effort are a common way of defining such an optimization problem.
However, this unfortunately also eliminates the possibility of seeing the rich non-optimal variations in style that are commonly seen in human motions and which help give life to animated motions.

We propose to assist the process of motion exploration by performing automatic exploration of diverse style variations, as described in Chapter 3. The original animation in our implementation comes from simulating a physics-based character controller. The notion of how different a synthesized variation is from the original animation is formalized by encapsulating it in a distance metric. User-created objective functions are then used in an optimization procedure to maximize the pairwise distances between the synthesized animations and the original animation, as illustrated in Figure 1.2. The optimization results in styles which are as diverse as possible from the input style. This removes the onus of exploring styles from the artist, who can instead focus on creating just the original animation and specifying what features they want to preserve from this input animation. Hence, the input to our system is an original animation in the form of a physics-based character controller, a set of constraints, and the desired number of synthesized controllers.

Figure 1.2: Input-Output Diagram for Chapter 3.

In Chapter 3 we propose to work with underconstrained motion specifications such as "do a backflip and land in this region" and to then automatically generate highly-diverse motion variations that satisfy the given specifications. This enables the exploration of "the space of back flips" and, similarly, the spaces of possibilities for other types of underconstrained dynamic motions. We loosely refer to these spaces of possible motions as motion null spaces.
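The idea of maximizing pairwise distances away from a reference can be illustrated with a toy sketch. Everything below is illustrative rather than the thesis's actual system: 2D points stand in for motions, a sum-of-squared-distances score stands in for the motion distance metric, a box constraint stands in for the task constraints, and a naive stochastic hill climber stands in for a real optimizer such as covariance matrix adaptation.

```python
import itertools
import random

def sum_squared_distances(points):
    """Diversity score: sum of squared Euclidean distances over all
    pairs of points (the reference plus every synthesized sample)."""
    return sum(sum((x - y) ** 2 for x, y in zip(a, b))
               for a, b in itertools.combinations(points, 2))

def optimize_diversity(reference, n_variations=4, iterations=2000,
                       step=0.05, bound=1.0, seed=1):
    """Naive stochastic hill climber: perturb one coordinate of one
    candidate at a time and keep the change only if diversity grows.
    The `bound` box crudely stands in for the task constraints that
    keep real motions valid (e.g. 'still lands the jump')."""
    rng = random.Random(seed)
    candidates = [list(reference) for _ in range(n_variations)]
    best = sum_squared_distances([reference] + candidates)
    for _ in range(iterations):
        i = rng.randrange(n_variations)
        j = rng.randrange(len(reference))
        old = candidates[i][j]
        new = old + rng.gauss(0.0, step)
        if abs(new - reference[j]) <= bound:  # stay inside the 'feasible' box
            candidates[i][j] = new
            score = sum_squared_distances([reference] + candidates)
            if score > best:
                best = score
            else:
                candidates[i][j] = old  # revert non-improving moves
    return candidates, best
```

With the reference at the origin, the synthesized points are pushed toward the boundary of the feasible box, mirroring how the synthesized motions are pushed away from the input motion and from each other while still satisfying the task.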
We expect that the proposed method can be applied in a variety of scenarios, as it can allow animators to reduce the amount of manual iteration required to design a motion. Informal discussions with animators reveal that when a new motion is encountered, quite some time is spent searching for a motion that has the "right" look. Diverse motion synthesis can also be used in situations where motion variation becomes important, i.e., for crowds or non-repetitive motions for an individual.

1.3 Style Generalization using Pareto-Optimal Control for Physics-based Characters

Humans are generally efficient at achieving their motion goals. This makes optimization a natural tool for the design of physics-based motions. While the focus of much work in this area has been to produce individual optimized motion instances, most motions in reality have goals that are naturally parameterized, such as walking speed or jump height.

Figure 1.3: Input-Output Diagram for Chapter 4.

In Chapter 4 we investigate several possible algorithms for computing a set of solutions that span a pareto-optimal front. Pareto optimality, for our problem domain, is a state of value assignment to samples such that it is impossible to make an improvement with respect to any particular dimension without making at least one other dimension in the sample's value assignment worse off. In the context of a human jump that trades off effort for jump height, the goal of the algorithm is to automatically compute a set of solutions that span the range from low-effort low-height jumps to large-effort high-height jumps.
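As a minimal illustration of this definition, the non-dominated subset of a set of objective vectors can be extracted with a small filter. This is only a sketch of the Pareto-optimality concept, not the optimization machinery of Chapter 4; it uses a maximize-everything convention, so a minimized objective such as effort would be negated.

```python
def dominates(a, b):
    """a dominates b if a is at least as good as b in every objective
    and strictly better in at least one (maximizing all objectives)."""
    return all(x >= y for x, y in zip(a, b)) and \
           any(x > y for x, y in zip(a, b))

def pareto_front(samples):
    """Return the non-dominated subset of a list of objective vectors."""
    return [s for s in samples if not any(dominates(t, s) for t in samples)]

# Hypothetical jump controllers scored as (height, -effort): a higher jump
# for the same effort, or the same jump for less effort, dominates.
jumps = [(1.0, -2.0), (2.0, -3.0), (1.5, -5.0), (0.5, -1.0), (2.0, -4.0)]
front = pareto_front(jumps)  # (1.5,-5.0) and (2.0,-4.0) are dominated
```

Here (1.5, -5.0) is dominated by (2.0, -3.0), which jumps higher for less effort, while the surviving points form the trade-off curve from cheap low jumps to costly high jumps.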
By aiming to simultaneously compute a range of solutions, we can expect that there is a significant efficiency to be gained.

1.4 Style Generalization from Motion Capture Data

Motion capture has risen in popularity because of the ease with which it can be used to recreate performances of live actors in virtual environments. It is increasingly being used for creating digital doubles, cartoons, and animations for video games. The animations resulting from motion capture preserve the style of the live actor. As long as we have the relevant data, directing the style of the virtual characters is trivial from an algorithmic standpoint. However, generalizing motion capture data to novel scenarios, while also preserving the style of the live actor, is a challenging and open problem.

Generalizing motion while performing tasks such as writing on a whiteboard, lifting and moving boxes, and sitting on and getting up from chairs is a challenging and interesting problem. Many locomotion tasks exhibit a rich vocabulary of foot steps such as side steps, toe pivots, heel pivots, and intentional foot slides. These subtle but important variations in stepping strategies help increase the realism of synthesized task-specific character animation. However, locomotion is often treated as navigating between two locations, as accomplished by a sequence of well-defined, non-sliding steps. In order to generate realistic task-specific locomotion behavior, we present a method for modeling and synthesizing task-specific stepping behaviors in Chapter 5. The model is based on an annotated set of example motion clips that are captured for each task. An optimized foot-step plan is computed at interactive rates via the proposed system, which is then used as the basis for producing the full body motion. We demonstrate the synthesis of high-quality motions for three tasks: whiteboard writing,
moving boxes, and sitting behaviors. We show that the model allows for retargeting to characters of varying proportions by yielding motion plans that are appropriately tailored to these proportions. We also account for task effort or task duration via our planning algorithm, resulting in the synthesis of stepping behaviors capable of exhibiting co-articulation while performing a sequence of tasks. This helps further enhance the realism of the synthesized task-specific animations. A brief overview of the system is shown in Figure 1.4, and the details are described in Chapter 5.

Figure 1.4: Input-Output Diagram for Chapter 5.

Chapter 2

Related Work

Physics-based and motion capture based control can be achieved in a variety of ways. There is a large body of research exploring methodologies for coordinated character control. We now briefly describe the background and terminology used in this thesis.

2.1 Physics-based Character Control

Physics-based Character Animation uses a physics simulation to drive the animation of the character. The desired actuation of the character, as a whole, is achieved by actuating each joint through internal torques. This specification is, by definition, under-actuated as there is no direct control over the Center of Mass (COM) position and orientation. Hence, although such animation systems create physically plausible animation, creation of a desired style of character animation is still a challenging problem. Motion capture data is often used in both kinematic and physics-based regimes to seed the style of the motion. However, generalizing this learnt style and further exploring new styles using these seed styles is still an open research problem.

Incorporating physics into an animation creation system facilitates reconstructing physically plausible animations as a by-product of the character simulation and the interaction of the character with the environment.
However, the use of physics for character simulation also leaves us with a difficult control problem to solve.

Figure 2.1: Physics-based character animation examples [20] (used with permission).

The use of naive approaches, such as the specification of target joint angles for PD controllers during each phase, for the creation of physics-based animation requires significant familiarity with the dynamics regime. In addition, a human or an imaginary creature is usually under-actuated. Hence the overall control of the character is performed indirectly by a careful orchestration of joint actuations. Often, the effect of such actuations on the produced motion is not intuitive, and a lot of trial and error is required before the desired joint actuations can be discovered. Each such trial involves performing a simulation with a set of test parameters and observing the result. This can be a time-consuming and often frustrating process. In the past, the absence of techniques that could alleviate such issues led to limited adoption of physics-based animation as a mainstream animation creation paradigm. However, the development of techniques that facilitate higher-level control of the character, along with access to faster computation, has resulted in a rekindling of interest in physics-based character animation, especially in the game design community.

Due to its capability of producing accurate responses to the environment, physics-based simulation is a natural fit for applications such as games and visualizations. Passive phenomena such as waves, smoke and fire are generally simulated using various physics-based techniques in both movies and video games. Active control of physics-based characters, however, remains a challenging problem.
The lack of a good and efficient model for the often-changing and non-linear relationship between joint actuations and the resulting character's movement is the main hurdle faced by physics-based character animation techniques. Most approaches attempt to re-cast the problem of generating a "good" motion as an optimization problem with carefully designed objective functions. These objective functions are usually constructed such that their optimal solutions represent the desired degree of task achievement.

A motion controller is responsible for generating the control signals that drive the physics-based character. The control signals consist of torques generated in the joints. A careful orchestration of these control signals is required to attain goals such as maintaining balance, reaching a specific location with a specific orientation and end-effector configuration, or observing a particular style of foot placement for the next step. A typical motion control loop is shown in Figure 2.2.

Design approaches

Physics-based character control can be achieved in a variety of ways. Many methods are based on tracking a reference motion derived from motion capture data, as seen in many recent methods [19, 21, 37, 43, 45, 51, 59, 79, 82]. A second approach is to develop appropriate objective functions without relying on captured data and to then synthesize motions using online or offline optimization; representative examples include [8, 22, 44, 46, 69, 76]. A third approach directly involves users in authoring the shape of the controlled motion, e.g., [20, 44, 53, 82]. In practice, methods often combine multiple aspects of these three possible sources of motion. Many control strategies have been developed for physics-based characters over the past two decades, often with a focus on human locomotion. A recent survey can be found in [24].
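The per-joint control signals described above are typically produced by proportional-derivative (PD) controllers, which compute a torque from the error between a target joint angle and the current one. A minimal single-joint sketch follows; the gains, inertia, and time step are illustrative assumptions, not values taken from this thesis.

```python
import math

def pd_torque(theta_target, theta, theta_dot, kp, kd):
    """PD torque for one joint: tau = kp*(theta_target - theta) - kd*theta_dot."""
    return kp * (theta_target - theta) - kd * theta_dot

# Drive one simulated joint toward a 45-degree target (illustrative values).
theta, theta_dot = 0.0, 0.0
dt, inertia = 0.002, 1.0
for _ in range(5000):
    tau = pd_torque(math.pi / 4, theta, theta_dot, kp=300.0, kd=30.0)
    theta_dot += (tau / inertia) * dt  # semi-implicit Euler integration
    theta += theta_dot * dt
```

With well-damped gains, the joint angle settles at the target; choosing the gains per joint is exactly the kind of manual tuning the text describes as requiring familiarity with the dynamics regime.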
The controllers are often structured around a sequence of motion phases, represented as states in a finite state machine, with further continuous feedback laws added within each state, such as those that make use of foot placement for balance [29, 39, 55, 62, 80].

Figure 2.2: Closed-loop motion control.

Covariance matrix adaptation (CMA) [27] has become a popular generic method for the derivative-free optimization problems that commonly arise when shooting-style methods are applied to the synthesis of controllers for physics-based characters. CMA is a derivative-free iterative algorithm that maintains a Gaussian distribution over the parameters. At each iteration, the covariance matrix is adapted using the weighted fitness of the sampled distribution, as illustrated in Figure 2.3. Adaptation of the covariance matrix amounts to learning a second-order model of the underlying objective function. The Gaussian distribution is initialized as a spherical distribution, and at each iteration M samples of the parameter vector are generated from the current Gaussian distribution. A subset comprising the best samples of each iteration (according to their respective fitness values) is used to update the Gaussian distribution.

Figure 2.3: Illustration of an actual optimization run with covariance matrix adaptation on a simple two-dimensional problem. The spherical optimization landscape is depicted with solid lines of equal f-values. The population (dots) is much larger than necessary, but clearly shows how the distribution of the population (dotted line) changes during the optimization. On this simple problem, the population concentrates over the global optimum within a few generations. Source: Wikipedia (public domain image).
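The sample-rank-update loop can be sketched as follows. This is a heavily simplified stand-in: full CMA-ES also adapts the full covariance matrix and the step size, whereas this sketch keeps an isotropic Gaussian whose step size simply decays geometrically.

```python
import random

def simplified_es(f, mean, sigma, generations=100, n_samples=16, n_best=4):
    """Simplified evolution-strategy loop in the spirit of CMA: sample from a
    Gaussian, rank samples by fitness, and update the mean from the best ones.
    Full CMA-ES additionally adapts the covariance matrix and step size."""
    dim = len(mean)
    for _ in range(generations):
        samples = [[m + sigma * random.gauss(0.0, 1.0) for m in mean]
                   for _ in range(n_samples)]
        samples.sort(key=f)                       # best (lowest f) first
        best = samples[:n_best]
        mean = [sum(s[d] for s in best) / n_best for d in range(dim)]
        sigma *= 0.95                             # crude step-size decay
    return mean

# Minimize a simple quadratic bowl with its optimum at (1, 2).
random.seed(0)
sphere = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2
result = simplified_es(sphere, mean=[0.0, 0.0], sigma=1.0)
```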
Eventually, after repeating this process for a number of iterations, the distribution converges to a region of low objective function value.

2.2 Kinematic Character Control

Kinematic character animation describes the motion of a character without considering the cause of the motion. Such direct specifications are intuitive, as observations from real life can be directly applied to the creation of animation. An animation specified by describing the pose at specific instances of time, i.e., keys, is called keyframed animation. Keyframed animation still enjoys wide popularity in spite of being one of the first animation techniques. This is partly due to the ease with which keys can be specified. Also, the simple relationship between the output animation and the input keys has led to wide acceptance of keyframing in production environments.

The ease of use of keyframing techniques, however, comes at a cost. Keyframing can often be a long and tedious process. It can take years of practice to master the art of generating keyframes commensurate with the desired artistic intent, be it physical plausibility or cartoonish exaggeration. Even for highly trained artists, it can take many hours to produce animations that are just a few seconds long.

There are two main paradigms for generating kinematic animation. Forward kinematics calculates the positions and orientations of particular parts of the character, such as its end effectors, from the root position and orientation along with the joint configurations. Inverse kinematics [61], on the other hand, calculates the configuration of the joints from pre-specified end-effector locations and orientations.

Motion Capture

In order to create natural and realistic animation, an alternate methodology of capturing live motion is often employed. This process of recording motion data from a live actor is called motion capture.
The most common form of this system requires the actor to wear a suit with reflective markers, as shown in Figure 2.4. The motion of these markers is recorded using a set of specialized motion capture cameras. The intersection of the view cones of these cameras defines a capture volume. After the recording has been completed, a post-process is performed over the recorded video streams from each camera to triangulate the marker locations. The capture volume must encompass the live actor for the entire duration of the motion in order to faithfully reconstruct the positions of all the markers. Another post-process can then be performed to recover skeletal animation of the live actor from the marker data. This skeletal animation can then be transferred to a virtual character. Motion capture produces natural and realistic motion. However, generalizing the motion to new scenarios is still a challenging and open research problem.

Figure 2.4: Motion capture suit and cameras used for capturing motion data.

Motion Editing

A simple way to generalize motion capture data to novel scenarios is to perform local edits that adapt the motions to meet desired constraints. Motion capture clips can be edited by warping [75], which smoothly adds offsets to the motion, or by retargeting in order to transfer the motion to a character having different proportions [25, 41]. Multi-target blends can be used to create a parameterized space of motions from a set of examples [56, 74].

Motion sequencing

Many kinematic methods re-sequence existing motion data to create novel animations. A motion graph can be used to model the allowable ways in which motion capture clips can be sequenced while satisfying the required constraints on the quality of the transitions between clips.
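The motion-graph idea can be sketched as a directed graph whose nodes are clips and whose edges are transitions that satisfy the quality constraints; the clip names and transition structure below are hypothetical.

```python
import random

# Hypothetical motion graph: nodes are clips, edges are transitions whose
# end/start poses are similar enough to blend without visible artifacts.
motion_graph = {
    "walk": ["walk", "turn_left", "stop"],
    "turn_left": ["walk"],
    "stop": ["idle"],
    "idle": ["walk", "idle"],
}

def graph_walk(graph, start, length):
    """Generate a clip sequence by taking a random walk on the graph."""
    sequence = [start]
    while len(sequence) < length:
        sequence.append(random.choice(graph[sequence[-1]]))
    return sequence

random.seed(1)
clips = graph_walk(motion_graph, "idle", 8)
```

Every consecutive pair in the resulting sequence is, by construction, an allowable transition.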
Full-body character animations can then be reconstructed by generating walks on these graphs [7, 34, 49, 78] or by building probabilistic models of transitions and character poses [12, 73]. In our work, allowable step sequences are implicitly modeled using the task-specific step sequences that we use as templates for planning key phases of the motions.

Footstep Planning

Locomotion is often modeled in terms of the sequence of footstep locations, their orientations, and their timing. The foot placements should avoid obstacles, follow natural stepping patterns, and make progress towards the desired goal. Footstep planning has been used as the basis for a variety of locomotion synthesis algorithms, e.g., [15, 65, 66]. However, motion planning that incorporates the rich stepping structure arising in a variety of natural task-specific contexts remains an open problem. Footstep planning has also been investigated for humanoid robots [14, 35, 36], as shown in Figure 2.5. Human locomotion often exhibits a rich stepping vocabulary, including side-steps, foot pivots, and intentional foot sliding. We believe that these sub-steps are a core component of believable character interactions with the environment.

Humans, unlike robots, exhibit interesting styles in their footstep patterns. The footsteps of gaits provide a rich and expressive characterization of locomotion tasks. Side-steps, foot slides, foot pivots and shuffles are common amongst humans while locomoting and performing a task. Planning a footstep path to a goal, while incorporating these stylistic constraints, is a problem of considerable interest for creating believable character animation. Applying environmental constraints and sub-stepping footstep patterns, as described in Chapter 5, creates an interesting avenue for animation synthesis.
Parameterizing both the environment and task constraints while maintaining a "natural" and effective footstep strategy is a difficult and interesting problem that we investigate in Chapter 5.

Figure 2.5: Footstep planning algorithm being tested on Honda ASIMO [1] (used with permission).

Pose and motion reconstruction

Computing the skeletal joint parameters from end-effector constraints can be computationally expensive. Multiple approaches have been proposed to alleviate some of this complexity, e.g., [6, 52, 61]. Inverse kinematics applied to a character as a whole is often informally called full-body IK. When applied to human-like characters, full-body IK can often produce unnatural solutions because most IK problems are under-constrained. Data-driven approaches can favor solutions that lie closer to the poses found in a motion database, resulting in natural-looking poses [26]. Similarly, statistical dynamical models, e.g., [12, 72], can be used to develop full motion sequences, including maximum-likelihood solutions that satisfy given spatio-temporal constraints, e.g., [12]. In our work we use a data-driven prior to warm-start an iterative IK algorithm [6]. This produces natural-looking poses while still retaining the low cost of an iterative IK algorithm.

Motion Style

A number of synthesis methods attempt to factor motions into separable attributes that might include motion attributes or 'content', e.g., walking speed, and style attributes, e.g., relating to the emotional state or other unique characteristics of a person's walking gait. Style generalization and editing from motion capture data has been studied from the perspective of recreating simple locomotion sequences [50]. A database of human motions is first recorded in which multiple actors perform a wide variety of motion styles for pre-specified actions.
Using this database, a generative model of the form x = g(a, e) for style can be constructed using multilinear analysis, where a and e control the "identity" and "style" of the motion variation. Using linear interpolation between motions with different identity and style, novel motion styles can be created for the same identity, or the same style can be applied to different identities. Similarly, style generalization has been investigated for locomotion and other types of choreography, e.g., [11, 72]. Our proposed method provides a degree of implicit support for motion styles via the set of motion capture examples that serve as underlying templates. Although techniques such as the ones described above allow for the creation of new styles or the generalization of a style to different individuals, generalizing style to novel environments is still an open problem. We draw inspiration from a large set of techniques in this area to propose our model-based foot-stepping technique for task-specific locomotion, described in Chapter 5.

Figure 2.6: A generative model for identity and style [50] (used with permission).

Probabilistic Pose Models

Data-driven probabilistic techniques can help in creating natural-looking character animation via the use of motion capture data. Motion capture sequences relevant to the tasks to be reproduced are provided as inputs. A statistical dynamical model can be automatically learnt from this motion capture data and used to model a motion subspace that is natural for human motions. In this paradigm, the user specifies spatio-temporal constraints throughout the motion. Trajectory optimization can then be used to reconstruct motion that satisfies the user constraints and is most likely according to the motion priors described by the statistical dynamical model [12].

Often a real-time solution to pose prediction is desired.
Techniques such as the one described above require offline computation of the solution once constraints have been provided. However, approaches such as style-based IK [26] can be used to generate a full-body pose in real time, given effector constraints.

Figure 2.7: Probabilistic data-driven model for animation generation [12] (used with permission).

Interacting with the environment

Realistic motion during interactions with the environment greatly adds to the believability of synthesized motions. This includes everyday tasks such as moving objects between shelves and tables, opening and closing doors and walking through them, and cooking in a kitchen. Realistic stepping for such scenarios remains an under-explored area, with a number of exceptions, e.g., [12, 49, 57, 77].

We find inspiration in the work of [77] on synthesizing motions for moving objects between bookshelves, where our aim is to replace the fixed foot constraints with natural foot stepping patterns. Prior work on motion patches [42] also shares a number of our goals: characters must position themselves correctly in order to sit at a desk and climb a ladder towards a slide. Work on interaction patches [58] and multi-character editing [33] also shares our goal of building realistic interactions with other characters and the environment. Our work aims to develop a more general, more nuanced model that is dedicated to producing high-quality, fine-scale, task-specific stepping motions.
As compared to prior art, we focus less on path planning and more on synthesizing self-consistent, optimized stepping strategies.

Chapter 3: Diverse Motions and Character Shapes for Simulated Skills

Figure 3.1: Diverse motion variations for a forward jump.

3.1 Introduction

In this chapter we describe our first contribution, namely the use of diversity optimization to allow for the synthesis of sets of simulated motions and character shapes that span the many possible ways in which an underconstrained motion can be achieved. As key elements of this framework, we propose: (1) an objective function tailored to producing diverse motions; (2) the exploration of shape diversity in order to produce motion variations that result from adaptation to a wide variety of character shapes; (3) the use of round-robin covariance matrix adaptation (CMA) as an effective optimization strategy; and (4) the use of known pose similarity metrics as distance metrics for measuring motion diversity and character shape diversity.

An abstract view of the diversity optimization problem is shown in Figure 3.2. The motion null space is the subspace of motions that satisfy physics, satisfy the motion constraints, and are realizable by the character and its underlying controller parameterization. In this work we do not recover the full motion null space, but rather discover a set of maximally-diverse examples that help define its extent. We also recover the paths of progressive modification that lead to the maximally-diverse set. Given an input motion, shown in red, the goal is to develop synthesized motion variations that are as distant as possible from each other and the input motion.

Figure 3.2: Abstract view of diversity optimization.
The straight lines in Figure 3.2 abstractly illustrate the relevant pairwise distances for the synthesis of three new diverse motion variations. The degree of achievable motion variation is constrained by factors such as joint limits, torque limits, limits on the allowable dimensions of the body parts, and the particular choice of parameterization of the underlying control. We call these emergent constraints because it is difficult to know in advance how these implicit and explicit limitations will end up shaping the space of realizable motions for a given task.

Most previous methods aim to optimize an objective function related to effort and task criteria in order to find a single optimal solution. The free parameters are commonly those that define the internal forces and torques that drive the motion. Our work introduces motion diversity and shape diversity as objective functions, and also explores adding character shape parameters to the list of free parameters. By character shape we refer to the lengths of the various rigid links of the articulated figures. This is also called anthropometry in the kinesiology literature.

3.2 Related Work

Diversity optimization is a problem of general interest in the context of planning and AI [18, 28, 60, 64]. The problem is often posed as finding a diverse set of k configurations of a discrete set of parameters such that a given goal, e.g., a target price range of a product, is satisfied. A common diversity metric is the sum of a pairwise distance metric over all possible pairs of the k configurations. More recently, the selection of diverse solutions has also been addressed for problems such as designing furniture layouts [48], where solutions generated using Monte Carlo sampling can be used to produce a diversified list using a maximal marginal relevance criterion.

Kinematic models for motion style and variation

Numerous kinematic models for motion style and variation have been proposed.
Procedural noise [54] or learned models of noise [40] can be added to kinematic character motions to achieve motion variety and realism. Many statistical models have been developed to learn motion styles or to estimate style and content in a separable fashion, e.g., [10, 30, 72]. However, these methods create kinematic models of existing styles and are limited in their ability to create new and strikingly different styles of motion.

Dynamic models for motion style and variation

Variations of physics-based motions have also been explored. The passive motion of objects can be shaped so that objects roll and bounce to achieve given goals [13]; the stochastic nature of the search allows a variety of solutions to be generated for the same problem. Natural sources of variation such as motor noise and environmental conditions can be used to achieve a degree of variation in walking gaits [70]. Another approach to generating motion variety is to provide tools that enable the user to efficiently sift for desirable solutions among many automatically-generated variations. This has been applied to passive physics-based simulations in [63] and to a control system for a 24-DOF monoped hopper dog in the Design Galleries (DG) approach [47]. This latter example is closest in spirit to the problem we wish to tackle. The motion of the hopping dog is shaped by 7 time-varying sinusoids that modulate the forward velocity, hopping height, and the pose of the ears, tail, and neck. The amplitudes, offsets, and frequencies of these sinusoids are then explored to produce motion variations. Our method works on significantly more complex human and robot figures performing a much wider range of more constrained motion tasks.

We draw inspiration from the above work in order to develop a method that can propose novel motion styles for physics-based motion tasks.
In order to ensure that a good exploration of the space of realizable motions is achieved and is available to the user, we seek to find the most diverse set of motion styles. In doing so, the optimization naturally finds novel forms of coordinated full-body motion while also exposing the emergent constraints that eventually serve to limit the extent of possible motion variation. In our problem, finding new samples that satisfy the task constraints becomes increasingly challenging as the solution set grows more distant from the input motion. As such, we develop an algorithm that takes an integrated approach to maintaining task constraints and producing motion diversity. This stands in contrast to approaches that generally assume a lightweight sample-generation process [47, 48, 70].

3.3 Diversity Optimization

The goal of diversity optimization is to generate a set of N synthesized motions that are as different as possible from each other while still satisfying the desired constraints. This is captured by maximizing the following diversity objective:

D(P) = \sum_{i=0}^{N} \left[ -C(M_i) + \sum_{j=0}^{N} \left( d(M_i, M_j) + K\, d_{min}(M_i) \right) \right] \tag{3.1}

where P is the vector of input parameters being optimized and Pi is its ith element, {Mi} denotes the set of N motions, C(Mi) is a positively valued constraint function, d(Mi, Mj) is a distance measure between two motions, and dmin(Mi) is the minimum of all pairwise distances involving motion Mi. K is a weighting constant that we shall discuss in further detail. The motions themselves are parameterized via their underlying controller according to Mi = M(Pi), where Pi are the controller's free parameters (§3.5).

The first term of the objective function, given by the constraint function, evaluates to zero when the motion constraints are satisfied. This "null space" corresponds to a subspace in the parameter space that contains the realizable
motion styles that are of interest. Without loss of generality, we assume that the constraint is initially satisfied for M0, the reference motion that is used as a starting point, i.e., C(M0) = 0. If this is not true, then C(M0) can be used as an objective function in order to satisfy this condition. The d(Mi, Mj) term encourages diversity through a maximization of the sum of pairwise distances between the motions. However, by itself this can yield clusters of motions that have negligible distances within each cluster. The K dmin(Mi) term addresses this by encouraging dispersion between all motion pairs, as measured by the minimum pairwise distance involving any given motion.

Figure 3.3: Effect of K on diversity optimization in a 2D domain with a sum-of-squared-distances metric. All 4 synthesized solution points begin at the reference point shown in red.

Figure 3.3 illustrates the effect of assigning different values to K, the weight assigned to the dmin term, for an example where squared Euclidean distances are used as the distance measure. A stochastic optimization produces solutions such as the ones shown. Choosing K = 0 results in undesirable clusters of points located at extremal points of the feasible domain, because the benefit of the large inter-cluster distances outweighs the benefit of increasing the intra-cluster distances. As shown by the K = 100 result, large values of K can resolve the clustering problem but cause the optimization to converge to solutions that fail to find extremal points in the domain, due to an insufficient reward for diversity. An appropriate middle ground is to choose a value for K that encourages both diversity and dispersion. We use K = 10 for all our results.

We use the Covariance Matrix Adaptation Evolution Strategy [27] (CMA-ES; we will refer to it as simply CMA in the rest of this chapter) as our stochastic optimization strategy.
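For concreteness, the structure of the diversity objective in Equation 3.1 can be sketched as follows, with motions stood in by 2D parameter points and with placeholder constraint and distance functions; this illustrates the objective's structure rather than the thesis implementation.

```python
def diversity_objective(motions, C, d, K=10.0):
    """Structure of the diversity objective (Eq. 3.1): penalize constraint
    violations via C, reward the pairwise distances, and reward each motion's
    minimum pairwise distance (weighted by K) to discourage clustering."""
    total = 0.0
    for i, mi in enumerate(motions):
        pairwise = [d(mi, mj) for j, mj in enumerate(motions) if j != i]
        d_min = min(pairwise)
        total += -C(mi) + sum(p + K * d_min for p in pairwise)
    return total

# Toy stand-in: "motions" are 2D points and the constraint is always satisfied.
points = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
euclid = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
score = diversity_objective(points, C=lambda m: 0.0, d=euclid, K=10.0)
```

With K = 0 the objective reduces to the sum of pairwise distances, which is exactly the setting that permits the clustering behavior discussed above.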
We develop a round-robin CMA algorithm for maximizing D, built on top of a standard CMA implementation. This applies one generation of CMA optimization to each of the N motions, in turn, before moving on to the next generation of the optimization for each of the motions. During the optimization of motion variation i, the other motions are held fixed. The optimization is performed using 500-1000 CMA generations with 16 samples per generation (M). CMA is terminated when the maximum generation count is reached (1000) or when the diversity objective advances by less than a given tolerance ε. Good results are typically obtained within the first 500 generations. When the character dimensions are also treated as free parameters (§3.7), optimization is typically performed for 2000 generations with a maximum generation count of 3000.

Each of the N CMA optimizations maintains its own state information, including a mean and covariance matrix associated with its samples. Because the objective function is constructed over mutual distance metrics, the results of optimizing a given motion Mi will also inadvertently change the objective function values seen by the other motion variations. An assumption of the round-robin optimization process is that these changes will be sufficiently small to avoid adverse behavior; in practice, we have not encountered adverse behavior. An alternative strategy is to add motion variations one at a time, each time optimizing only the most recently added motion. In practice, we found that this progressive-addition strategy performed worse than the round-robin method (§3.8). All our results are thus computed using round-robin CMA optimization. One other potential strategy is to treat the synthesis of all N motions as a single large optimization problem having N × |P| parameters, where |P| denotes the cardinality of the set of free parameters to be optimized for each motion. This strategy results in an
This strategy results in anN -fold increase of the number of optimization parameters and thus it doesnot scale well.3.4 Distance MetricsThe pairwise distance metric provides the foundation for the measurementof the diversity of the solution set. We define separate distance metrics formotion-diversity and shape-diversity objective functions.3.4.1 Motion Distance MetricWe investigate three possible choices: (1) hands + feet (HF); (2) mass dis-tance (MD); and (3) weighted joint angles (WJA). These choices correspond233.4. Distance Metricsto pose similarity metrics that are commonly used to identify good tran-sition points when peforming kinematic motion blending between motionclips. The distance metrics between corresponding poses q and q′ are de-fined according to:dHF(q, q′) =∑i∈hands,feet||xi − x′i||2dMD(q, q′) =∑imi||xi − x′i||2dWJA(q, q′) =∑iwi(θi − θ′i)2where xi and x′i correspond to the locations of the center of mass of link ifor pose q and q′, respectively, after the character’s centers of mass havebeen aligned for the two poses. θ refers to the character’s joint angles androot orientation. The MD distance metric has been previously used as asimilarity metric [34, 38]. We use the WJA weights suggested in a previousstudy on pose similarity metrics [68].The distance between two motions is computed as a sum of the posedistances for a number of corresponding sample points on the pair of mo-tions to be compared. Meaningful correspondences are obtained by us-ing the phase-based structure of the motions (§3.5). Within each phase,correspondences are established using the normalized time, tˆ, where tˆ =(t − tstart)/(tend − tstart). For walking motions, we use 10 samples for eachof four motion phases. For jumping motions, we use 10 samples for each ofthe two airborne motion phases and do not sample the other phases becausethe most interesting variations should occur during airborne segments.Interesting motion variations can be found using all three distance met-rics. 
Many of the results given in the remainder of this chapter and the accompanying video use the mass-distance metric because of its convenient parameter-free nature. The choices of distance metric are further detailed in the results section (§3.6).

3.4.2 Shape Distance Metric

The shape distance is defined according to:

d_{SHAPE}(s, s') = \sum_{i \in links} w_i |L_i - L'_i|

where a shape, s, is defined by a set of link lengths, {Li}. We use uniform weight values of wi = 1. A useful alternative would be to make these proportional to the link masses. Note that the shape distance metric (and hence the shape diversity) has no dependence on the motions. However, given a body shape, it should still be possible to perform the desired skill; this is captured by the constraint function in Equation 3.1.

3.5 Character Models

2D human model: The planar human figures have 17 links and are simulated using Box2D [9] or Vortex [16]. After taking symmetry into account, there are 10 joints that require independent control. The character has a mass of 80 kg and a height of 160 cm. High-valued torque limits are used so as to allow the character to perform highly dynamic motions that can make for compelling animations: 300 Nm for the hips and upper waist, 200 Nm for the knees, ankles, lower waist, and shoulders, and 50 Nm for the remaining joints. All joints are controlled using proportional-derivative (PD) controllers with gains that are manually set to appropriate values for the input motions. The feet, pelvis, and upper torso are assigned target orientations in the world coordinate frame during the stance phase. In all other cases, joints have target orientations specified in local coordinates, i.e., with respect to their proximal (parent) link.

3D human model: This model has 15 links and 24 internal degrees of freedom.
The shoulders, hips, waist, ankles, neck and wrist joints are modeled as universal joints (U-joints allow the connected links to bend in any direction but not twist with respect to each other, i.e., they have 2 DOF), while the elbows and knees are modeled as hinge joints. We choose U-joints instead of spherical joints (3 DOF) for some joints that should ideally be modeled as spherical, because the simulator we used (Vortex) supports "position-locked" PD control only for hinge joints and U-joints. Position-locked PD control enables us to take large time steps (a frequency of 120 Hz) in our simulation. The character has a mass of 80 kg and a height of 180 cm. Torque limits are 200 Nm for the hips and waist, 150 Nm for the knees and ankles, 50 Nm for the neck, and 100 Nm for the shoulders, elbows, and wrists. Joints are driven using PD controllers. For walking gaits we track the pelvis, swing-leg femur, and swing-leg foot orientation in a world-aligned coordinate frame [82]. During jumps, we similarly track the desired pelvis orientation in world space during stance phases. Otherwise, all joints track joint-local target angles. The forward dynamics is simulated using Vortex [16].

3D bird-robot walker: The model has 10 links and 14 internal degrees of freedom. The middle pair of joints on each leg are modeled as hinge joints and the remaining joints are U-joints. The character weighs 70 kg and is 2.9 m tall. The control setup is analogous to that of the 3D human model.

Figure 3.4: Sampled character shapes.

Shape Parameterization: A parameterized 2D human model is created by allowing each link length Li to vary from the default according to L'_i = r L_0, 0.4 ≤ r ≤ 2.5. A minimal length of L'_i ≥ 0.03 m is also enforced. The mass of each link is held constant, and thus an increase in the length of a link corresponds to a decrease in its density.
Figure 3.4 shows a character with the default dimensions on the left, followed by four characters that are randomly sampled from the parameterized version of the model.

3.6 Results for Motion Variations

We first explore the creation of a set of four diverse motion variations for nine constrained motion tasks while holding the character shape fixed. The results are illustrated in Figure 3.5 and in the accompanying video. The differences between some of the motion variations are best observed when played at half speed. For a given motion task, we use Mn to refer to the motion variations, where n denotes the row number. The top row, M0, corresponds to the input motion and the remaining rows, M1–M4, correspond to the four synthesized motion variations. We refer to the individual image columns using the letters A–F. Images within a column are chosen to correspond to a particular motion phase, which is more meaningful than a time-based correspondence.

Four maximally diverse motion variations can be synthesized in 30 minutes using 16 threads on an 8-core Xeon machine using an unoptimized multithreaded implementation with Vortex and with 40% CPU usage. All the 3D examples are simulated using Vortex, as are the 2D jump-high and jump-forward shown in the accompanying video. A time step of 1/120 second is achieved using the "position-locked" PD control available in Vortex. The remainder of our 2D simulations are implemented using our original multithreaded Box2D implementation with 16 threads on an 8-core Xeon machine. These run at 3900 Hz because of the explicit integration of the PD controllers. Computing 4 variations of an input motion thus requires 1–4 hours of computation for these motions.

3.6.1 Motion Tasks

Backflip: We first create diverse variations for a standing backflip with a landing constraint. The landing constraint requires the final center of mass location of the foot to lie within a 30 cm landing box.
The constraint penalty is zero for landing within this box. Constraint costs outside of this region are modeled according to C(M) = K||d||², where K = 5000 and d is the Euclidean distance by which the constraint is violated. Falls are given a large constant penalty, as is a failure of the root link to achieve a near-360° rotation. The constraint violation penalty and falling penalty are modeled in a similar way for the remaining motions, unless stated otherwise. An input motion is manually authored using the given phase structure. The landing constraint is placed to match the landing location of the input motion.

(a) Back flip. (b) Jump forward. (c) Jump onto. (d) Jump over. (e) Jump high. (f) Jump backwards. (g) Bird robot jump forward. (h) Ministry of silly walks. (i) Bird robot silly walks.

Figure 3.5: Results for 2D motions (a,b,c,d,e) and 3D motions (f,g,h,i). For each task we show the input motion, M0, on the first row and the four synthesized variations, M1–M4, on the remaining rows.

Figure 3.5(a) illustrates four motions created from a single diversity-optimization run and uses the WJA distance metric. M1 and M2 show significant variations in takeoff pose (C), overall height (D), and landing pose (F). M3 learns to assume a pike pose during the flight phase. While some of the motions exhibit qualities that might not be capable of being matched by an athlete, these are also the qualities that make the motion appealing. If desired, the limits of human musculature and joints can be modeled with more fidelity in order to achieve more subdued results. We have also successfully experimented with backflip variations without a landing constraint, which leads to back flips that also vary in their horizontal distance traveled.

Jump forward: We compute variations for a forward jump with a landing-box constraint, as shown in Figure 3.5(b) and using the mass distance metric.
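The constraint-penalty model used across these tasks — zero inside the landing region, quadratic growth outside it, and a large constant for falls — can be sketched as follows. The fall-penalty value here is illustrative (the text only states that falls receive a large constant penalty):

```python
FALL_PENALTY = 50000.0  # illustrative large constant penalty for a fall

def landing_constraint_cost(violation_distance, fell, K=5000.0):
    """C(M) = K * ||d||^2 outside the landing box, zero inside it;
    falls receive a large constant penalty instead."""
    if fell:
        return FALL_PENALTY
    if violation_distance <= 0.0:  # landed inside the 30 cm box
        return 0.0
    return K * violation_distance ** 2  # quadratic growth with violation d

print(landing_constraint_cost(0.0, fell=False))  # satisfied: zero cost
print(landing_constraint_cost(0.1, fell=False))  # K * d^2 for a 10 cm miss
```

The quadratic region gives CMA a smooth gradient toward constraint satisfaction, while the constant fall penalty simply marks a sample as unusable.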
A manually authored controller is first developed that is capable of a small in-place jump. This is then optimized using CMA, with the landing-box constraint penalty serving as an objective function. The resulting motion is the input motion for the diversity optimization. The same manually authored controller also serves as a starting point for the development of the input motions for all the other jump motions. M1 and M3 produce jump variations with different styles of back arches, while M2 realizes a jump with considerable forward pitch of the torso. M4 bears a resemblance to M0 but differs considerably in phases C–E.

Jump onto: This motion is constrained to land on top of the box, with the landing constraint implementing a small safety margin. The input motion is obtained by optimizing the small in-place jump to first jump forward onto a box of height h = 0. The motion is then adapted to a box height of h = 0.5 m using a continuation-based optimization [81]. The degree of available style variation decreases as the box height increases, and we therefore stop at a moderate box height for this example. The weighted joint-angle distance metric is used to create the four motion variations shown in Figure 3.5(c). M1 and M2 create jump styles that perform pikes, each with its own style of arm motion. M3 performs a tuck jump with a forward-pitched torso. M4 develops a jump with a backwards arch.

Jump over: A jump over an obstacle is implemented using an obstacle-clearance constraint and a landing constraint that enforces a minimal jump length. The input motion is developed in an analogous fashion to the Jump onto motion. The small in-place jump is first optimized to meet the required jump length. The motion is then adapted to clear an obstacle height of h = 0.55 m using continuation-based optimization.
Four diverse motion variations are synthesized using the weighted joint-angle distance metric and are illustrated in Figure 3.5(d).

Jump high: An in-place jumping motion is developed using two constraints: a minimal-height constraint for the peak COM height and a landing constraint. The input motion is created by optimizing the small in-place jump to satisfy the given constraints. Four diverse motion styles are produced using the mass distance metric and are shown in Figure 3.5(e). The resulting variations include two different styles with backwards arches (M1, M2) and two jumps, M3 and M4, that perform pikes of varying extents in conjunction with different types of coordinated arm movements.

Backwards jump: The backwards jump for the 3D human model requires landing within 0.25 m of a target point that is located 0.8 m behind the character. The input motion comes from a manually authored jump that is then optimized to satisfy the landing constraint. Four diverse style variations are synthesized using the mass distance metric and are shown in Figure 3.5(f). The resulting dynamic motions exhibit a wide range of mid-air poses.

Robot jump: This jump uses a landing constraint located 4 m ahead of the starting location with a radius of 0.75 m. An additional upper body rotation constraint is used to prevent excessive forward pitch. A manually authored small jump is optimized to satisfy the landing constraint and then serves as the input motion. Four diverse styles are produced using the mass distance metric (see Figure 3.5(g)). The styles achieve a wide range of mid-air poses.

Human and robot silly walks: A diverse range of exaggerated dynamic walks can be achieved, making it easy to create physics-based walks that are reminiscent of Monty Python's "Ministry of Silly Walks" [83]. A step-length constraint is applied in the diversity optimization results shown in Figure 3.5(h,i).
Each step requires the character to achieve a step length of (1.0 ± 0.5, 0.0 ± 0.5) m for the human and (1.2 ± 0.3, 0.0 ± 0.3) m for the robot, where (x, y) represents the forward and lateral length of the step. An upper body orientation constraint requires that the pelvis and torso remain within 15° of the vertical. The character is evaluated from a static position corresponding to the pose demanded by the first phase of the controller and is simulated for 30 steps. The first 10 steps are used to allow the character to attain a limit-cycle behavior. The constraints and motion diversity are then evaluated over the last 20 steps. If the character falls during the first step, a penalty of 50,000 is applied, with the penalty linearly decreasing to 0 for falling after the 30th step. This provides a suitable gradient for the optimization to exploit. Four variations are synthesized using the mass-distance metric. The resulting styles are highly diverse and dynamic in nature. The robot silly walks have particularly interesting styles: M1 is a tip-toe walk; M2 has a serious and purposeful tone; M3 is an asymmetric loping walk; and M4 yields a chin-in-the-air proud style of walk.

3.6.2 Control over Magnitude of Motion Variation

We provide two methods for control over the magnitude of motion variation. The first takes advantage of the progressive nature of the CMA optimization, which provides a solution path from the initial motion to each of the maximally diverse motions. We parameterize this path according to the underlying distance metric, and allow the user to explore intermediate points along these paths. A second method is to place intuitive constraints on the controller. We experiment with two different strength models for motions: a default strength model and a second 'supernatural' strength model that increases the joint PD constants and torque limits by a factor of 1.5–3. These two methods are illustrated in Figure 3.6 and the accompanying video.
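The linearly decaying fall penalty used for the walking tasks above — 50,000 for a fall on the first step, tapering to zero for a fall at the end of the 30-step evaluation — can be sketched as one plausible parameterization:

```python
def fall_penalty(fall_step, max_steps=30, max_penalty=50000.0):
    """Penalty decreases linearly from max_penalty (fall on step 1) to 0
    (fall on or after step max_steps), giving the optimizer a usable gradient:
    surviving longer is always rewarded, even before the task is achieved."""
    if fall_step is None:  # character never fell
        return 0.0
    step = min(max(fall_step, 1), max_steps)
    return max_penalty * (max_steps - step) / (max_steps - 1)

print(fall_penalty(1))    # fall on the first step: full penalty
print(fall_penalty(30))   # fall at the end of the evaluation: no penalty
```

This shaped penalty is what lets CMA make progress from controllers that initially fall almost immediately.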
A third method would be to constrain the motion variations to be within some desired ε of a reference motion in terms of energy expenditure or distance metric. We leave the exploration of this idea as future work.

The set of motion variations can be further shaped through additional motion constraints, the creation of an initial motion that more closely resembles the desired class of motions, and refining the choice of free parameters according to the degrees of freedom where the variation is desired.

3.7 Results for Shape Variations

Diverse solutions for a given task can also be obtained by allowing for variation in the dimensions of the links of a character, which we refer to as its shape. Characters having different body dimensions (shapes) will necessarily need to develop different motions in order to accomplish the same task.

(a) Parameterized optimization path (b) Altered character strength

Figure 3.6: Two methods for controlling the degree of diversity. (a) The figures in light blue, orange, pink, green, and blue show results at points that are 0, 25, 50, 75, and 100% along the optimization path. (b) The orange figures are significantly stronger than the blue figure.

These types of motion variations are fundamentally different from the motion variations that can be achieved by adapting only the applied control. We explore the effect of adding shape parameters to the optimization free parameters in two settings: (1) optimization for motion diversity; and (2) optimization for shape diversity. These are applied to the Jump forward and Back flip tasks. Example results from a typical run with N = 4 variations are illustrated in Figure 3.8. The motions are best seen in the accompanying video material. The optimizations use 10 shape parameters and 36 control parameters and thus have 36 + 10 = 46 free parameters in total.
The CMA optimization process converges in approximately 2000 generations, which requires about 2 hours of compute time in our multi-threaded implementation. In order to compare the effect of optimizing for motion diversity vs. shape diversity, we compute both metrics for all motions, even though only one of them is used at a given time as the objective function. The results are given in Table 3.1, as measured across twelve motions, consisting of three runs with N = 4 variations each.

(a) Jump forward with motion diversity (b) Jump forward with shape diversity

Figure 3.7: Optimization results that allow for changing character proportions. Motions can be optimized for motion diversity (a) or shape diversity (b). For each task we show the input motion, M0, on the first row and the four synthesized variations, M1–M4, on the remaining rows.

(a) Back flip with motion diversity (b) Back flip with shape diversity

Figure 3.8: Optimization results that allow for changing character proportions. Motions can be optimized for motion diversity (a) or shape diversity (b). For each task we show the input motion, M0, on the first row and the four synthesized variations, M1–M4, on the remaining rows.

                  Jump forward              Back flip
             MD metric    SD metric    MD metric    SD metric
 objective    µ      σ     µ      σ     µ      σ     µ      σ
 MD          30.6   9.6   1.36  0.42   21.8   8.0   1.37  0.52
 SD          17.2  18.4   3.05  0.93    5.3   3.1   1.76  0.46

Table 3.1: Optimization allowing for changes in character shape. The results provide the mean and standard deviation for each type of metric when optimized for motion diversity (MD, row 1) and shape diversity (SD, row 2).

For both the jump and the backflip, the use of the motion metric as the objective function produces significantly more motion variation than is achieved as a by-product of optimizing for shape variation. A similar principle holds true for shape diversity optimization: it produces significantly more shape diversity than is achieved as a by-product of optimizing for motion variation. Nevertheless, optimization for either metric still introduces variation with respect to the other distance metric. An objective function that explicitly rewards both motion diversity and shape diversity may be a good compromise in many situations, although we leave a more detailed investigation as future work. As seen in the video results, changes in shape can produce motion variations that would likely be difficult to achieve otherwise. The backflips of short characters and characters with very long arms are good examples of this.

3.8 Discussion and Conclusions

The ability to automatically synthesize diverse styles of physics-based motions provides a new type of 'imagination amplification' for animation. The synthesized motions and the synthesized character proportions continue to diverge from each other until they are limited by some combination of joint limits, torque limits, the ability to recover balance, and the built-in constraints of the controller parameterization and shape parameterization. The interplay of these limiting factors is such that it is difficult for an animator to preconceive how they might shape the space of possible motions. Our work provides a new tool to greatly facilitate the exploration of the space of possible motions. While some exploration of possible motions is also achieved with current optimization methods by manually adding or reweighting objective function terms, we argue that this is neither convenient nor principled if the objective function must be specifically tailored to each motion. We show that pose similarity metrics drawn from previous work can also be used as distance metrics for achieving diverse motion variations.

Diversity optimization works equally well for exploring the space of body shapes that are capable of a given task. The optimization provides the successful body shapes as well as the accompanying motions and their underlying controllers. We show that shape variation can be a natural byproduct of aiming to achieve motion variation, and vice versa, i.e., that some motion variation will naturally occur as a byproduct of optimizing for shape variation. Taken together, motion variation and character shape variation provide a significant and interesting space of motions to explore. The diversity optimization provides easy access to this space. As such it serves a very different purpose from the task-and-effort goals that have traditionally been explored in the context of physics-based animation methods. Taken in isolation, the motions produced by our method are not optimal with respect to any given objective function, and yet we argue that they are interesting solutions.

The method may fail to produce expected styles of motion variation for any of several reasons. The solution may not converge to a global maximum. The solution may be strongly multimodal with considerable distances between modes, in which case CMA may have trouble finding distant modes. We have observed one case of a flip being discovered as a new means for performing a high jump. Undesirable motion variations may also occur if the parameterization of the controller is ill suited to producing natural human motions. As a result, the design of the underlying controller and the choice of initial motion will also influence the final result. We do not yet use a principled mechanism for scaling joint torque limits as a function of the character dimensions. This might result in characters of large or small proportions being limited in their capabilities.

The quality of our results for common motions such as walking depends on the viewer's expectations of the character.
The walk variations produced for the robot are plausible, while those produced for the human model are physically plausible and entertaining but not natural. The lack of any explicit optimization for control effort may also contribute to a motion being unnatural. We expect that effort-related terms could be included in the objective functions or that other forms of optimization could be added as a separate post-process, although we have not experimented with this.

We use a set of four motion variations, i.e., N = 4, to provide a consistent picture of the types of motion variations that are generated. In general, repeating the stochastic optimization may produce additional motion variations. While computing the required distance metrics is O(N²), the dominant cost lies with the simulations used to evaluate the CMA samples. As such, the overall diversity optimization is effectively O(N). Our experiments with N = 10 show that this does produce additional distinctive motion variations, although the mean distance between motion variations does begin to fall.

We conducted a simple test to compare the simultaneous optimization of N motion variations, as performed by the round-robin CMA, against the progressive addition strategy (§3.3) that only optimizes the most recently added motion until it is as diverse as possible. This is tested for N = 4 on the backflip problem. Using the results from 5 runs we obtain D_min = 41.8, D̄ = 54.4, D_max = 64.1 for the one-at-a-time results, and D_min = 71.2, D̄ = 83.5, D_max = 103.7 for the N = 4 simultaneous optimizations. The results show that the worst-case simultaneous optimization still significantly outperforms the best one-at-a-time optimization. Another naive approach would be to generate many motion variations that satisfy the task constraint and then simply retain the N variations that are most diverse.
However, the problem domain is such that there is no efficient way to generate motion variations that satisfy the task constraint, and particularly not for large motion variations that satisfy the task constraint. The method presented in this chapter is motivated in part by the desire to develop such a motion null space.

Care needs to be taken to avoid overconstraining the motion. A motion that is tightly constrained, such as hitting a particular keyframe at a given point in time, will have a limited subspace within which to optimize for diversity, and our method would likely have trouble meeting such specific constraints. The optimization problem also benefits from the use of feedback-based control that is built into the motion parameterization, such as the balancing controller or the use of a SIMBICON-style feedback loop in our 3D walking examples. These feedback structures allow the optimization to make faster progress and also result in a more robust simulated motion. An alternate approach would be to investigate the use of trajectory-based optimization methods for such problems, instead of the control-based method that we currently employ.

An interesting direction for future work would be to explore the design of a multidimensional motion null space with the help of our diversity results. We currently only support the exploration of continuously parameterized solutions along the optimization paths resulting from the CMA solutions, i.e., the result shown in Figure 3.6(a). It may be possible to use the current results to define a large subspace in which arbitrary convex combinations of control parameters, P_j, yield a continuous multidimensional motion null space that contains significantly more variations than that observed along the optimization paths alone. Another exciting possibility would be to use a video-to-controller system [67] in order to quickly and automatically create the input motion and its controller from a direct demonstration.
Diversity optimization could then be used to immediately create feasible style variations that are constrained to have a similar outcome.

Chapter 4

Pareto Optimal Control for Natural and Supernatural Motions

Figure 4.1: Pareto-Optimal Controllers for Standing Jump. Each character shown represents the pose at the peak height for the respective controllers lying on the pareto-optimal front. Brown characters represent natural jumps whereas blue characters represent supernatural jumps.

4.1 Introduction

In this chapter we investigate several possible algorithms for computing a set of solutions that span a pareto-optimal front. Pareto optimality, for our problem domain, describes an assignment of multi-dimensional objective values to samples such that it is impossible to improve any particular dimension without making at least one other dimension of the sample's value assignment worse. Pareto optimal solutions can be used for exploring the trade-offs between various terms in an objective function. Many optimization problems in animation have multi-term objective functions with relative weightings that are defined by hand. These can alternatively be approached using multi-objective optimization, with each term contributing a dimension towards a multi-dimensional pareto-optimal front. Our height-vs-effort optimization can also be thought of in these terms. In this chapter we leave the pursuit of higher-dimensional (d > 2) pareto-optimal fronts as future work, although we expect that many of the core ideas and algorithms will still apply.

In this chapter we also explore the idea of motions that are assisted by external forces on an as-needed basis. This force is applied during the airborne rising phase of a jump and allows for jump heights that would otherwise be unachievable.
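The notion of Pareto optimality used in this chapter can be made concrete with a small dominance test. This is a generic sketch, assuming all objectives are to be minimized; the sample objective pairs are hypothetical:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every dimension and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Hypothetical (jump-height-error, effort) pairs.
samples = [(0.0, 9.0), (0.1, 4.0), (0.3, 4.5), (0.5, 1.0)]
print(pareto_front(samples))  # (0.3, 4.5) drops out: dominated by (0.1, 4.0)
```

Every surviving point represents a distinct trade-off, which is exactly what a family of height-vs-effort jump controllers provides.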
Unlike kinematic approaches for altering motions, the controller-based solution still allows the motion to evolve in accordance with other interactions with the environment. We show that the pareto-optimal front can be computed in a way that spans both the natural and supernatural regimes of motion.

4.2 Related Work

Covariance matrix adaptation (CMA) [27] has become a popular generic method for the derivative-free optimization problems that commonly arise when shooting-style methods are applied to the synthesis of controllers for physics-based characters. Notable examples of methods that use offline optimization to develop parameterized control solutions include [19, 71]. The work of Wang [71] is particularly relevant to the work described in this chapter as it uses CMA to solve for energy-optimal motions at a discrete set of locomotion speeds. The desired speed is approximately enforced by establishing a velocity bracket around the target speed and assigning a penalty for going outside of this bracket. Taken together, the solutions effectively define a pareto-optimal front. We include a similar algorithm in our set of pareto-optimal front algorithms for use as a performance benchmark.

A variety of algorithms have been developed specifically for multi-objective optimization [23, 31, 32]. MOO (1+λ) CMA-ES [31] is of particular interest because of its invariance properties. However, in its default form we found it to be ill suited for our multi-objective optimization problems. We adapt it in several ways in order to produce an algorithm that works for the domain of highly constrained character controllers and which can span the regimes of natural and supernatural motions.

Optimization for supernatural motion synthesis

Kinematic momentum-scaling methods have been proposed to directly synthesize modified trajectories for characters that represent extreme physical capabilities [78].
Our work takes the alternative approach of integrating external forces into the controller, to be used when the internal forces are insufficient to achieve a task. In this way the controllers are designed to span both natural and supernatural motion regimes, and the character can at all times still interact in a dynamic fashion with its environment.

4.3 Controller and Task Representation

In this section we describe the parameterization of the specific pareto-optimal control problem that we shall consider. The input consists of a successful controller instance that is used to seed the optimization. This controller was designed manually, but the process can be automated by using motion capture data and a sampling-based approach [45] or by using video-based methods [67]. The free parameters consist of the joint target angles for the hips, knees, back, abdominal region, shoulders, and elbows for all phases; the joint target angles for the ankles for the take-off and rising phases; and the external force parameters. We optimize for a total of 34 free parameters.

The simulations are performed using a sagittal-plane 2D human model that is well suited to the standing jump motion that we investigate. The model has 17 links, weighs 80 kg, and is 160 cm tall. The motion is simulated using the Vortex [16] physics engine. The controller imposes left-right symmetry. All joints are controlled using PD controllers with fixed, manually determined gains. The PD controller gains were manually tuned to create natural-looking motions, based on the development of several test controllers that performed a jump. The tuning aimed to create a realistic take-off, in which the character powers the jump by throwing its arms forward and extending its knees and hips, followed by a smooth follow-through and a compliant landing. Jumps can become unnatural if the ankles are allowed to be too stiff, which allows a jump to be achieved with a single powerful push of the ankles.
The feet, pelvis, and upper torso are servoed to target angles that are expressed in the world coordinate frame, while all other joints servo using local joint angles.

4.3.1 Controller phases

A parameterized controller is used in conjunction with a forward dynamics simulation to produce the motions. The motion is broken into the different phases shown in Figure 4.2.

The jump starts with a standing phase which has a timed transition into a crouched take-off position. Target angles over time are modeled using piecewise linear trajectories. A virtual force, implemented using internal torques, is applied to the COM of the character to maintain balance, similar to the virtual force application in [20]. After spending some time in the crouched phase, the controller transitions to a take-off phase during which the character rapidly extends its arms upwards and straightens out its knees and ankles to produce vertical and, if desired, forward momentum. Once the character breaks contact with the ground, the controller transitions into a flight phase. While in the air, a simple time-prediction strategy determines the time taken to reach the peak point of the motion. Once the peak height of the motion is attained, the character transitions to a falling phase. During this phase, again, a simple prediction strategy determines both the approximate time and location of touch-down. Inverse kinematics is used to target the feet to their predicted touch-down location.
After coming into contact with the ground, the character transitions to a landing phase, where a virtual force is applied to the COM of the character to bring the horizontal momentum to zero while the target angles for this phase raise the character to a standing posture.

Figure 4.2: Motion control phases

4.3.2 Task Representation

An in-place jump motion that achieves a given jump height is optimized for minimal control effort with the help of a penalty function that constrains the landing position as well as the maximum head height achieved during the jump (eq. (4.1)). The landing constraint term, C_l, evaluates to zero when satisfied and returns a constant and large failure penalty when violated (eq. (4.3)). This is equivalent to rejecting solutions that do not satisfy the landing constraint. However, we do not explicitly discard these evaluations, but rather rely on the optimization algorithm to perform any required adaptations by providing feedback via the fitness assessment. The height of the jump of the character can be controlled by constraining the head position at the peak point of the jump. The head constraint function, C_h, is modeled using a penalty that evaluates to zero when the constraint is satisfied to within a desired value, while rising quadratically when it is violated, as seen in eq. (4.2). The treatment of the jump height as an independent variable allows for the evolution of a baseline algorithm that uses control-effort optimizations for a set of fixed target-height bins.

f_h(q_o) = C_h + C_l                                        (4.1)

C_h = { w_p (h_p − h_o)²   if |h_p − h_o| > h_r
      { 0                  if |h_p − h_o| ≤ h_r             (4.2)

C_l = { f_p   if |l_p − l_o| > l_r
      { 0     if |l_p − l_o| ≤ l_r                          (4.3)

In the above, h_r and l_r define the brackets of values for which constraint satisfaction holds. h_o and l_o are the constraint target values, and h_p and l_p are the peak height and landing location of the current sample. w_p is the weight that scales the height constraint.
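A direct transcription of eqs. (4.1)–(4.3) follows. The values of w_p and f_p match those stated in the text; the target and bracket values (h_o, l_o, h_r, l_r) used in the example call are hypothetical:

```python
def height_cost(h_p, h_o, h_r, w_p=5000.0):
    """C_h (eq. 4.2): zero inside the bracket |h_p - h_o| <= h_r,
    quadratic penalty outside it."""
    return w_p * (h_p - h_o) ** 2 if abs(h_p - h_o) > h_r else 0.0

def landing_cost(l_p, l_o, l_r, f_p=100000.0):
    """C_l (eq. 4.3): constant failure penalty f_p outside the landing bracket."""
    return f_p if abs(l_p - l_o) > l_r else 0.0

def jump_objective(h_p, l_p, h_o=2.2, l_o=0.0, h_r=0.05, l_r=0.1):
    """f_h(q_o) = C_h + C_l (eq. 4.1); the targets h_o, l_o here are hypothetical."""
    return height_cost(h_p, h_o, h_r) + landing_cost(l_p, l_o, l_r)

print(jump_objective(h_p=2.2, l_p=0.0))  # both constraints satisfied: 0.0
```

Because C_h rises smoothly while C_l is a flat failure plateau, the optimizer receives a gradient toward the target height but a simple accept/reject signal for the landing location.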
In practice the results are not sensitive to this weight as long as a large enough value is used. We use w_p = 5000 for all our experiments. We use f_p = 100000 to ensure that it is always greater than any possible fitness value.

We use a simple squared joint torque model as the energy metric. However, since the "effort" spent per joint differs, as noted by [69], we use a weighted effort metric of the following form:

e_je(q) = Σ_i w_i ||τ_i||²                                  (4.4)

The weights w_i before normalization are listed in Table 4.1. These values are chosen empirically and correspond to an intuitive assumption of the effort spent in a particular joint. This weighting was found to produce a more natural-looking result than an unweighted effort metric for creating jumping motions. As an example, an unweighted effort metric often results in jumps where the character powers its jumps using an unnaturally high contribution from the hips and knees as compared to its ankles. The weights are normalized so that Σ w_i = n, where n is the number of joints.

 hips   knees   ankles   shoulders   elbows
 2.5    2       1        1           0.5

 wrists   head   neck   back   abs
 0.25     1      1      2.5    3

Table 4.1: Weights used for the internal joint effort metric

4.3.3 Natural and Supernatural Motion

We want to generate natural motions when possible, and motions assisted by external forces only on an "as needed" basis. These supernatural motions can be used to animate super-human and imaginary characters. Supernatural motions are generated by allowing for reduced gravity. Gravity is reduced to a specified value at the time of take-off. Right after that, gravity increases linearly to its default value of −9.8 m/s², as can be seen in Figure 4.3. After this point the system maintains the default value for the rest of the jump. Thus, there are two parameters that define the extent to which an external assist is provided to the motions: the magnitude of the initial reduced gravitational force and the duration over which gravity returns to normal.
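The reduced-gravity schedule just described — a reduced value at take-off, ramping linearly back to the default −9.8 m/s² over a chosen duration — can be sketched as follows (the parameter names and example values are illustrative):

```python
G_DEFAULT = -9.8  # m/s^2

def gravity_at(t, t_takeoff, g_reduced, ramp_duration):
    """Gravity drops to g_reduced at take-off, then returns linearly to
    G_DEFAULT over ramp_duration seconds and stays there afterwards."""
    if t < t_takeoff:
        return G_DEFAULT
    alpha = min((t - t_takeoff) / ramp_duration, 1.0)  # 0 at take-off, 1 when ramp ends
    return g_reduced + alpha * (G_DEFAULT - g_reduced)

print(gravity_at(0.0, t_takeoff=0.5, g_reduced=-4.9, ramp_duration=0.4))  # before take-off
print(gravity_at(0.5, t_takeoff=0.5, g_reduced=-4.9, ramp_duration=0.4))  # at take-off
print(gravity_at(0.7, t_takeoff=0.5, g_reduced=-4.9, ramp_duration=0.4))  # midway up the ramp
```

The two optimization parameters of the assist correspond directly to `g_reduced` and `ramp_duration` in this sketch.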
Reducing gravity in such a manner is functionally equivalent to applying an external force to the center of mass of the character. This model of applying an external force by modifying gravity was used because of its convenient representation and implementation in the physics engine.

Figure 4.3: (a) COM velocity profile: a plot of COM velocity vs. time, where the red rectangle indicates the region where external forces are active. (b) Gravity profile: the sudden jump in gravity occurs at take-off, and the red rectangle highlights the supernatural region.

4.4 Optimization Framework

A parametric family of pareto-optimal controllers spanning a range of fitness values is desirable, since once such a family is pre-computed, an appropriate controller for the task at hand can be selected from this family in real-time. Samples on the pareto-optimal front can be selected on the basis of their suitability for a certain task. We propose an optimization procedure to pre-compute this pareto-optimal front of controllers. The proposed algorithms are applied to a working example of simulated in-place standing jumps. Additionally, we generate supernatural motions with an assistive external force when a particular jump height is unattainable by the character. For the controllers generated within this supernatural region we still want the character to minimize its use of the assistive force by maximizing the use of internal energy towards achieving the task. The goal is to produce supernatural jumps that look as realistic as possible.

In this section we describe our proposed optimization framework as well as a simple binned single objective optimization (BSOO) strategy for comparison. We first describe the BSOO scheme, which classifies jumps into bins of varying jump heights and uses a single objective optimization for each bin. This type of approach has been previously used in developing controllers for character locomotion, e.g., [71].
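The binning that underlies BSOO can be sketched as follows (a sketch; the bin count of 16 follows the experiments in this chapter, while the height range is illustrative):

```python
# Uniform target-height bins for the BSOO baseline: the jump-height task range
# is split into n_bins bins, each bin center serving as the height target h_o
# for one single-objective optimization run.
def height_bins(h_min, h_max, n_bins=16):
    """Return the centers of n_bins uniformly spaced height bins."""
    width = (h_max - h_min) / n_bins
    return [h_min + (i + 0.5) * width for i in range(n_bins)]
```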
Our approach recovers a complete pareto-optimal front in a single optimization process instead of generating a single solution from each optimization run. We also show that the BSOO strategy is inefficient when compared to the multi-objective algorithm, which can avoid the sampling redundancies that arise in the BSOO strategy. We adapt an existing multi-objective optimization technique [31] (MOO) to this effect and apply it to a high-dimensional and constrained physics-based character animation problem.

4.4.1 Binned Single Objective Optimization (BSOO)

The simplest way to explore the pareto-optimal front of controllers representing the trade-offs between two different objectives is to categorize one of them into bins. A single objective optimization for the second objective can then be performed for the task specified by the first objective bin. For our example we split the task (jump height) into 16 uniformly spaced bins, each representing a height constraint. All bins have the same landing constraint. The objective function f_h defined in eq. (4.1) is combined with an effort metric for joint energy (eq. (4.4)) and a supernatural effort metric to produce the following:

    f_{so}(q_o) = f_h + \log(w_{je} e_{je} + w_{se} e_{se})    (4.5)

where

    e_{se} = \sum_t \|F_{ext}\|^2    (4.6)

and w_{je} and w_{se} are the respective weights. The square of the external forces (F_{ext}) is summed over all time-steps during which they are active.

Figure 4.4: Binned single objective optimizations (BSOO). Here, f1 is the peak height of the jump and f2 is f_{so}, which is defined in eq. (4.5).

To effectively compute each point on the pareto-optimal front we run the single objective optimizations using CMA [27] with 16 offspring per generation and an initial σ = 0.015. Starting with the smallest height bin, the optimization for each bin is stopped if the improvement of the function value is less than ε over a span of 300 generations, or if the total number of generations produced exceeds 3000.
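The combined effort objective of eqs. (4.4)-(4.6) can be sketched as follows (a sketch; names are illustrative, the weight ratio w_se/w_je = 5000 follows the text, and joint weights would come from Table 4.1):

```python
import math

def weighted_internal_effort(torques, weights):
    """e_je = sum_i w_i ||tau_i||^2 over joints, eq. (4.4);
    torques and weights are {joint_name: value} maps."""
    return sum(weights[j] * tau ** 2 for j, tau in torques.items())

def external_effort(forces):
    """e_se = sum_t ||F_ext||^2 over time-steps where the assist is active, eq. (4.6)."""
    return sum(f ** 2 for f in forces)

def combined_objective(f_h, e_je, e_se, w_je=1.0, w_se=5000.0):
    """f_so = f_h + log(w_je * e_je + w_se * e_se), eq. (4.5)."""
    return f_h + math.log(w_je * e_je + w_se * e_se)
```

Because w_se dwarfs w_je, any nonzero external-force usage dominates the argument of the log, which is what later produces the clear demarcation between natural and supernatural samples.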
The optimal point for the current bin is used as a starting point for the optimization of the next bin in sequence. This process is stopped after all the bins have been optimized.

The optimization process is illustrated in Figure 4.4. The task achievement is binned into multiple successive bins of target jump heights. Beginning from an initial state, illustrated in red on the left, each of the target height bins is optimized sequentially. The yellow line provides a schematic illustration of the evolution of the solution path over time. Once the optimization has converged for a particular bin, the resulting sample is used as a starting point for the optimization of the next bin. In our working example we begin with a jump of low height. This jump is then optimized to achieve the target height of the nearest bin. Once converged, it seeds the optimization for the next bin, and so on until all the bins have been optimized.

4.4.2 Multi-Objective Optimization for Discovering the Pareto-Optimal Front

Multi-objective optimization produces a pareto-optimal set of solutions. Each point in this set represents a particular trade-off between the incommensurate objectives. The animator, system designer, or possibly an automated system can choose the most suitable solution from amongst these trade-offs at run-time. We use multi-objective optimization to produce energy-optimal jumps with a variety of jump heights. The competing objectives in our case are the control effort and the height achieved by the jump. Once computed, any point on the pareto front can be chosen on-the-fly. We build on and adapt the (1+1) Covariance Matrix Adaptation Evolution Strategy for Multi-Objective Optimization ((1+1) MOO-CMA-ES) [31] to generate a parameterized family of controllers. We begin by describing the key elements of the original algorithm.

(1+1) CMA-ES

(1+1) CMA-ES is an elitist selection based CMA.
One parent produces one offspring drawn from a normal multivariate distribution as x ∼ N(m, σ²C). The covariance matrix C and the global step size σ are adapted in each generation of the algorithm. In a single objective setting, the success of an offspring depends on whether the offspring has a better fitness value than its parent. This success-based update differs from the path length control strategy of the standard CMA.

Non-Dominated Sorting

Consider an M-dimensional multi-objective optimization problem. Let f(x) be a vector comprising M objective functions:

    f(x) = (f_1(x), f_2(x), ..., f_M(x))    (4.7)

where x ∈ X is the vector of free parameters. The elements of X can be partially ordered using the concept of non-dominance:

    x \prec x' \iff \forall i \in M : f_i(x) \le f_i(x') \wedge \exists j : f_j(x) < f_j(x')    (4.8)

We can then define a pareto set as follows:

    \{x \mid \nexists x' \in X : x' \prec x\}    (4.9)

If X is the set of all samples, let ndom^1(X) represent the non-dominated set of X and define dom^0(X) = X. We can then define the level of dominance in a recursive fashion:

    dom^l(X) = dom^{l-1}(X) \setminus ndom^l(X), \quad l \in \{1, 2, ...\}    (4.10)

    ndom^l(X) = ndom(dom^{l-1}(X)), \quad l \in \{1, 2, ...\}    (4.11)

An element is given a rank l if it belongs to ndom^l(X):

    rank(x, X) = l \iff x \in ndom^l(X)    (4.12)

The time complexity of this sorting for N elements has been noted to be O(MN²) in [23].

Figure 4.5: Hypervolume sorting. The blue point is a reference point chosen to be worse than all possible samples in all objectives. The red region shows the area (Lebesgue measure in 2D) of the rectangles (hypercuboids in 2D) specified by a sample on the pareto-optimal front and the reference point. The green region denotes the area contribution (hypervolume contribution in 2D) of the green non-dominated sample on the pareto-optimal front.

Hypervolume Sorting

The hypervolume measure was introduced by [84].
According to [31] and [17], it can be defined as the Lebesgue measure (Λ) of a union of hypercuboids:

    S_{x_{ref}}(X) = \Lambda(\cup_{x \in ndom(X)} \{F\})    (4.13)

    F = \{(f_1(x'), f_2(x'), ..., f_M(x')) \mid x \prec x' \prec x_{ref}\}    (4.14)

where x_{ref} is a reference point which is worse in each objective function value. The hypervolume contribution of x is then defined as follows:

    \Delta s(x, X) = S_{x_{ref}}(X) - S_{x_{ref}}(X \setminus x)    (4.15)

A pictorial representation of the hypervolume measure for two objective functions is provided in Figure 4.5. Boundary elements are given an infinite value for their hypervolume measure. Boundary elements are identified by the fact that their hypervolume contribution changes when the reference point is moved, whereas for non-boundary elements the hypervolume contribution remains unchanged. [31] then defines the following relation based on this ranking and the non-dominance level:

    x \prec_{s,X} x' \iff rank(x, X) < rank(x', X) \;\vee\; [(rank(x, X) = rank(x', X)) \wedge \Delta s(x, ndom^{rank(x,X)}(X)) > \Delta s(x', ndom^{rank(x,X)}(X))]    (4.16)

The goal of multi-objective optimization is to find a diverse set of solutions in order to generate a set of good trade-offs between the various incommensurate objectives. To apply (1+1) CMA-ES to multi-objective optimization we maintain a population of size 2µ. In the steady state, this population comprises µ pairs of parents and their offspring. The µ parents are mutated at each generation to produce one offspring each. The resulting population is then sorted using the criteria defined in eq. (4.16). To perform this sorting, the population is first sorted using the non-dominance criterion, and then the non-dominated subset of the population is sorted using the hypervolume contribution criterion to maximize dispersion.
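The two-stage sorting just described can be sketched for the two-objective (minimization) case as follows (a sketch with illustrative function names; the thesis uses the general M-objective machinery of eqs. (4.8)-(4.16), while the hypervolume computation here is specialized to 2-D):

```python
def dominates(fa, fb):
    """fa Pareto-dominates fb (eq. (4.8)): no worse in every objective, better in one."""
    return all(a <= b for a, b in zip(fa, fb)) and any(a < b for a, b in zip(fa, fb))

def nondominance_rank(points):
    """1-based front index per point (eqs. (4.10)-(4.12)), by recursive front peeling."""
    remaining = set(range(len(points)))
    rank = [0] * len(points)
    level = 1
    while remaining:
        front = {i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining)}
        for i in front:
            rank[i] = level
        remaining -= front
        level += 1
    return rank

def hv_2d(points, ref):
    """Dominated area of a mutually non-dominated 2-D set w.r.t. reference point ref."""
    area, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(points):      # ascending f1 implies descending f2 on a front
        area += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return area

def hv_contribution(i, front, ref):
    """Hypervolume lost when removing front[i] (eq. (4.15)); boundary points -> inf."""
    if front[i][0] == min(p[0] for p in front) or front[i][1] == min(p[1] for p in front):
        return float("inf")            # boundary elements get infinite contribution
    rest = front[:i] + front[i + 1:]
    return hv_2d(front, ref) - hv_2d(rest, ref)
```

Environmental selection then keeps the top µ of the 2µ candidates, ordered first by non-dominance rank and, within the first front, by descending hypervolume contribution.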
The second sorting criterion is required because, as the optimization progresses, more samples lie in the top-most non-dominated set than in the subsequent levels of non-dominated sets.

4.4.3 Re-sampled Multi-Objective Optimization (RMOO)

We now describe a modified version of the standard (1+1) MOO-CMA-ES algorithm which is used to effectively optimize a diverse set of solutions automatically spanning the natural and supernatural domains on the pareto-optimal front.

Resampling

Jumping in place for an articulated character is a highly constrained problem. Failure can arise for multiple reasons: the character can land incorrectly and fall over, or the character can land outside the landing region. These cases result in large failure penalties of fixed magnitude, f_p, being returned during the objective function evaluation. An offspring producing such behavior is deemed unsuccessful in the sense of not producing a better fitness value than its parent. Since (1+1) CMA-ES uses a success-based path update rule, each such failure contributes to a reduction of the global step size σ over time. This can cause possibly good samples to be discarded from the pareto-optimal front in favor of samples which were 'better' offspring in recent generations. In the long term this can cause premature convergence. Such issues can be significantly diminished by providing a sampling that is guaranteed to have a minimal degree of success. We found allowing a maximum of 3 bad samples per generation to be the best compromise between computational performance and the quality of the pareto-front obtained. The re-sampling is performed in a multi-threaded fashion, with each unsuccessful sample spawning a set of new threads which run the simulations in parallel with other such threads until the given sampling success criterion is met.
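The resampling policy can be sketched sequentially as follows (a sketch; the actual system evaluates rollouts in parallel threads, and mutate/evaluate here are illustrative stand-ins, with a rollout deemed "bad" when it returns the fixed failure penalty f_p):

```python
FAILURE_PENALTY = 100000.0  # f_p, returned by failed rollouts

def resample_generation(parents, mutate, evaluate, max_bad=3, max_rounds=50):
    """Mutate every parent once, then keep re-drawing failed offspring until at
    most max_bad failures remain in the generation (or rounds are exhausted)."""
    children = [mutate(p) for p in parents]
    fitness = [evaluate(c) for c in children]
    rounds = 0
    while sum(f >= FAILURE_PENALTY for f in fitness) > max_bad and rounds < max_rounds:
        for i, f in enumerate(fitness):
            if f >= FAILURE_PENALTY:
                # re-draw only the unsuccessful offspring from its parent
                children[i] = mutate(parents[i])
                fitness[i] = evaluate(children[i])
        rounds += 1
    return children, fitness
```

This guarantees the minimal degree of per-generation success described above, so a run of failed rollouts cannot collapse the global step size.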
Figure 4.6 provides a comparison of strategies with and without the use of resampling to meet the success criterion. The resampling strategy produces a significantly better pareto-optimal front at a computational cost. Through our experiments we found that making the success criterion tighter significantly increased the computational cost, while the pareto-optimal front generated was quite similar in quality to that produced using the proposed success criterion. On the other hand, relaxing the success criterion from its current value degraded the quality of the pareto-optimal front significantly. We believe that this success criterion would work well for other controller tasks as well, but this has been left for future work.

Figure 4.6: Comparison between re-sampling strategies: (a) re-sampling with a success rate of 13/16; (b) no re-sampling.

Scaling of Fitness Value

We want the character to maximize the utilization of its internal energy for task achievement. If the character relies merely on external forces for task achievement, without utilizing its internal energy to the maximum possible extent, it would appear that the character just gets lifted off by an external force while trying to achieve a really high jump. We want the motion to appear as natural as possible even in the supernatural domain, and hence we require the character to exert itself as much as possible in the supernatural domain. One way to achieve this is by scaling the external force penalty with an appropriate w_{se}. If w_{se} is made large enough then the character favors the usage of its internal joint torques over external forces towards task achievement. However, since the selection of the pareto-optimal front depends on the hypervolume contribution of each sample point on the front, this scaling causes most of the samples in the natural domain to be lost, since their hypervolume contribution turns out to be low. This can be seen in the pareto-optimal front obtained by using a large w_{se} in Figure 4.7. The character used for this simulation can jump up to nearly 2.3-2.5 m high without any help from external forces. Since the hypervolume contribution of sample points in this region is low, only 2 points get picked in this region. The graph has been plotted on a log scale for consistency with the other plots; however, the hypervolume contribution is computed on a linear scale.

Figure 4.7: The effect of a large penalty weight w_{se} but no scaling on the pareto-optimal front distribution. Only two controllers are produced in the natural regime of the pareto-optimal front. The controllers on the pareto-optimal front for heights > 2.4 m use an external force assist.

Moreover, if the weight w_{se} is not extremely large (there is a limit to how large the weight can be, because no evaluation must exceed f_p), some points obtained in the possibly natural region still have some supernatural component, albeit a small one. We would like to have zero external force contribution for points which correspond to jump heights in the natural region. We found an objective function of the form log(w_{je} e_{je} + w_{se} e_{se}) to perform particularly well. w_{se}/w_{je} = 5000 was used for all experiments. The log form works well firstly because the spread of fitness values produced is almost uniform across both the natural and supernatural domains. Secondly, since w_{se} is large, a clear demarcation is observed between natural and supernatural samples.
A high level description of the RMOO algorithm is given in Algorithm 1.

Data: A successful input low-height jumping controller; µ
Result: µ sample controllers on the pareto-optimal front
begin
    Initialize µ parents using the parameters from the input controller;
    Evaluate the input controller for objectives f1 and f2 and assign these as the current fitness values to all the parents;
    while not converged do
        Copy the parameters from each of the µ parents to µ children respectively;
        Mutate the parameters of each of the µ children;
        Evaluate and set the fitness of the µ children in a multi-threaded fashion;
        while number of bad samples > 3 do
            Assign the threads with copies of unsuccessful parents;
            Mutate the parameters of all copies;
            Perform multi-threaded evaluation of all the copies;
        end
        Perform non-dominated sort;
        Perform hypervolume sort;
        Choose the top µ candidates to be parents for the next generation;
        Update CMA strategy parameters for all parents;
        if change of fitness for all samples < ε then
            set converged;
        end
    end
end
Algorithm 1: A high level description of RMOO for a working example of in-place jumps

4.5 Results

We apply the algorithms discussed in Section 4.4, i.e., BSOO and RMOO, to build a family of pareto-optimal controllers for a standing jump motion developed for a physics-based character. Figure 4.8 compares the pareto-optimal fronts obtained from the BSOO and RMOO algorithms. The RMOO strategy is able to discover solutions which could not be found with the BSOO algorithm, possibly because BSOO runs into local minima. The solution space topology is better explored through a concerted effort from the multiple (1+1) CMA algorithms running in parallel. Also, RMOO is faster than the BSOO optimization technique by a factor of nearly 2. It took nearly 5-6 hours for multi-threaded code running on a 3.2 GHz 8-core Intel Xeon based machine to generate the results shown in Figure 4.8 using RMOO with µ = 16.
On the other hand, it took nearly 13 hours to generate the results using the BSOO strategy for 16 samples.

The log-scaled effort objective works well in producing a pareto-optimal front that spans the natural and supernatural regimes with a reasonable spacing between points on the front. The two regimes have different behaviors, as can be seen from Figure 4.8(a). The usage of external forces in the supernatural region produces a different response of energy fitness with respect to height than that produced in the natural region, where no external forces are available. The controllers lying in the natural region (less than 2.5 m for this example) use precisely zero external force, whereas those in the supernatural region rely on non-zero external forces to achieve the task. As expected, the reliance on external forces increases as the jump height increases. We color code the characters in the illustrations in this chapter in order to distinguish the two motion regimes: brown characters represent natural motions, while blue characters represent supernatural motions.

When optimizing within the bins using a single objective function strategy, spatial coherence in the objective function space is not exploited. Naively generating samples using a single objective optimization produces significant redundancy in the sampled set, which could otherwise have been used to communicate information (solution points) to the neighboring binned optimizations. This is likely one of the principal reasons for the performance gains shown by the RMOO strategy.

Since, in our framework, the character prefers to use internal forces to help achieve the required jump height even when in the supernatural regime, the take-offs for the supernatural jumps and the highest natural jumps remain very similar in style, as seen in Figure 4.9. As the jump height increases, the character exerts more internal energy for task achievement. The internal energy used for a jump should be maximized for the highest possible natural jump. In order to perform supernatural jumps the character should still exert the same amount of internal effort, but it can also take help from external forces to achieve an even higher jump. There was no explicit reward term in the optimization to make the take-offs look similar. We tabulate the values of the samples on the pareto-front for the two optimization techniques in Tables 4.2 and 4.3.

Figure 4.8: Plots of samples generated from BSOO and plots of pareto-optimal fronts obtained from RMOO. (a) Pareto-optimal front obtained from RMOO: green circles connected by lines represent the pareto-optimal front; black circles are the samples generated during the optimization procedure. (b) Samples of energy-optimal jumps of various heights obtained from BSOO, shown as green circles connected by lines; black circles are the combined samples generated during each binned optimization procedure. (c) Overlaid results of the pareto-optimal front obtained from RMOO (green) and samples generated from BSOO (red).

Figure 4.9: Natural and supernatural motions: (a) shortest natural jump; (b) highest natural jump; (c) intermediate-height supernatural jump; (d) highest supernatural jump. The snapshots of the motions are created by capturing the pose at the beginning of each phase. The snapshots for each of the standing in-place jumps are offset horizontally for visualization purposes.

    sample #    Internal joint energy    Supernatural take-off    Supernatural force
                metric (N^2 m^2)         gravity (m/s^2)          application time (s)
    0           10.953                   -9.8                     0
    1           15.776                   -9.8                     0
    2           19.6615                  -9.8                     0
    3           22.674                   -9.8                     0
    4           26.8889                  -9.8                     0
    5           33.0792                  -9.8                     0
    6           46.7629                  -9.8                     0
    7           52.9262                  -8.8                     0.64
    8           47.2799                  -8.374                   0.652
    9           49.2547                  -7.8                     0.593
    10          49.9069                  -7.053                   0.591
    11          48.9008                  -6.08                    0.585
    12          49.4255                  -5.121                   0.591
    13          52.3201                  -4.173                   0.610
    14          55.3722                  -2.921                   0.605
    15          71.4787                  -2.022                   0.682

Table 4.2: Results using RMOO

    sample #    Internal joint energy    Supernatural take-off    Supernatural force
                metric (N^2 m^2)         gravity (m/s^2)          application time (s)
    0           9.495                    -9.8                     0
    1           5.316                    -9.8                     0
    2           6.715                    -9.8                     0
    3           13.6204                  -5.4                     0.266
    4           12.248                   -2.46                    0.378
    5           16.2731                  -2.641                   0.474
    6           19.5386                  -0.442                   0.424
    7           19.1376                  0.2697                   0.492
    8           21.1071                  1.3538                   0.513
    9           22.8364                  1.1774                   0.621
    10          25.4831                  3.2524                   0.556
    11          26.4307                  3.2291                   0.566
    12          22.0267                  2.5725                   0.7614
    13          27.016                   3.55                     0.729
    14          33.5313                  4.2319                   0.706
    15          40.1909                  3.831                    0.816

Table 4.3: Results using BSOO

4.6 Conclusion

A multi-objective framework for optimizing physics-based character animation is an interesting and viable approach for designing controllers. Access to a set of choices on the pareto-optimal front allows the character to perform different motions for different task objectives at run-time. Controllers spanning the entire pareto-optimal front across the natural and supernatural regions can be automatically generated from a single run of optimization by an appropriate design of the objective functions. A similar approach could be applied to other problem domains, such as generating energy-optimal locomotion gaits of varying speeds.

Since the algorithm produces a spread of fitness values for each objective function dimension, care needs to be taken while designing the objective function. Hard penalties must be used for objective function terms whose spread is not desired. However, providing hard penalties for objective function terms which are not required in the spread creates problems for the re-sampling process. The samples producing failures for these objective function terms are given a constant failure penalty, which does not provide enough useful information for making the optimal covariance matrix update and thus adds to the computational cost of the algorithm. As future work, we expect that there remain further improvements that can be achieved for computing pareto-optimal fronts.
The generated motions can be made to look more natural by providing more natural inputs and/or by developing data-driven effort metrics that are more consistent with observed motions.

Chapter 5

Task Based Locomotion

5.1 Introduction

Animated human locomotion is an integral component of games, films and training simulations. Believable movements accentuate immersion in all of these applications. Motion capture data plays a critical role in most methods for synthesizing locomotion. Locomotion is commonly abstracted as a sequence of alternating left-and-right footsteps, with each foot plant assumed to be "clean", i.e., free of sliding and twisting during the contact phase. These assumptions allow a multitude of motion-captured steps to be blended with each other, as well as enabling them to be sequenced with relative ease.

These common locomotion models also come with significant limitations, however. First, a close examination of stepping motion reveals a richer vocabulary of stepping actions, including side steps, heel pivots, toe pivots, and intentional foot sliding, that is precluded by common foot contact models and locomotion strategies. Second, the motion data used in locomotion models is typically captured in the context of a given task (including "free walking"), and this context is usually lost or discarded. As a result, the data may be used in new contexts for which it is less appropriate. Third, many locomotion systems do not model the natural structure of many stepping patterns, which for many tasks may involve side-stepping for moving to adjacent locations, partial turns with side-steps for more distant locations, and full turns followed by locomotion for even larger displacements. Fourth, locomotion patterns often exhibit coarticulation effects, where the planning of the locomotion steps for the current task can also be influenced by the subsequent task.
For example, a brief task to the right followed by a task to the left may involve taking only a partial side-step to the right in order to be within reach for the task, e.g., underlining a word on a whiteboard, and then proceeding with steps to the left.

Figure 5.1: Task-specific locomotion involving writing on a whiteboard, moving a box, and sitting on a box. The motion exhibits side-stepping, heel pivots, foot pivots, turns, and steps.

We develop a model that aims to address these deficiencies. Our model is based on a simple-and-efficient procedure that develops a foot stepping plan using captured example steps, whose locations and orientations are then further refined using online optimization. This approach avoids some of the complexities and data requirements of statistical modeling approaches. We use skinned, rendered models to illustrate all our motions, rather than the commonly-used stick-based or ellipsoid-based figures that may otherwise mask problems with the final motion quality. Our task-based locomotion prototype works in real-time in a modern game engine (Unreal Engine 4) and is demonstrated on four tasks: writing on a whiteboard, picking and placing boxes, sitting on and standing up from boxes, and turning around in place. Figure 5.1 shows an example sequence with these tasks and a visualization of the underlying foot step plan that is key to our method.

We expect that the template-plus-optimization approach introduced in this chapter can also be abstracted and applied more generally. Foot steps in our templates are motion features that are specific to locomotion. These could be replaced by other task-relevant features for other classes of motions.

5.2 Overview

Given a sequence of task types and task locations as input, our system synthesizes the full-body animation required to move between the tasks using natural, task-specific locomotion patterns.
Figure 5.2 shows an overview of our system, which we now review in more detail.

Figure 5.2: System Overview.

Our primary goal is to develop high-quality motions for transitioning between tasks, where a task is defined as a location in the world where the character needs to accomplish something. We focus on four types of tasks: writing on a board, moving boxes between locations, sitting on a box, and turning around at a specific location. In Figure 5.2 (bottom), the task locations are marked using small spheres. A typical locomotion transition involves a pair of tasks, namely moving from a current-task location to a next-task location, marked with red and blue spheres, respectively. In addition to the task type and task location, we shall also later define and use a task effort attribute. This will allow coarticulation effects to be modeled, such as choosing to temporarily step-and-reach towards a task location instead of taking further steps to place the body directly in front of a task location.

Footstep planning lies at the heart of our locomotion model, with the plan consisting of footstep locations and orientations that are to be achieved at the end of each step. As seen in the planning component of Figure 5.2, the footstep plan is developed successively over several motion phases. Specifically, the transition between task locations is modeled as an exit task phase, a locomotion phase, and an enter task phase. Transitions between a pair of distant task locations use all three of these phases, as shown in Figure 5.4(a). Shorter transitions may pass directly from exit task to enter task, or, if sufficiently close, directly to the enter task phase, as shown in Figure 5.4(b) and (c), respectively.

For each successive motion phase, an initial footstep plan is created by instantiating one of the template plans from a template library, which is specific to both the current task type and motion phase.
The template library is developed from the example motion capture data as an offline pre-processing step. The selection of the most suitable template is based on the quality of fit of a given template to the requirements of the current task, as will be described in the subsequent section. The footstep plan from the instantiated template is then optimized to satisfy a footstep-based objective function, which in general terms aims to reach a goal location while remaining close to the underlying template example and satisfying smoothness criteria.

Given the optimized foot step plan, we then generate specific foot and root trajectories. Since each foot step in the template is associated with a specific segment of motion capture data, we use the associated feet and root trajectories and apply a smooth spatial warp in order to exactly achieve the given motion plan. Lastly, we generate full-body motion from the reconstructed foot step and root trajectories and the task description. Specifically, we begin with poses extracted from the motion segment associated with the footstep template and then use full-body inverse kinematics, applied to the root, hands, and feet, in order to reconstruct final poses that satisfy the desired task constraints.

A more detailed summary of our method is given in Algorithm 2. We will refer to specific steps in this algorithm by line number in the remainder of the chapter. We now move on to providing more details on the core steps of our method.

Figure 5.3: Footstep plans for various task entries and exits: (a) writing task entry; (b) box sitting entry; (c) box lifting entry; (d) writing task exit; (e) box sitting exit; (f) box lifting exit.

5.3 Template-based Footstep Plans

A key aspect of our method is the use of example data to model the pattern of foot steps to be used during each motion phase. We refer to these as foot step strategy (FSS) templates. Each FSS template further consists of individual steps.
In this section we describe in detail how motion phases and steps are defined, and how a suitable FSS template is retrieved as the first step of planning the foot steps for any given motion phase. Importantly, the FSS templates are task-specific. For example, the templates used for task entry and exit for writing on a whiteboard are markedly different from those for sitting on a box, picking up a box, or placing a box, as shown in Figure 5.3 and detailed in Table 5.1.

    FSS (Entries & Exits)        Foot Step Categories
    Write entry (Fig. 5.3(a))    Walk (1), Walk (2), TurnAndStep (3), ToePivot (4)
    Sit entry (Fig. 5.3(b))      Walk (1), Walk (2), Walk (3), TurnAndStep (4), ToePivot (5), ToePivot (6)
    Lift entry (Fig. 5.3(c))     Walk (1), Walk (2), Walk (3), TurnAndStep (4), TurnAndStep (5)
    Write exit (Fig. 5.3(d))     HeelPivot (1), TurnAndStep (2), Walk (3)
    Sit exit (Fig. 5.3(e))       HeelPivot (1), TurnAndStep (2), Walk (3)
    Lift exit (Fig. 5.3(f))      HeelPivot (1), TurnAndStep (2), Walk (3), Walk (4)

Table 5.1: Foot Step Strategies

5.3.1 Phases, Steps, and Templates

Motion Phases: Given a task transition, as described by a current task and a next task, the transition motion is modeled using three motion phases: task exit, locomotion, and task entry, as illustrated in Figure 5.4. Not all phases need to exist in any given synthesized transition motion. If one of the starting foot locations already lies within the enter radius for the next task, then only an entry phase is used, as shown in Figure 5.4(c). This typically results in a side-stepping strategy. Otherwise, an exit phase is first planned, using an appropriately selected FSS template. If one of the planned steps resulting from that template passes within the enter radius, the enter phase is deemed to begin at that point in time, as seen in Figure 5.4(b), and commonly results in a partial-turn-and-step strategy.
Most typically, however, a locomotion phase is needed to plan footsteps from the end of the exit phase until one of the planned locomotion steps lies within the enter task radius. The motion phases are generated in sequence, i.e., exit, locomotion, and entry, with an optimization step (§5.4) being applied after the template instantiation step for each of these motion phases.

Step segmentation: The footstep templates are constructed from example motion data, which is first segmented into individual steps as follows. Each step starts when the swing foot either loses firm contact with the ground or enters a sliding motion. A step ends when the swing foot re-establishes firm contact with the ground or, in the case of a sliding motion, comes to rest. We categorize the observed footsteps into the following categories: Heel Pivot, Toe Pivot, Side Step, Turn and Step, Walk, Forward Step During Task, and Backward Step During Task. The detailed interconnectivity of this rich step vocabulary is illustrated in Figure 5.5.

A step is modeled using a tuple

    q = (n_s, n_e, n_sa, n_ea, tag, phase)    (5.1)

where n_s is the start frame of the step, n_e is the end frame of the step, n_sa is the start frame of the airborne portion of the step, n_ea is the end frame of the airborne portion of the step, tag is the category of the footstep (see Figure 5.5), and phase is the task phase (exit, locomotion, or enter) of the segment. For a sliding motion such as a toe pivot or a heel pivot, n_sa and n_ea are set to be the same as n_s and n_e respectively, as no distinction is made between airborne and sliding motion phases. We denote the set of all steps using

    Q = {q_i}.    (5.2)

Figure 5.4: Motion phases and stepping strategies for several instances of writing-task transitions. (a) Turn and walk strategy; (b) Partial turn and step strategy; (c) Side step strategy.

Motion templates: Sequences of steps from the example motions are assigned as follows to the three motion phases.
Footsteps taken immediately after performing a task are tagged as belonging to the exit phase, until it is determined that the swing foot performs a walking-like motion. This is quantified as the foot yaw angle relative to the root link falling below a given threshold during the course of a step, and marks the beginning of the locomotion phase. Once the small yaw angle condition ceases to be true, this marks the transition to the enter phase. For examples that never use a locomotion phase, steps are evenly split between the exit and enter phases if the number of footsteps exceeds an empirically defined threshold (4 footsteps). For footstep patterns shorter than this threshold, all the footsteps are deemed to belong to the enter phase of the target task.

Figure 5.5: A state diagram showing possible transitions between various footstep styles. The color coding used here for each footstep style is used in the results in the rest of this chapter and in the video.

An exit template is defined as:

    s_exit = (q^exit_{1:N}, Γ)    (5.3)

where q_{1:N} is a sequence of steps that begin with an exit phase, x_c and x_n are the respective world-frame locations of the current and next tasks, q^exit_{1:N} is {q_i | q_i ∈ q_{1:N} ∧ q_i(phase) = exit}, and Γ is the task transition descriptor. Similarly, an entry template is defined as:

    s_entry = (q^entry_{1:N}, Γ)    (5.4)

where q^entry_{1:N} is {q_i | q_i ∈ q_{1:N} ∧ q_i(phase) = entry}, and Γ is the task transition descriptor.

A task transition is modeled as a tuple of the form:

    Γ = (c, x^n_c, x^c_n, x^n_f, α_c, α_n, α_f)    (5.5)

where c is the task category, x^n_c is the start location of the current task in the coordinate frame of the next task, x^c_n is the end location of the next task in the coordinate frame of the current task, x^n_f is the start location of the follow-up task in the coordinate frame of the next task, α_c is the effort required for the current task, α_n is the effort required for the next task, and α_f is the effort required for the follow-up task.

Modeling Effort: Low
effort tasks exhibit a high degree of co-articulation with any follow-up tasks. For example, in our experiments, a low effort task of just tapping a specified location on a whiteboard, followed by a higher effort writing task, also at a pre-specified location, has been found to exhibit co-articulation behavior. When a low effort task is on the way towards performing a high effort task, the low effort task can be performed "in passing". On the other hand, if a low effort task lies in the opposite direction of a high effort task, the character is observed to lean towards the low effort task while quickly executing it before moving on to the high effort task. Hence, low effort tasks often exhibit preparation for a follow-up high effort task. Reconstructing such co-articulation increases the realism of task-specific locomotion, but is neglected in state-of-the-art techniques. We propose a framework which preserves this co-articulation while choosing templates. We model the degree of effort required for a task by classifying it as a low effort or a high effort task. We use α to represent the effort requirement for a task; α takes a value of 0 or 1, representing a low effort or a high effort task respectively.

Figure 5.6: Reconstructing task-specific footstep templates. The most suitable template is selected for the given task phase based on similarity of the transition. (a) Task Exit; (b) Task Entry.

5.3.2 Template Retrieval

At runtime, suitable templates need to be found for a given task transition. In the simplest scenario, a suitable exit template is found by looking for the template that is most similar to the current required transition, Γ. Here, similarity implies a similar relative location of the next task, as measured in the coordinate frame which has its origin at the current task. Analogously, a suitable entry template is found by looking for the example template that best matches the relative current location of the character as measured in the coordinate frame which has its origin at the next task.

Figure 5.7: Template Selection Mechanism. A writing task exit and entry template selection scenario is shown. Task categories can be different for entry and exit template selection in a more general case. (a) Template Selection Exit; (b) Template Selection Entry.

Figure 5.7 illustrates these ideas, which capture the key elements of the most basic case, where a character is in double stance during the actual tasks, and is effectively paused at the desired location to execute the task. In this example, the desired task transition occurs between two writing tasks. Notice that in Figure 5.7(a), while planning for task exit, the most suitable sequence of steps for the desired task transition contains a turn-around task during task entry. However, as the exit template belongs to the writing task category, this template still satisfies the selection criteria for an exit template. This highlights the fact that the sequence of steps from which the templates are chosen does not have to exactly satisfy the task category criteria for both entry and exit. This allows us to minimize the amount of data required.

Beyond the common case just described, there are a number of scenarios that require the choices for task entry and the following task exit to be coupled together. For example, certain tasks such as turn-around always end with a single stance, and hence this coupling becomes important in such a scenario. Also, an entry template for a low effort task, such as a low effort writing task, might end with a single stance. We show such an entry and exit template in Figure 5.8(a), while planning for two successive transitions. In these cases there is an additional requirement of matching the last footstep of the entry template with the first footstep of the exit template. For example, an entry template ending with a left stance should be followed
by a right stance footstep in the exit template to be compatible, and vice versa. Hence, for such cases it is not sufficient to simply choose an entry or exit template using the most suitable template according to the similarity of the required transition Γ. Instead, we recover the k most suitable templates for the entry and exit phases, s^exit_{1:k} and s^entry_{1:k}, for use in template selection (Eqs. (5.6) and (5.7)), using the distance metrics defined in Eqs. (5.8), (5.9) and (5.10). The distances are computed between the queried task transition, Γ_q, and the associated task transition, Γ, for each of the footstep strategy templates being compared.

    τ_exit(Γ_q) = s^exit_{1:k}    (5.6)

    τ_entry(Γ_q) = s^entry_{1:k}    (5.7)

We can then choose a combination of compatible entry and exit templates. This is done by minimizing the sum of distances for the exit and entry templates with respect to the respective task descriptors, using the distance metrics defined in Eqs. (5.8) and (5.9). An example is shown in Figure 5.8(b), where same-colored boxes indicate compatible footstep strategies for entry and exit across a task. Entry strategy (2) and exit strategy (1) are chosen as the final pair because they give the least sum of distance metrics for a compatible pair of footstep strategies. We use k = 5 in all our experiments.

A distance metric is used to rank the suitability of templates from the library with respect to the queried transitions. In order to achieve this we partition the exit and entry task descriptors for all the entry and exit task templates respectively using a kd-tree. This allows for efficient retrieval of the footstep templates from the database which have a task transition similar to the queried task transition.
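A minimal sketch of this compatible-pair selection, assuming hypothetical candidate records that expose a precomputed distance to the queried transition and the stance (left/right) of their boundary footstep; this illustrates the selection rule only, not the actual implementation:

```python
from typing import List, Optional, Tuple

def select_template_pair(entries: List[dict],
                         exits: List[dict]) -> Optional[Tuple[dict, dict]]:
    """Choose a compatible entry/exit template pair from the k-best candidates.

    Each candidate dict carries:
      'dist'   - its distance to the queried transition (Eq. 5.8 or 5.9)
      'stance' - stance of the boundary footstep, 'L' or 'R'
    An entry ending on a left stance must be followed by a right-stance
    exit footstep, and vice versa; among compatible pairs we minimize
    the sum of the two distances.
    """
    best, best_cost = None, float("inf")
    for entry in entries:
        for exit_ in exits:
            if entry["stance"] == exit_["stance"]:
                continue  # incompatible: same swing side across the task
            cost = entry["dist"] + exit_["dist"]
            if cost < best_cost:
                best, best_cost = (entry, exit_), cost
    return best

# The thesis uses k = 5 candidates per phase; a tiny example with k = 2:
entries = [{"id": "e1", "dist": 0.30, "stance": "L"},
           {"id": "e2", "dist": 0.10, "stance": "L"}]
exits = [{"id": "x1", "dist": 0.05, "stance": "L"},
         {"id": "x2", "dist": 0.20, "stance": "R"}]
pair = select_template_pair(entries, exits)
# e2 (0.10) pairs with the right-stance exit x2 (0.20): total cost 0.30
```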
We use the following distance metric for recovering the k most suitable exit templates:

    D_exit(Γ, Γ') = w_d d_n + w_e |α_c − α_c'|    (5.8)

where d_n is ||x^c_n − x^c_n'||, x^c_n' is the vector pointing towards the next task as measured from the coordinate frame of the current task belonging to a task transition in the template library (Γ'), and w_d = 0.001 and w_e = 0.999 are weighting factors. Similarly, for entry templates:

    D_entry(Γ, Γ') = w_d d_c + w_e |α_n − α_n'|    (5.9)

where d_c is ||x^n_c − x^n_c'||, and x^n_c' is the vector pointing towards the current task as measured from the coordinate frame of the next task belonging to a task transition in the template library (Γ').

The use of D_exit and D_entry as the distance metrics allows for selecting task-aware footstep templates for both exiting and entering tasks. For example, in Figure 5.4(a) we show a schematic of how the exit task footstep strategy shown in the blue rectangle is significantly different from that shown in Figure 5.4(b). In this example, using the relative distance of the next task with respect to the current task enables reconstructing such varied footstep strategies, which preserve the context of the tasks these strategies arise from.

Since low effort tasks exhibit a high degree of co-articulation with any follow-up tasks, in order to accommodate follow-up tasks in our framework we use a combined transition vector built from the current task transition and the follow-up task transition. This combined vector is used both for storing templates in the kd-tree and, later, for choosing the most suitable template while planning a low effort task, as shown in Figure 5.6(b). The choice of entry template is affected by the co-articulation associated with a follow-up task.
Hence, the modified distance metric for task entry while planning for co-articulation takes the following form:

    D^c_entry(Γ, Γ') = w_d d_c + w_f d_f + w_e (|α_n − α_n'| + |α_f − α_f'|)    (5.10)

where d_f is ||x^n_f − x^n_f'||, x^n_f is the vector pointing towards the follow-up task transition of Γ, as measured in the coordinate frame of the next task, x^n_f' is the corresponding vector for the follow-up task transition of Γ' in the template library, α_f is the effort required for the follow-up task, α_f' is the corresponding effort value for the follow-up task from the template library, and w_f = 0.0005 is the weight for the term corresponding to the follow-up task.

The use of D^c_entry as the distance metric for task entry allows for generating motions which correctly reconstruct co-articulated templates present in the data. For example, as shown in Figure 5.15(a), reconstructed motion for low effort tasks can perform a task "in passing" when the low effort task is on the way towards performing a high effort task. Similarly, for a scenario where a low effort task lies on the left of the character followed by a high effort task on its right, the reconstructed motion uses a short side step, while leaning over to the left, in order to perform the quick low effort task, as shown in Figure 5.15(b), before proceeding to the high effort task on its right. Hence the character already prepares for the follow-up high effort task on the right while performing the low effort task on the left. Such co-articulation increases the realism of task-specific locomotion and is neglected in state-of-the-art techniques.

Figure 5.8: Choice of the best pair of templates for entry and exit from the k most suitable footstep templates. (a) Task entry for turn-around ends with a single stance. (b) Template combination selection; distance metric values for queried templates are shown.

Having detailed the template selection mechanism, we can now describe the entire template-based footstep planning process.
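Under the definitions above, the three distance metrics can be sketched directly. Here a transition Γ is represented as a small dict (a hypothetical layout, not the thesis data structure) holding planar relative-task vectors and binary effort flags:

```python
import math

# Weights from the text: wd = 0.001, we = 0.999, wf = 0.0005.
WD, WE, WF = 0.001, 0.999, 0.0005

def norm(a, b):
    """Euclidean distance between two 2D points (planar task locations)."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def d_exit(q, t):
    """Eq. (5.8): compare queried and stored exit transitions.

    'next_in_cur' is x_n^c (next task in the current task's frame);
    'a_c' is the current-task effort flag in {0, 1}."""
    return WD * norm(q["next_in_cur"], t["next_in_cur"]) + WE * abs(q["a_c"] - t["a_c"])

def d_entry(q, t):
    """Eq. (5.9): 'cur_in_next' is x_c^n (current task in the next task's frame)."""
    return WD * norm(q["cur_in_next"], t["cur_in_next"]) + WE * abs(q["a_n"] - t["a_n"])

def d_entry_coarticulated(q, t):
    """Eq. (5.10): adds the follow-up vector x_f^n and effort terms."""
    return (WD * norm(q["cur_in_next"], t["cur_in_next"])
            + WF * norm(q["fol_in_next"], t["fol_in_next"])
            + WE * (abs(q["a_n"] - t["a_n"]) + abs(q["a_f"] - t["a_f"])))
```

With w_e dominating, templates with a mismatched effort flag are pushed to the back of the ranking regardless of spatial similarity, which matches the intent of the metric.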
Using the task transition (Γ), we can select the most suitable footstep strategy template for task exit (Eqs. (5.6) and (5.7)), as described in Lines 5 and 9 of Algorithm 2. Using the current character configuration and the selected template we can then generate a footstep plan for the exit task phase. Next, the locomotion phase is planned. The locomotion task phase uses footstep segments tagged with the "Walk" category. The first segment used for locomotion is chosen such that its length is most similar to the last footstep in the exit task phase, if a turn-and-step exists. Otherwise, we use a "preferred" left or right footstep motion segment for generating a locomotion footstep plan (Line 6). Planning for the locomotion phase is terminated when the enter task criterion is satisfied, i.e., one of the planned locomotion footsteps lies within the enter task radius of the next task. Finally, we select the most suitable footstep strategy template using Eq. (5.4) and the distance metrics defined in Eqs. (5.8), (5.9) and (5.10) for the enter task phase (Lines 7 and 2).

Figure 5.9: Comparison of an unoptimized task-specific footstep plan with an optimized plan. The character is directed to exit from a writing task on the left, represented by the sphere, and sit on a box, located on the right.

While the footstep plan provided by the templates is a good starting point, further optimization is required for several reasons. The template-based footstep plan is limited in its ability to produce "good" footsteps, as shown in Figure 5.9(a), due to the use of a sparse template library. Also, transitions between exit, locomotion and entry phases need special care to generate natural foot stepping behaviors. Please refer to the video for an example of a motion reconstructed from an unoptimized footstep plan. Hence, we further optimize the generated template-based footstep plan to produce an optimal footstep plan, as shown in Figure 5.9(b).
The optimized plan satisfies the desired task description and helps in the reconstruction of natural full-body motion.

5.4 Optimization

An online optimization procedure is used to adapt the template-based footstep plan to satisfy constraints such as desired heading location, orientation, smooth step length variation, etc. We perform the optimization in phases, namely the exit task, locomotion, enter preparation, and enter task phases, as shown in Figure 5.10. Using this task-aware motion phase structure along with the set of objective functions described below allows for the synthesis of generalized task-specific footstep plans.

In order to adapt the template-based footstep plan to the desired location and task constraints, we optimize for data prior, step smoothness, heading orientation, distance from goal, and average local step location objectives (Eq. 5.11). We use BFGS as our optimization strategy. Analytic gradients of the objective functions are provided to the optimization routine. We now describe each of the objective functions used in the optimization procedure.

    f = w_d f_d + w_s f_s + w_o f_o + w_dg f_dg + w_l f_l    (5.11)

Figure 5.10: Optimization phases.

5.4.1 Data Prior Objective

To produce a footstep plan which satisfies task constraints while remaining as close to the data as possible, we use the following objective function, which penalizes deviation from the data:

    f_d = Σ_{i=1}^{n} ((x^subp_i − x^subp_{i,d})² + w_a (y^subp_i − y^subp_{i,d})² + w_θ (θ_i − θ^d_i)²)    (5.12)

where x^subp_i and y^subp_i are the components of the step vector for the current footstep in the coordinate frame of the template (i.e., the x axis pointing from the first footstep in the template to the last footstep, or to the next goal location in case the template consists of a single footstep, and the z axis pointing along the world up direction), and x^subp_{i,d} and y^subp_{i,d} are the corresponding components of the ith footstep's related footstep from data, in the same coordinate frame.
Here w_a is a weight penalizing the step length component along the y axis of the template coordinate frame; we set w_a = 2 in all our experiments. w_θ is a weighting factor for the yaw term comparing the optimized footstep with the footstep from the data (θ_i and θ^d_i); we set w_θ = 0.4 in all our experiments.

5.4.2 Smooth Step Objective

It is desirable to have a smooth variation in the generated footstep lengths, as humans show a preference for minimizing energy expenditure by minimizing overall acceleration during the course of locomotion. This can be achieved by a smooth variation of step lengths. Hence, the smooth step objective takes the following form:

    f_s = Σ_{i=1}^{n} (L_i − L_{i−1})²    (5.13)

where L_i and L_{i−1} are the step lengths of the ith and (i−1)th footsteps respectively.

5.4.3 Average Feet Orientation Objective

This objective tries to orient the footsteps in the heading direction. However, the original structure of the footstep strategy template must be preserved. This can be achieved if the contribution of this objective remains low for the initial footsteps in the template and increases with each footstep in the plan. Hence, the optimized template should gradually deviate from the original template as the strategy executes, i.e., footsteps at the beginning of the optimized template maintain orientations similar to the original template whereas the later footsteps gradually show a preference for the desired heading direction via their orientation. This is encapsulated in the objective function as follows:

    f_o = Σ_{i=1}^{n} α^{n−i} (θ_i − θ_g)²    (5.14)

where θ_i is the orientation of the ith footstep and θ_g is the angle the next task makes with the first footstep in the current template.
We use α = 0.9 in all our experiments.

5.4.4 Distance From Goal Objective

For the Enter Preparation phase, which we describe in more detail below, we incrementally penalize the footsteps for their distance from the first predicted footstep location of the enter task footstep strategy template:

    f_dg = Σ_{i=1}^{n} α_dg^{n−i} d_{e,i}²    (5.15)

where d_{e,i} is the distance between the coordinates of the ith footstep and the first footstep in the enter task footstep strategy template, and α_dg = 0.9 is a weighting factor.

5.4.5 Average Feet Location Objective

During the optimization process we want to minimize straying from the heading direction of the current template (the vector between the first and last footsteps of the current template, or between the first footstep of the template and the next task location in case there is only a single footstep in the template). Hence, we penalize the sum of squared distances d_i of the optimized footsteps from the heading direction via the following objective:

    f_l = Σ_{i=1}^{n} d_i²    (5.16)

Transitioning from the locomotion to the enter task phase requires some preparation during the locomotion phase. During locomotion, when one of the planned footsteps satisfies the entry criterion for transitioning to the enter task phase, we select the most suitable template for entering the task. However, the enter task footsteps are planned in the local coordinate frame of the desired next task. Hence, in order to match the last locomotion footstep with the first enter task footstep, additional locomotion footsteps might be introduced. For example, if the enter task template starts with a right swing footstep, and the locomotion phase also terminates with a right swing footstep, then we introduce an additional left swing footstep in the planned locomotion footsteps.
Additionally, if the transition footstep length between the enter task footsteps and the last locomotion footstep is larger than a threshold, two additional locomotion footsteps are introduced at this point.

We then define the Enter Preparation phase. Since the last few footsteps belonging to either the locomotion or exit task phase must be optimized to ensure that smooth stepping is observed during the transition to the enter task phase, the last n_r planned footsteps belonging to either the locomotion or exit task phase, where n_r = 3 in all our experiments, are considered to be in an Enter Preparation phase, as shown in Figure 5.10. The footsteps in the Enter Preparation phase are re-optimized. This produces a locomotion footstep plan that gradually prepares for the upcoming footstep strategy template associated with task entry.

Figure 5.11: Feet trajectory by warping the closest motion segment from data. Grey footsteps and curves represent the original swing foot trajectory. Black footsteps and curves represent the warped swing foot trajectory. Root motion is also warped using a transformation of the swing foot warp along with a transformation to match the incoming root trajectory.

Since the various objectives have different relative importance during each phase, the optimization uses different weights for the objective functions discussed above during each phase. Our system uses empirically determined phase-dependent weighting.
The weights used are the same for all our results and do not require additional tuning for use with different task categories. We tabulate the weights used for each phase in Table 5.2.

Table 5.2: Weights for objective functions.

  Objective Function   Exit Task   Locomotion   Enter Task Prep   Enter Task
  Data Prior           0.4         0.4          0.4               0.4
  Smooth Step          0.1         0.2          0.1               0.1
  Avg. Orientation     0.4         0.2          0.1               0.0
  Distance From Goal   0.0         0.0          0.3               0.5
  Avg. Location        0.1         0.2          0.1               0.0

5.5 Full Body Motion Generation

Figure 5.12: The IK system takes feet, root and task-related transforms as end effectors. The algorithm is warm-started with a data-driven pose.

The task-aware structure of the generated footstep plan and its optimality with respect to the previously described objective functions (see Section 5.4) allow for the use of simple and computationally efficient full body motion reconstruction techniques while still achieving high fidelity for the reconstructed motion. We further simplify full body motion reconstruction via the use of a multi-stage reconstruction approach to generate full body motion from the footstep plan, which can be described at a high level as follows. As shown in Figure 5.2, we begin by generating feet trajectories using the footstep plan and the motion segments associated with each of its footsteps. Root motion trajectories are then produced by suitably warping the associated root motion trajectories. Feet and root trajectories, along with task constraints, are then used to drive the end effectors for a full body IK procedure that warps the full body motion from the associated segment in the database to the desired footstep plan and task requirements. We will now describe the full body motion generation technique in more detail.

Full body motion reconstruction begins by first reconstructing feet trajectories from the generated footstep plan.
Each footstep in the generated plan has an associated task phase, category (heel pivot, toe pivot, side step, turn and step, forward/backward step, or walking step) and an associated motion segment from the library of templates. We spatially warp the swing foot trajectory from the associated motion segment to the start and end locations and orientations specified by the task-specific footstep plan. A footstep can either be sliding or have an airborne component. For footsteps which have an associated airborne phase, we only perform the warping during the airborne phase. However, since sliding footsteps do not have an associated airborne phase, they can be warped for the entire duration of the footstep. Since the optimization produces a footstep plan which is optimal with respect to objective functions such as smooth step variation and data preservation, natural foot stepping motion is produced via the aforementioned warping technique (Figure 5.11).

The warping of feet trajectories to match the footstep plan should also affect the reconstructed root motion. We apply half the swing foot warp to the root motion in order to compensate for the warped swing foot trajectory. We also use root motion warping to match the incoming root motion trajectory in order to create smooth transitions between successive segments of the reconstructed motions. This, when combined with foot trajectory warping, creates believable transitions between motion segments. The feet and root transforms, along with task constraints, are used as end effectors for an iterative IK algorithm with joint constraints and the ability to handle arbitrary joint hierarchies [6]. The IK algorithm is warm-started with a pose sampled from our database of poses associated with the current footstep and task constraints (Figure 5.12).

5.6 Results

We will now describe our motion capture setup and the results obtained using our algorithm.
Please refer to the accompanying video for a full overview of the results we generate using our current prototype.

Motion capture data was collected using 8 Vicon MX40 [2] cameras, recording motion at a frequency of 60 Hz which was then down-sampled to 30 Hz for use in our system. We used 53 markers for capturing the actor while performing various tasks, along with 4 markers to capture objects such as the box the actor was required to lift and move. The weight of the box in our setup was 10 kg.

5.6.1 Task Categories

As the use of task-specific templates is key to the planning of a task-specific motion, access to a library of templates encompassing a wide variety of tasks allows for successfully planning task-specific motions. In order to build this library of templates, we explore a representative set of categories with unique associated templates for entry and exit behavior.

We begin by motion capturing a desired set of tasks detailed in this section. The motion capture sessions are designed for capturing characteristic footstep strategies exhibited while entering and exiting tasks. We show a representation of the various motion capture setups used in Figure 5.13. Here, the starting point for a task is depicted by an orange circle, while each of the next task locations is shown by a blue circle or circular arrow. The straight arrows pointing between task locations represent the transitions between the respective tasks. Capturing data for each of the task categories utilizes one or more of these motion capture setups. We describe the details regarding the data collected using the motion capture setups depicted in Figure 5.13 for each of the task categories in Table 5.3.

Figure 5.13: Representative setups for recording various entry and exit strategies. (a) Multiple column task (front view); (b) Multiple angles of entries and exits (top view); (c) Sideways task (top view).
We will now describe each of the task categories in more detail.

Writing: We choose to explore writing as a representative task category within our framework, as a writing task captures footstep strategies characteristic of many similar everyday tasks. Also, the notion of effort can be easily established by specifying the duration or the style of writing to be performed.

Table 5.3: Description of task categories. The motion capture setups are described in Figure 5.13.

  Task Category   Motion Capture Setup   Data Size in mins.
  Writing         a, b, c                3
  Move Boxes      b, c                   1.5
  Box Sitting     b                      0.85
  Turn Around     b                      N/A

Figure 5.14: Task categories. (a) Writing task; (b) Box Sitting task; (c) Box Moving; (d) Turn Around task.

Motion capture was performed to record the behavior of an actor while performing writing tasks using the setups shown in Figures 5.13(a) and 5.13(b). Using the optimized form of a footstep plan generated from the task-strategy templates constructed from these data allows for reconstructing exit and entry strategies from any angle for a writing task.

In order to collect data for a high effort form of the writing task, motion capture was performed for a high effort task where the actor was asked to draw several circles at pre-specified locations on the whiteboard using the setup shown in Figure 5.13(a). This figure depicts a front view of the task setup. Here the actor starts off by writing at the location specified by the first circle on the left in the figure, then progressively writes in each of the subsequent columns, followed by returning to perform a writing task at the location of the starting circle. Another set of data was recorded using the setup shown in Figure 5.13(b), which shows a top view of the setup used. Here, the circle represents the actual task location, and the arrows represent turnaround locations. We also utilize footstep strategies associated with turning around, as specified later in this section.
For capturing low effort tasks, the actor was instructed to either tap within a box (low effort) or draw several circles (high effort) in a pre-specified sequence. The task setup shown in Figure 5.13(c) was used for recording this data. The actor was asked to start performing the task at the first specified location, followed by the tasks specified at the subsequent locations. Hence, the actor was always aware of both the next and the follow-up tasks while performing the current task. This allowed for the capture of co-articulation behavior which can arise while performing low effort tasks.

In total, we collected nearly 3 minutes of motion capture data for the writing task. An example of a result generated for a writing task is shown in Figure 5.14(a).

Box Sitting: Sitting on a box requires unique footstep strategies which are markedly different from those observed in a writing task, both while entering and exiting the task. For example, for a sitting task, when approaching the target, the actor must first turn around before entering a sitting position. Hence, due to the unique nature of the associated footstep strategies, we include it as a separate category within our task-specific framework.

All sitting tasks are considered to belong to a high effort category due to the lack of a quick and low effort style associated with sitting on a box. We used nearly 50 seconds of motion capture data for the sitting on a box task. We show an example of a reconstructed footstep plan and generated pose for this task in Figure 5.14(b).

Lift and Move Boxes: Tasks such as lifting and moving boxes also have unique associated footstep strategies. The inherent high effort nature of such tasks, coupled with the high effort locomotion associated with carrying heavy boxes, generates unique foot stepping strategies. According to our observation, foot slides and pivots are barely present while performing a high effort task such as lifting and moving heavy objects.
The use of the context in which these tasks were captured allows our framework to reconstruct these associated characteristics.

Nearly 1.5 minutes of motion capture data were used for the lifting and moving boxes task. The weight of the box the actor was required to move was 10 kg. The motion capture setups shown in Figures 5.13(b) and 5.13(c) were used for this task. We show an example of a reconstructed footstep plan and generated pose for this task in Figure 5.14(c).

Turn Around: A task-specific framework can also be used to perform certain motion tasks. We demonstrate this via a turn-around task. No data was explicitly collected for this task; however, during the process of capturing some of the above tasks, footstep strategies for turn-around were also captured due to the nature of the motion capture setups. For example, footstep strategies for a turn-around task were captured when a setup such as the one shown in Figure 5.13(b) was used for a writing task or a box sitting task. We can represent the act of turning around as a task within our system. Hence, the task-based structure can also be effectively used for such motion tasks.

In Figure 5.14(d), the character is shown executing a turn-around task. Here, the character exits from a writing task on the left of the whiteboard and walks towards the turn-around task, represented by a circular arrow, in order to execute it.

A sparse database of footstep strategy templates results from the motion capture data collected for the various aforementioned task categories. This library of templates is used to build an initial rough plan for task achievement using template-based footstep planning, as described in Section 5.3. The resulting plan is then fine-tuned to the desired task and stepping requirements using the optimization procedure described in Section 5.4 to generate the final task-specific footstep plan.
The final footstep plan is then used to generate the resulting full body motion for the desired task pairs.

5.6.2 Task Effort

Footstep strategies while entering and exiting the next task can be influenced by certain parameters of the next task, such as duration, force and precision requirements, along with the relative location of the follow-up task. This influence results in co-articulation between tasks. In order to allow for the synthesis of co-articulated footstep strategies, we parameterize the notion of task effort as follows. Tasks which seemingly have a higher degree of force and precision requirements, or which require a longer duration to execute, are deemed to be high effort in our system, whereas those with low requirements in these parameter domains are deemed to be low effort. We manually perform this classification on an empirical basis and assign a value of 1 for high effort and 0 for low effort tasks in the task transition (Γ) for the current, next and follow-up tasks.

People exhibit varying degrees of co-articulation between tasks, depending on the effort required for the next task and the relative location of the follow-up task. For example, in our observation, if the relative location of a low effort next task, such as tapping on a whiteboard with a marker, is in the same general direction as the follow-up task, then the low effort task is often performed "in the passing", as shown in Figure 5.15(b). However, if the follow-up task location is in the opposite direction to the direction of approach for the next task, then the entry strategy can be markedly different, as shown in Figure 5.15(a). In the shown example, the character performs a small left side step while leaning left to execute the low effort task, followed by turning and walking towards the high effort task on the right. Hence, planning for task effort requires knowledge of both the effort required for the next task and the location of the follow-up task with respect to the next task, as described in Section 5.3.2.
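The effort parameterization and its influence on entry strategies can be sketched as a toy model. This is a hedged illustration only: the numeric thresholds in `effort` and the dot-product direction test in `entry_strategy` are invented for this sketch; the thesis assigns the 0/1 effort values manually and empirically per task.

```python
import math


def effort(duration_s, force_n, precision):
    """Toy 0/1 effort classification: long, forceful or precise tasks
    count as high effort (1). Thresholds are illustrative assumptions."""
    return 1 if (duration_s > 5.0 or force_n > 20.0 or precision > 0.5) else 0


def entry_strategy(next_effort, approach_dir, followup_dir):
    """Choose an entry strategy for the next task from its effort value
    and the direction of the follow-up task relative to the approach."""
    if next_effort == 1:
        return "full entry"  # high effort tasks always get a dedicated entry
    dot = sum(a * b for a, b in zip(approach_dir, followup_dir))
    norm = math.hypot(*approach_dir) * math.hypot(*followup_dir)
    if norm > 0 and dot / norm > 0.0:  # follow-up in the same general direction
        return "in the passing"
    return "side step and lean"        # follow-up lies behind the approach


# Tapping on a whiteboard (short, light, imprecise) is low effort:
print(effort(1.0, 2.0, 0.1))                   # → 0
print(entry_strategy(0, (1, 0), (0.9, 0.3)))   # → 'in the passing'
print(entry_strategy(0, (1, 0), (-1.0, 0.2)))  # → 'side step and lean'
```

The two `entry_strategy` calls mirror the Figure 5.15 scenarios: the same low effort task is entered in passing when the follow-up lies ahead, and with a side step and lean when it lies behind.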
Input : List of task descriptors (location, category, effort)
Data : Motion capture clips segmented into steps, each tagged with one of the labels: Heel Pivot, Toe Pivot, Side Step, Turn and Step, Walk, Forward Step During Task, Backward Step During Task
Output : Set of footstep locations and orientations, associated motion segments, full body motion

Algorithm TaskSpecificLocomotion()
    Initialize a library of "Footstep Strategy Templates" for task exits and task entries from the following information for each example task pair:
        - Sequence of motion segments exiting the current example task and entering the next example task
        - Associated task descriptors for the exit and enter tasks
    1  Compute task entry radii for each task category.
    2  foreach Task pair Ti−1 and Ti do
    3      known: Foot plants from executing task Ti−1.
    4      Compute task description vectors for the queried exit and enter tasks.
    5      PlanTaskExitPhase()
    6      PlanLocomotionPhase()
    7      PlanTaskEntryPhase()
    8      Generate warped feet motion and root motion using the footstep plan and associated motion segments.
    9      Generate full body motion using poses from the associated motion segments and full-body inverse kinematics as applied to the root, hands and feet.
       end

Procedure PlanTaskExitPhase()
    Select exit task footstep strategy template.
    Optimize()

Procedure PlanLocomotionPhase()
    repeat
        Generate locomotion step.
        Optimize()
    until within task entry radius of Ti
    Re-optimize the last three footsteps to better match the enter task footstep requirements.

Algorithm 2: A high level description of the task-specific motion planning algorithm.
Procedure PlanTaskEntryPhase()
    Select enter task footstep strategy template.
    Optimize()

Procedure Optimize()
    Optimize the newly generated template-based footsteps for the current phase with respect to the following objective functions:
    - Data prior
    - Smooth step length variation between consecutive steps
    - Orientation towards the task
    - Minimize distance from the goal as the plan progresses
    - Average feet location to lie on the line between start and goal

Algorithm 2: A high level description of the task-specific motion planning algorithm (continued).

Effect of anthropometry : Our system generates footsteps which are commensurate with the anthropometry of the character being used for the motion planning. For example, for the same next task location, a taller character (Figure 5.16(b)) uses a side step instead of a partial turn and step like the shorter character (Figure 5.16(a)). In the figure, the blue sphere represents the next task to be executed and the red sphere represents the current task. Hence, the same task description generates two different footstep plans, each appropriate for one of the two characters with different anthropometry. This difference in footstep plans arises from the use of task-specific footstep templates. The footsteps inherently capture the anthropometry of the character via the motion retargeting applied to the input animations for each character. Such differences in modes of task execution, i.e. sidestepping vs. partial turn and step, for a similar task, cannot be achieved via simple motion graph based approaches.

5.7 Discussion

We believe that each of the task categories we have presented in this chapter is representative of a large set of tasks. We would like to extend these categories by building a generalized model for selection of footstep templates,
for task exit and entry. This can be achieved by parameterizing all the possible tasks within a category of task. These task parameters can be one or more of the following: duration, force requirements, precision, etc. Building categories of tasks, which are then parameterized within the category, will allow for a more generalized task-specific motion planning framework.

Figure 5.15: A comparison of some of the different strategies for entering a low effort task depending on the relative location of the next task. (a) Next task on the right side of a goal while approaching the goal from the left. (b) Next task on the right side of a goal while approaching the goal from the right.

The task-specific framework can also be used for planning more dynamic, parkour-style motions. Exploring the application of the task-specific framework for more dynamic motions is an interesting future direction. Parkour motions such as vaulting over a wall or rolling on the ground can be considered task-specific motions, as they require specific entry and exit strategies which involve a greater degree of ground contact with the body rather than just with the feet. The inclusion of hands, arms or other body parts as a possible step in the template-based plan could allow for generating task-specific motions in these domains.

Interactions with other characters can also be thought of as just another form of task-specific motion. Simultaneous planning of multiple characters using the phase-based structure described in this chapter is a possible solution for generating realistic multi-character animation sequences.

We would also like to expand the task-specific planning framework to further explore the weight shifts people usually perform while executing standing tasks. Shifts of the center of pressure while performing tasks will create an even richer variety in the solution space for task-specific motion planning.
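As a concrete illustration of the kind of cost the Optimize() procedure in Algorithm 2 minimizes, a toy weighted-sum objective over a footstep plan might look as follows. This is a sketch under stated assumptions: the step representation `(x, y, heading)`, the individual term formulas and the unit weights are invented for illustration, the data prior term is omitted, and angle wrapping in the orientation term is ignored.

```python
import math


def plan_cost(steps, start, goal, w=(1.0, 1.0, 1.0, 1.0)):
    """Toy footstep plan objective: smooth step-length variation,
    orientation toward the task, penalized retreat from the goal, and
    feet staying near the start-goal line. A step is (x, y, heading)."""
    positions = [start] + [(x, y) for x, y, _ in steps]
    lengths = [math.dist(a, b) for a, b in zip(positions, positions[1:])]
    smooth = sum((a - b) ** 2 for a, b in zip(lengths, lengths[1:]))

    # heading error relative to the direction toward the goal
    orient = sum((h - math.atan2(goal[1] - y, goal[0] - x)) ** 2
                 for x, y, h in steps)

    # penalize any step that moves away from the goal
    retreat = sum(max(0.0, math.dist(p1, goal) - math.dist(p0, goal))
                  for p0, p1 in zip(positions, positions[1:]))

    # distance of each foot location from the start-goal line
    gx, gy = goal[0] - start[0], goal[1] - start[1]
    gl = math.hypot(gx, gy)
    line = sum(abs((x - start[0]) * gy - (y - start[1]) * gx) / gl
               for x, y, _ in steps)

    return w[0] * smooth + w[1] * orient + w[2] * retreat + w[3] * line


straight = [(1, 0, 0.0), (2, 0, 0.0), (3, 0, 0.0)]
wobbly = [(1, 1, 0.5), (2, -1, -0.5), (3, 1, 0.5)]
assert plan_cost(straight, (0, 0), (4, 0)) < plan_cost(wobbly, (0, 0), (4, 0))
```

An optimizer fine-tuning a template-based plan would adjust the step locations and headings to reduce such a cost; the relative weights would trade off smoothness against goal-directedness.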
Figure 5.16: Effect of character anthropometry on the synthesized task-specific footstep plan. (a) Shorter character. (b) Taller character.

The use of inverse reinforcement learning to build policies for task achievement is an interesting future direction. Building a policy would allow us to generate reasonable responses in unexplored states, by performing either interpolation or some form of nearest neighbor query to determine the actions to be taken in these unexplored states in order to drive the character back towards the known state subspace. This can be used to provide a framework for automatic tuning of the weights of the various objective functions used in our framework. However, the high dimensionality and the mixture of discrete and continuous parameters pose a challenge for applying such techniques to this problem domain.

5.8 Conclusion

In this chapter we have proposed a system which can generate animation sequences for task-specific locomotion for an arbitrary sequence of tasks. Tasks are specified by choosing the desired location, orientation and category of task to be performed. The synthesized footstep patterns replicate the natural behaviors and strategies humans undertake during locomotion, while performing tasks, and while switching between them. The system allows for a directable approach to generalizing style in a task-specific framework. We hope that the use of a task-specific framework will help enhance the realism of synthesized animations and further generate interest in this promising research direction.

Chapter 6

Conclusion

6.1 Summary and Contributions

In this thesis we have presented both directable and exploratory approaches for introducing style in kinematic and physics-based frameworks. Generalization of this style to novel scenarios has also been explored.
These approaches highlight the importance of style in animation. Ranging from artistic intent to the idiosyncrasies of the performing actor, a multitude of factors contribute to the inherent styles in motions. Our research has resulted in a number of scientific contributions, which we summarize here.

In Chapter 3, we have presented a framework for diversity optimization as applied to motion reconstruction and character anthropometry for simulated skills. We developed objective functions specifically tailored to favor diversity, in order to explore both shape and motion diversity via the use of a round robin covariance matrix adaptation optimization strategy. The "distances" between motions and anthropometries were measured via the proposed pose similarity metrics. The specification for a motion task is usually under-constrained. Most approaches find a single optimal solution, typically an energy-optimal one, and reconstruct that. In contrast, we generate a set of motions that are as diverse as possible while all satisfying the desired motion task. This allows the user to explore a set of automatically synthesized and maximally diverse solutions in the process of designing a particular desired mode of task achievement.

In Chapter 4, we have investigated several possible algorithms for computing character controllers that span a Pareto-optimal front. We extended existing multi-objective optimization techniques to the problem domain of character controllers. We developed techniques to perform resampling of generations during the process of optimization, using multi-objective CMA as the optimization strategy of choice. We also developed a scaling of the objective function that produces a more uniform Pareto-optimal front, in order to produce meaningful sorting using the hypervolume of samples. Finally, we developed a framework to produce supernatural motions which are as close as possible to natural motions.
The supernatural motions were realized via the use of external forces on an as-needed basis.

In Chapter 5, we presented a method for reconstructing believable stepping behaviors as observed during task execution and switching between tasks. Humans exhibit a wide variety of stepping styles, such as side-stepping and partial and full turn-and-steps, while interacting with the environment. We developed an extended vocabulary of footsteps for expressing the rich space of observed foot stepping behaviors while performing tasks. Using a library of footstep templates, which specify footstep strategies while exiting and entering tasks, we developed a phase-based planning algorithm. Our model chooses the relevant footstep strategies based on the next and follow-up tasks and on the effort requirements of the next task to be executed. This mirrors stepping behaviors observed in real life experiments. The rough template-based footstep plan is then optimized in an online fashion to produce smooth and natural stepping behaviors. We then described a full body motion reconstruction approach which uses the generated footstep plan to produce the final task-specific full body motion.

6.2 General Discussion and Future Directions

We now make a number of general observations about current and future work in the areas of this thesis. For observations and future work that are more specifically tied to the individual contributions, please refer to the conclusions of Chapters 3, 4 and 5.

6.2.1 Crowd Sourcing Controllers

Developing a common language for authoring and sharing physics-based character controllers will help make physics-based control more approachable. This common platform should allow users to "download" skills and integrate them within their existing systems.
Similarly, there should be a system which manages the repertoire of skills for physics-based characters. When a new skill is uploaded, this system would determine the best transitions between the newly uploaded skill and the currently existing library of skills. Hence, we can gradually expand the set of skills exhibited by a virtual physics-based character through crowd sourced authoring.

6.2.2 Optimization

The use of optimization allows for automatic recovery of a desired optimal motion and style in both kinematic and physics-based domains. Objective functions are tailored to produce optimality with respect to criteria such as task achievement, motion or step smoothness, energy expenditure, etc. We used variants of CMA as the optimization strategies of choice for both the physics-based style exploration and generalization approaches described in this thesis. CMA generates a predetermined number of offspring at each generation of the evolutionary optimization strategy. However, measuring the objective value for each of these samples entails running an entire simulation and then evaluating the result to determine the fitness value for the evaluated sample. Naively sampling with a Gaussian, in the highly constrained and dynamic domains found in character control, results in most of the sampled points producing failures of the character controller. More work is required to generate samples, directly through the sampling process, which lie within the success region of the controller.

6.2.3 Discrete and Continuous Planning

Planning in a problem domain comprised of mixed discrete and continuous variables is challenging. In order to make a choice of footstep styles, which entails both discrete and continuous variables, we employed a nearest neighbor lookup strategy to determine the most suitable templates for footstep strategies, as described in Chapter 5.
The choice of footstep is discrete, whereas the warping associated with the chosen footstep is a continuous variable. This is then followed by an optimization of the generated template-based plan and a warping of the chosen footstep styles to the optimized path. Although this approach has been found to work well in the targeted problem domain, more generalized models need to be developed for use in such scenarios. This would enable a more principled solution applicable to a generalized problem domain comprised of mixed discrete and continuous variables. More work is required in order to develop more efficient techniques for planning in such scenarios.

6.2.4 Physics-based Characters in Games and Visualizations

The use of physics-based characters in real time applications like games and visualizations is a very promising use case for physics-based control. However, due to the limited processing power available in many video games, especially for background characters, physics-based approaches have seen limited use so far. With the increase in processing capabilities of modern hardware, physics-based control is starting to make its way into some recent video game titles. GPU-based simulation can perhaps help alleviate some of the performance concerns by moving physics calculations from the CPU to dedicated hardware, hence freeing the CPU for other computations.

We hope that the framework and tools we provide will help expand the vocabulary used to both express and develop style in motion. We also hope that the introduction of tools which allow artistic control over the style generated in both kinematic and physics-based animation will help further improve the state of the art in real time animation.

With the rise of indie game development and indie movie making, the need for user friendly tools for creating high quality animations is higher than ever.
Directable style can be of great benefit for democratizing high quality animation content creation. We believe the described techniques will advance the state of the art in the field of animation content creation and will stimulate further research in directions similar to the ones we have pursued.

We would like to thank the anonymous reviewers and editors for their valuable comments on the manuscripts published at various venues. We would also like to thank NSERC and GRAND for providing funding for these projects.
