Biomechanical simulation of the hand musculoskeletal system and skin Li, Duo 2013

Biomechanical Simulation of the Hand Musculoskeletal System and Skin

by Duo Li

B.Sc., Beijing Institute of Technology, 2009
M.Sc., The University of British Columbia, 2013

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in The Faculty of Graduate Studies (Computer Science)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

March 2013

© Duo Li 2013

Abstract

This thesis presents a biomechanically based hand simulator. We make contributions at two different levels: hand motion and hand appearance. We first develop a musculotendon simulator, and apply it to an anatomically based hand model. Anatomically based hand simulation is challenging because the tendon network of the hand is complicated and is highly constrained by the skeleton of the hand. Our simulator employs the Eulerian-Lagrangian strand algorithm, and introduces a 2D planar collision approach to efficiently eliminate unnecessary degrees of freedom and constraints. We show that with our method we obtain the coupling between joints automatically, and achieve the storage of energy in tendons for fast movements. Also, by injuring a tendon, we are able to obtain simulations of common finger deformities.

Although musculotendon based hand simulation produces natural hand motion, hand animation is usually observed at the skin level. We present a novel approach to simulate thin hyperelastic skin. Real human skin is a thin tissue that can stretch and slide over underlying body structures such as muscles, bones, and tendons, revealing rich details of a moving character. Simulating such skin is challenging because it is in close contact with the body and shares its geometry. We propose a novel Eulerian representation of skin that avoids all the difficulties of constraining the skin to lie on the body surface by working directly on the surface itself.
Skin is modeled as a 2D hyperelastic membrane with arbitrary topology, which makes it easy to cover an entire character or object. We use triangular meshes to model body and skin geometry. The method is easy to implement, and can use low-resolution meshes to animate high-resolution details stored in texture-like maps. Skin movement is driven by the animation of body shape prescribed by an artist or by another simulation, and so it can be easily added as a post-processing stage to an existing animation pipeline. We demonstrate realistic animations of the skin on the hand using this approach. We also extend it to simulate other parts of human and animal skin, and skin-tight clothes.

Preface

The methods and the systems presented in this thesis were mainly described in co-authored research papers:

• Shinjiro Sueda, Duo Li, Dinesh K. Pai, Anatomically-based Dynamic Simulation of the Hand, 2012 (in preparation)
• Duo Li, Shinjiro Sueda, Debanga Raj Neog, Dinesh K. Pai, Thin Skin Elastodynamics, 2013 (submitted to ACM SIGGRAPH 2013)

Chapter 3 is mainly based on the first paper. The algorithm in that chapter was designed by Dr. Shinjiro Sueda under the supervision of Dr. Dinesh K. Pai. I designed and implemented most parts of the musculotendon simulator system, and built the hand model in discussion with Dr. Shinjiro Sueda and Dr. Dinesh K. Pai. The strand modeling tool in Section 5.4.1 was mainly developed by Crawford Doran; Dr. Shinjiro Sueda and I developed the interface to integrate this tool into our system.

Chapter 4 is mainly based on the second paper. The algorithm in that chapter was designed by Dr. Shinjiro Sueda, Dr. Dinesh K. Pai, and myself. The skin simulator was developed by myself. Debanga Raj Neog and I developed the parameterization tool (Section 5.4.2) together.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
Acknowledgments
1 Introduction
  1.1 Contributions
  1.2 Thesis Outline
2 Related Work
3 Musculotendon Simulation
  3.1 Hand Modeling
    3.1.1 Challenge
  3.2 Eulerian-Lagrangian Strand
  3.3 Plane Nodes
  3.4 Results
4 Thin Skin Elastodynamics
  4.1 Spaces
  4.2 Kinematics and Discretization
  4.3 Jacobians
    4.3.1 Computation of Γ
    4.3.2 Computation of Π
    4.3.3 Computation of F
  4.4 Elastic Forces
  4.5 Dynamics
  4.6 Coupling of Skin and Body Movement
  4.7 Parameterization of the Skin
    4.7.1 Local Parameterization and Constraints
    4.7.2 Global Parameterization and Transition
  4.8 Results
    4.8.1 Performance Analysis
5 Implementation Details
  5.1 System Overview
  5.2 Musculotendon Simulator
    5.2.1 Degree of Freedom Related Classes
    5.2.2 Point Related Class
    5.2.3 Surface Related Classes
    5.2.4 Curve Related Classes
    5.2.5 Constraint Related Classes
    5.2.6 Dynamical Object Related Classes
    5.2.7 The Driver Class
  5.3 Skin Simulator
  5.4 Modeling Tool
    5.4.1 Strand Modeling Tool
    5.4.2 Parameterization Tool
6 Conclusion
Bibliography
Appendices
A Modeling Details
  A.1 Dorsal Interosseus (DI)
  A.2 Palmar Interosseus (PI)
  A.3 Flexor Digitorum Profundus (FDP)
  A.4 Flexor Digitorum Superficialis (FDS)
  A.5 Extensor Digitorum Communis (EDC)
  A.6 Intrinsic Medial (IntMed)
  A.7 Extrinsic Lateral (ExtLat)
  A.8 Lumbrical (LUM)
  A.9 Oblique Ligament (OL)

List of Tables

4.1 Notations of the state variables
4.2 Statistics of examples
A.1 Constraint planes
A.2 Nodes of DI
A.3 Nodes of PI
A.4 Nodes of FDP
A.5 Nodes of FDS
A.6 Nodes of EDC
A.7 Nodes of IntMed
A.8 Nodes of ExtLat
A.9 Nodes of LUM
A.10 Nodes of OL

List of Figures

3.1 Anatomy of the finger
3.2 Two types of constraints
3.3 Illustration of Eulerian coordinates
3.4 Plane node
3.5 Fixed plane versus blending plane
3.6 Plane node example
3.7 Joint coupling with oblique ligament
3.8 Tapping and flicking
3.9 Deformity
4.1 Spaces
4.2 The decomposition of DX
4.3 The translation component in the transition function
4.4 The rotation component in the transition function
4.5 Torus deformation
4.6 Cylinder twisting
4.7 Head movement
4.8 Hand flexing
4.9 Tight cloth sliding
4.10 Torus shaking
4.11 Dinosaur walking
5.1 Pipeline
5.2 The layers of the musculotendon simulator
5.3 The class diagram of the musculotendon simulator
5.4 The class diagram of the skin simulator
5.5 Strand modeling tool
5.6 Musculotendon modeling tool: panels
5.7 Chart segmentation tool
A.1 Overview of the tendons and bones of the index finger
A.2 Overview of the constraint objects
A.3 Legend of the nodes
A.4 Dorsal interosseus (DI)
A.5 Palmar interosseus (PI)
A.6 Flexor digitorum profundus (FDP)
A.7 Flexor digitorum superficialis (FDS)
A.8 Nodes of EDC
A.9 Nodes of Intrinsic medial (IntMed)
A.10 Nodes of Extrinsic lateral (ExtLat)
A.11 Nodes of Lumbrical (LUM)
A.12 Nodes of Oblique ligament (OL)

Acknowledgments

I would like to express my deep gratitude to my supervisor, Professor Dinesh K. Pai, for his invaluable guidance, patience, and support throughout all my thesis work. Without his enlightening instruction, kindness, and patience, I could not have completed my thesis. I would also like to thank Professor Robert Bridson for his invaluable advice as the second reader of this thesis.

I would like to extend my acknowledgment to my colleagues and friends in the Sensorimotor Systems Lab. In particular, I would like to thank Dr. Shinjiro Sueda for his endless guidance and suggestions; I learned a lot from him, in terms of both research and other aspects. I appreciate Debanga Raj Neog for being such a wonderful collaborator, and I thank Dr. Guillaume Millet for proofreading my thesis, and Ye Fan, Dr. Sang Hoon Yeo, Dr. David Levin, Dr. Bin Wang, Martin Lesmana, Misha Fain, Ben Humberston, Garrett Jones, Joshua Litven, Cole Shing, and the other lab members for their advice and collaboration.

I would also like to express my gratitude to my friends at UBC, who shared their happiness with me and walked with me through all the challenges. Last but not least, I would like to dedicate this thesis to my parents, for their tireless support, care, trust, and encouragement throughout my life.

Chapter 1  Introduction

The hand is one of the most important interfaces connecting humans with the outside world, and it attracts attention in many fields, including biomechanics and computer graphics. After the face, the hands are the most expressive, complex, and visible part of a character's anatomy, and, more than even the face, they have the most complex and visible interactions with other objects. Therefore, getting their motion photorealistically "right" demands far more attention than it has been given. However, simulating the hand is challenging due to its complicated anatomical structure.
In this thesis we decompose the realism of the hand into two levels: hand motion and hand appearance. The hand motion level mainly focuses on generating natural hand movement, while the hand appearance level aims to offer detailed and realistic skin animation. Accordingly, our system consists of a musculotendon based hand simulator and an Eulerian elastodynamic skin simulator.

Musculotendon Simulation. The hand shows complex motions due to its complicated anatomical structure. Although simulations of hands and grasps have consistently received attention in the graphics community, modeling and simulating the dynamics of the complex tendon network of the hand has remained relatively unexplored. Most previous simulation techniques for the hand have been based on rigid links with either joint torques [Kry and Pai, 2006; Liu, 2009; Pollard and Zordan, 2005] or line-of-force muscles [Albrecht et al., 2003; Johnson et al., 2009; Tsang et al., 2005]. In the real hand, however, the motions of the digital phalanges are driven through tendons passing through a complex network of sheaths and pulleys by muscles originating in both the hand and the forearm. This arrangement has an important effect on the compliance and coupling of the fingers.

In our approach, we model each tendon rather than using joint torques or moment arms, and this gives us two important advantages: coupling and energy storage. With our approach, we get proper joint coupling for free. As a simple example, consider the coordinated motion of the joints shown in Figure 3.2b. The two distal joints of the finger (PIP and DIP) flex and extend in a coordinated fashion not because of synchronized activation signals computed by the brain, but because of the arrangement of tendons and ligaments in the finger.
In our simulator, pulling on a single tendon (the flexor digitorum profundus) achieves this result, whereas with a torque-based approach we would need to manually coordinate the torques at these two joints. Another example is the coupling between the extensor tendons on the back of the hand. Although this lack of complete independence between the fingers is partly due to neural control, mechanical coupling has been hypothesized to play a significant role [Lang and Schieber, 2004]. Some results of our simulation that illustrate this mechanical coupling are shown in Figure 3.8a.

The other advantage of using musculotendons is that we can produce fast and explosive motions by storing energy in stretched tendons. When we flick a coin or a marble, we lock the finger tip, activate the muscles, causing the tendons, which are highly stiff, to stretch, and then unlock the finger tip, releasing the stored potential energy. Such effects are difficult to obtain with joint torques, which would require high-gain torques that must be tuned manually.

We are also able to simulate hand deformities, which has applications not only in surgical planning and medicine, but also in computer graphics. An increasing number of virtual character designs are based on deformities or injuries, and the ability to procedurally produce anatomically based abnormal characters may prove useful, since obtaining the data can be a challenge.

Elastodynamic Skin. Although anatomically based musculotendon simulation provides a promising basis for hand motion, the overall animation can look odd if the skin simulation is not carefully taken into account. For example, the skin around the neck slides smoothly during head movement, while it stays at rest during swallowing. Even when the skin mesh is animated carefully with a standard skinning method, as in Figure 4.7c, the whole animation fails to capture reality because the skin texture moves inappropriately.
This may be why important details such as veins, wrinkles, and tattoos are usually ignored in character animation. An anatomically based approach is appealing for skin simulation. Of course, carefully simulating the skin with a mass-spring system [Albrecht et al., 2003; Terzopoulos and Waters, 1990; Wilhelms and Gelder, 1997] or FEM, as well as the underlying soft tissue [McAdams et al., 2011; Sueda et al., 2008; Teran et al., 2003], can capture detailed skin motions. However, such methods come at a price.

To avoid confusion, we will refer to the subcutaneous soft tissues such as muscles, fat, and tendons that give a character its 3D shape as the body, and reserve the word skin for the thin anatomical skin that covers the body. This anatomical skin in humans and animals is a thin membrane, 2-3 mm thick in most areas, and is especially thin on visible areas such as the face and hands. Skin has evolved to be in close contact with the body and yet not impede movement, sliding and stretching over the body with little resistance. Our goal is to capture these essential features of skin in an efficient and robust simulation.

However, the close contact between skin and body presents serious difficulties for the usual methods used for simulating cloth and other thin structures. These methods discretize the thin structure into a mesh that has to interact with a separate mesh representing the body, presenting significant challenges for detecting and handling contact between these intimately osculating meshes.

Our simulation decouples skin motion from the underlying body deformation, and is oblivious to the choice of body mesh deformation model. The vertex positions can be driven by any means, including linear or dual-quaternion skinning, interactive mesh editing, motion capture, or even another physically based simulation. We attach a 2D Eulerian simulation directly to the deforming body mesh.
Since the skin is automatically constrained in such an Eulerian setting to move on the body and not separate from it, contact processing can be eliminated entirely. With our proposed method, the skin movement looks more realistic (lower figures in Figure 4.7) even with the same mesh deformation.

1.1 Contributions

In all, our contributions include the following:

• We have developed a comprehensive biomechanical system including rigid body dynamics, an Eulerian-Lagrangian strand based musculotendon simulator, and an Eulerian skin simulator. Our system also includes strand modeling tools and rendering interfaces.

• We have proposed and implemented a novel Eulerian skin simulation algorithm. This algorithm can be easily generalized to simulate other types of constrained deforming surfaces, such as cloth.

• We have built a musculotendon hand model based on real hand physiology. This model can be used for further biomechanics or graphics research on the hand.

1.2 Thesis Outline

The thesis is organized as follows. In Chapter 2, we review related work on hand movement and skin simulation. Chapter 3 describes our musculotendon based hand simulation; Chapter 4 presents our novel skin simulation algorithm; Chapter 5 provides an overview of our biomechanical simulation system and its implementation details; and Chapter 6 concludes the thesis. Finally, we include the details of the hand model in Appendix A.

Chapter 2  Related Work

Previous work on the hand in computer graphics has focused on geometry acquisition and retargeting of the whole hand [Huang et al., 2011; Kurihara and Miyata, 2004; Ying et al., 2007], musculotendon simulation based on kinematic muscle paths and moment arms [Albrecht et al., 2003; Tsang et al., 2005], and grasping and interaction with the environment [ElKoura and Singh, 2003; Kry and Pai, 2006; Liu, 2008; Pollard and Zordan, 2005].
Since we aim to build a complete biomechanically based hand simulator covering both realistic hand movement and realistic hand appearance, our review of related work falls into two parts: musculotendon based hand simulation and skin simulation.

Musculotendon Simulation. In our work, we focus on the movement of the fingers and not of the hand and the forearm. Since the bones are represented as rigid bodies in our simulator, it should not be difficult to append the tendinous hand from our simulator to an arm composed of rigid body links, or to apply the control strategies for grasping presented in these previous works.

Our method is most closely related to the biomechanical strand model introduced by Sueda et al. [2008]. We extend the constrained strand framework of Sueda et al. [2011] to model tendons, which gives us more robust handling of constraints between tendons and bones. Without the constrained strand framework, even relatively simple scenarios, such as the flexor pulleys shown in Figure 3.2a, are difficult to model, since the discretization of the tendon strand cannot always match the discretization of the pulley constraints. As far as we know, all previous dynamic strand models, including Sueda et al. [2008], suffer from this difficulty. The regions between the pulleys in Figure 3.2a are also difficult to model, since constraining the degrees of freedom (DoFs) in these areas may rigidify the strand.

Biomechanical simulators have been used successfully in computer graphics on many other parts of the body, such as the face [Sifakis et al., 2005], the neck [Lee and Terzopoulos, 2006], and the upper body [Lee et al., 2009]. These, and other simulators from the field of biomechanics, cannot fully handle the complex routing constraints present in the hand, and the dynamic coupling of musculotendons and bones. In line-of-force models [Damsgaard et al., 2006; Delp et al., 2007; Garner et al., 2000; Johnson et al., 2009; Valero-Cuevas et al., 2007], muscles and tendons are simulated quasistatically as kinematic paths, with no mass or inertia, and do not fully account for the shape of the biomechanical structures. More importantly for hand simulation, handling of routing constraints, including branching and kinematic loops, is difficult with these methods. Simulators based on solid mechanics models, such as spline volumes [Ng-Thow-Hing, 2001], the finite volume method [Teran et al., 2003, 2005], or the finite element method [Blemker and Delp, 2005; Chen and Zeltzer, 1992; Kaufman et al., 2010; Sifakis et al., 2005; Zhu et al., 2001], are also not ideal for hand simulation, since the musculotendons of the hand are thin and anisotropic, and would require many disproportionately small elements. Various dynamic models from the computer graphics community [Bergou et al., 2008, 2010; Spillmann and Teschner, 2009] could potentially be used for musculotendon simulation, but these models were designed for use in free-floating configurations, and do not work well in the highly constrained situations present in the hand. A hybrid approach has also been used, where the muscles are abstracted as piecewise linear actuators that drive the volumetric muscle based on solid mechanics [Lee et al., 2009; Teran et al., 2005]. Our method could be used to replace the underlying linear muscle model of these approaches.

Skin Simulation. Because of its importance, a large body of research in computer animation is related to skin; we discuss only the closest work here. We cannot discuss all work aimed at simulating generic solids, fluids, and cloth, but we refer to the closely related techniques in the rest of the thesis, in context. Broadly speaking, skin models can be classified as kinematic, example based, or physically based.

Kinematic skin.
Perhaps the most standard approach to skinning is to compute the vertex positions of the skin mesh from the configuration of the skeleton. The most widely used technique is linear blend skinning (or skeletal-subspace deformation) [Magnenat-Thalmann et al., 1988] and its refinements (e.g., [Kavan et al., 2008]).

Example based skin. Example based skinning is attractive since it can provide rich details from physical measurements [Beeler et al., 2011; Huang et al., 2011], animator input [Lewis et al., 2000; Mohr and Gleicher, 2003], physical simulation [Kry et al., 2002], mesh animations [James and Twigg, 2005], or motion capture [Park and Hodgins, 2006]. The main difficulties with these approaches are the complexity of acquiring the required data and poor generalization, especially to characters that exist only in our imagination.

Physically based skin. With few exceptions, previous work in this area focuses on simulating the deformation of soft tissues of the body, and not on how skin moves on the body. Notable examples include generic soft tissues [McAdams et al., 2011], muscles [Teran et al., 2003], and tendons [Sueda et al., 2008]. These methods are complementary to our work, and could provide the detailed subcutaneous shapes on which our skin slides. One of the most popular approaches to skin modeling is through passive mass-spring networks attached to muscles that drive the deformation. The seminal work of Terzopoulos and Waters [1990] on facial animation included deformable skin modeled with a mass-spring network. Similar approaches have been applied to simulating the skin of animals [Wilhelms and Gelder, 1997] and human hands [Albrecht et al., 2003], and for general texture mapping [Maillot et al., 1993]. More sophisticated finite element models have also been used. Membrane models have been used to simulate wrinkles [Wu et al., 1996], and for performance-driven animation [Choe et al., 2001].
Thin shell models [Grinspun et al., 2003; Qin and Terzopoulos, 1996] extend membrane models by including resistance to bending; this could be useful for simulating thicker skin. Gourret et al. [1989], Sifakis et al. [2005], and Lee et al. [2009] model the skin as volumetric soft tissue along with muscles. Using a volumetric approach is advantageous or required for many applications, e.g., skin surgery [Sifakis et al., 2009]. To our knowledge, no prior work in graphics has simulated skin or any thin elastic solid in the Eulerian setting that is the key to our method. The Eulerian setting is standard for simulating fluids, but has only recently been used for simulating solids in 1D [Sueda et al., 2011] and 3D [Levin et al., 2011]. Our method significantly extends this work to include 2D manifolds of arbitrary topology, allows arbitrary triangular meshes instead of regular grids, and integrates easily with standard animation pipelines.

Chapter 3  Musculotendon Simulation

3.1 Hand Modeling

The hand has a very complex anatomical structure. To simplify the modeling, we ignore the complicated thumb and mainly focus on the remaining four fingers. Figure 3.1 shows the complexity of the tendon network of the finger. From proximal to distal, the four bones of the finger are the metacarpal, proximal, middle, and distal bones. They are connected by the metacarpophalangeal (MCP), proximal-interphalangeal (PIP), and distal-interphalangeal (DIP) joints. The three long tendons are the extensor digitorum communis (EDC), flexor digitorum superficialis (FDS), and flexor digitorum profundus (FDP). These tendons originate from outside the hand, and are hence called extrinsic. The index and pinky fingers have other extrinsic tendons, which we ignore since their functionality is similar to that of the EDC. The three intrinsic musculotendons are the lumbrical (LUM), dorsal interosseus (DI), and palmar interosseus (PI).
Each finger has a slightly different insertion arrangement of these three intrinsic musculotendons. We simplified the model by using the arrangement of the index finger for all fingers and omitting the symmetric components on the ulnar side. The oblique ligament (OL), which spans the PIP and DIP joints, helps synchronize these two joints when the FDP is pulled. The two crossing tendons, the extrinsic lateral (ExtLat) and the intrinsic medial (IntMed), transfer tension from the extrinsic and intrinsic extensors to the PIP and DIP.

3.1.1 Challenge

Anatomically, tendons and bones are deeply coupled in the hand. Therefore, constraint handling is one of the main challenges in building a musculotendon simulator for the hand. Typically, there are two types of constraints: crossing constraints and sliding constraints.

Figure 3.1: Anatomy of the finger

Figure 3.2: (a) Crossing constraint: FDP tendon. (b) Sliding constraint: oblique ligament

Figure 3.3: Illustration of Eulerian coordinates with a stretched elastic band. (top) In a purely Lagrangian simulator, the strand material, which can be visualized as the texture, is fixed to the nodes. (middle) If we move the middle node to the left, then the material is compressed in the left segment and stretched in the right segment, with respect to the previous configuration. (bottom) If we relax the assumption that the material is fixed to the nodes (i.e., an Eulerian node), then the material will start to flow from left to right.

Crossing constraint. A tendon usually moves freely in the axial direction, but is constrained throughout its length by crossing a series of sheaths and pulleys. We call this type of constraint a crossing constraint. For example, the FDP tendon originates near the base of the finger and crosses all the joints to reach the distal bone. The pulleys (sheaths) near the bones and joints hold this tendon in place (A-1 through A-5 in Figure 3.2a).
With a purely Lagrangian approach, constraining the tendon to go through these pulleys would require the tendon to be discretized very finely near the start and end of each pulley. This incurs extra degrees of freedom (DoFs) and hence slows down the whole simulation. We tackle this type of constraint with Eulerian-Lagrangian strands in Section 3.2.

Sliding constraint. Meanwhile, some tendons are also able to slide freely over the bone surface. For example, the oblique ligament is constrained to the surface of the middle bone of the finger (the cubic node in Figure 3.2b). Accurate collision handling between the whole tendon and the bone is quite expensive, and collision detection is often the bottleneck of a physically based simulation. We handle this type of constraint with plane nodes in Section 3.3.

3.2 Eulerian-Lagrangian Strand

Our musculotendon simulator is based on the framework of Sueda et al. [2011]. Here we briefly restate the key components of Eulerian-Lagrangian strand simulation; the detailed derivation can be found in Sueda et al. [2011]. Following their notation, a strand is represented as a series of nodes, each of which contains both Lagrangian and Eulerian coordinates: (x_i; s_i). The Lagrangian coordinates, x_i, represent the world position of the node, and the Eulerian coordinate, s_i, is the material coordinate of the strand at that node. See Figure 3.3 for an illustration. The Lagrangian coordinates encode the geometric path of the strand, and the Eulerian coordinates encode the actual strand material at the nodes. This separation of the geometric path and the strand material is critical for a musculotendon simulation because it decouples the constraints from the dynamics. The framework of Sueda et al. [2011] can use a variety of generalized coordinates for the Lagrangian coordinates.
xi can be a 3 × 1 vector if the node moves freely in 3D space, a 2 × 1 vector if it is constrained to move in a 2D plane, a 1 × 1 vector if it is constrained to lie on a curve, or it can have zero DoFs if it is fixed to some configuration. Although a reduced-coordinates approach decreases the number of DoFs, it also complicates the implementation by introducing complicated parameterizations. Hence, we use the maximal coordinates approach. The Lagrangian coordinates xi for all the nodes are 3 × 1 vectors. We then apply constraints to restrict the motion of the nodes relative to certain parent objects. For example, we can apply a point-frame constraint to attach a node to a rigid body. Suppose the rigid body has configuration Er and twist φ; for a node with Lagrangian coordinate x and velocity ẋ, its local coordinates xl with respect to the rigid body are

$$x_l = E_r^{-1} x. \qquad (3.1)$$

Then, the aforementioned point-frame constraint can be written as

$$\begin{bmatrix} R_r \Gamma(x_l) & -I_3 \end{bmatrix} \begin{bmatrix} \phi \\ \dot{x} \end{bmatrix} = 0, \qquad (3.2)$$

where Rr is the rotation component of Er, I3 is the 3 × 3 identity matrix, and Γ(x_l) = ([x_l], I_3) is a 3 × 6 matrix for computing the point velocity from the local coordinates of the point and the frame twist, where the square bracket [·] denotes the skew-symmetric matrix. One can refer to Li et al. [1994] for details about the Γ matrix.

Following the notation of Sueda et al. [2011], we use G to represent the constraint matrix [R_r Γ(x_l)  −I_3], and encode the velocities in the vector q̇. Hence, Equation 3.2 can be rewritten as

$$G \dot{q} = 0. \qquad (3.3)$$

With the generalized mass matrix M, all the different kinds of constraints G, the force vector f, and the state velocity vector q̇, which encodes all the DoFs, both Lagrangian and Eulerian, the Eulerian-Lagrangian strand method solves the KKT system

$$\begin{bmatrix} M & G^T \\ G & 0 \end{bmatrix} \begin{bmatrix} \dot{q} \\ \lambda \end{bmatrix} = \begin{bmatrix} h f + M \dot{q}_0 \\ 0 \end{bmatrix}, \qquad (3.4)$$

where q̇0 is the velocity vector at the previous time step, λ is the vector of Lagrange multipliers, and h is the time step size.
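As a concrete illustration of Equation 3.4, the following sketch assembles and solves a tiny KKT system for a single unit-mass particle constrained to move horizontally. The scenario and helper names are illustrative, not the thesis implementation.

```python
# Hedged sketch of the velocity-level KKT solve of Equation 3.4, for a toy
# 2D particle. All names and values here are illustrative assumptions.

def solve_dense(A, b):
    """Gauss-Jordan elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                k = M[r][c] / M[c][c]
                M[r] = [a - k * b_ for a, b_ in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def kkt_velocity_step(Mm, G, f, qdot0, h):
    """Solve [[M, G^T], [G, 0]] [qdot; lam] = [h f + M qdot0; 0]."""
    n, m = len(Mm), len(G)
    A = [Mm[i] + [G[j][i] for j in range(m)] for i in range(n)]
    A += [G[j] + [0.0] * m for j in range(m)]
    rhs = [h * f[i] + sum(Mm[i][k] * qdot0[k] for k in range(n)) for i in range(n)]
    rhs += [0.0] * m
    sol = solve_dense(A, rhs)
    return sol[:n], sol[n:]          # new velocities, Lagrange multipliers

qdot, lam = kkt_velocity_step(
    Mm=[[1.0, 0.0], [0.0, 1.0]],     # unit mass matrix
    G=[[0.0, 1.0]],                  # forbid vertical velocity
    f=[1.0, -9.8],                   # lateral pull plus gravity
    qdot0=[0.0, 0.0], h=0.01)
```

The constraint row zeroes the vertical velocity; the multiplier reports the constraint impulse that cancels gravity over the step.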
The system may drift away from the constraint manifold over time, since the constraints are solved at the velocity level. A post-stabilization step [Cline and Pai, 2003] can be performed after the velocity step.

Returning to the hand modeling problem: with the Eulerian nodes, we need just the right number of nodes, since we can place the nodes exactly at the start and end of the pulleys and let the Eulerian coordinates handle the dynamics of the tendon through the pulleys. Also, because of this separation, we can damp the lateral motion of the tendon without affecting the motion in the axial direction.

3.3 Plane Nodes

Although general collision detection itself is not an easy problem, we take advantage of the fact that tendons are always in close contact with other anatomical structures, and that we know a priori where the routing constraints on the tendons are going to be. Hence, we can simplify the collision problem by placing constraints at the potential collision regions. With the Eulerian-Lagrangian discretization of the strand (tendon), the Eulerian part is oblivious to collision detection and resolution; collisions need to be handled only by the Lagrangian part of the discretization. This is a major advantage for tendon simulation, since most of the movement is in the axial direction, which means that the Lagrangian part does not move much in space, and is often completely stationary with respect to a rigid body.

Figure 3.4: Constraint planes defined around a bone. The intersections of the bone meshes with the planes are shown as green polygons. The nodes are constrained to lie on the planes, and to not intersect the green polygons.

The key idea is to reduce each collision problem to a 2D problem by constraining the Lagrangian coordinates of each strand node to lie on transverse planes defined on the bones, and to not penetrate the bone surface.
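As an illustrative sketch of this 2D reduction (the geometry and helper names are ours, not the thesis code), the per-plane test amounts to a point-in-polygon check plus an outward edge normal for the non-penetration constraint:

```python
# Hedged sketch: a node already constrained to a transverse plane is tested in
# 2D against the convex cross-section polygon of the bone. The closest-edge
# normal would supply the collision normal for the constraint.
import math

def point_in_convex_polygon(p, poly):
    """poly: CCW list of 2D vertices. True if p is inside or on the boundary."""
    for i in range(len(poly)):
        ax, ay = poly[i]
        bx, by = poly[(i + 1) % len(poly)]
        if (bx - ax) * (p[1] - ay) - (by - ay) * (p[0] - ax) < 0.0:
            return False
    return True

def closest_edge_normal(p, poly):
    """Outward unit normal of the polygon edge closest to p (CCW polygon)."""
    best, best_d2 = None, float("inf")
    for i in range(len(poly)):
        ax, ay = poly[i]
        bx, by = poly[(i + 1) % len(poly)]
        ex, ey = bx - ax, by - ay
        t = max(0.0, min(1.0, ((p[0]-ax)*ex + (p[1]-ay)*ey) / (ex*ex + ey*ey)))
        qx, qy = ax + t*ex, ay + t*ey
        d2 = (p[0]-qx)**2 + (p[1]-qy)**2
        if d2 < best_d2:
            L = math.hypot(ex, ey)
            best, best_d2 = (ey / L, -ex / L), d2   # outward for CCW winding
    return best

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
```

Working in the plane, both the inside test and the normal computation are a handful of 2D operations per node, which is why the cost of this stage is negligible.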
These planes can be fixed to a single rigid body, or blended between two rigid bodies. Each plane intersects its parent rigid body in a planar polygon (the green polygons in Figure 3.4). Hence, for each node on a transverse plane, two constraints are applied.

The first keeps the node on its parent plane. Similarly to Equation 3.2, suppose the plane has configuration Ep, twist φp, and normal np; for a node with Lagrangian coordinate x and velocity ẋ in world space, we first transform the point position from world space to plane space as x_l = E_p^{-1} x. Then the point-plane constraint can be written as

$$n_p^T \begin{bmatrix} R_p \Gamma(x_l) & -I_3 \end{bmatrix} \begin{bmatrix} \phi_p \\ \dot{x} \end{bmatrix} = 0, \qquad (3.5)$$

where Rp is the rotation component of Ep.

The second constraint makes sure that the nodes do not penetrate the bone surface, or rather the 2D cross-section of the bone at the plane. We first perform 2D collision detection between the nodes and the planar polygon to collect the nodes that are colliding. For example, in Figure 3.4 three strand nodes, shown as green dots, are colliding with a green planar polygon in each plane. We then compute the polygon normals at these collision points (red arrows in Figure 3.4). Denoting the collided polygon normal by nc, and the collided point position by x with velocity ẋ in world space, we can apply either an equality constraint

$$n_c^T \dot{x} = 0, \qquad (3.6)$$

or an inequality constraint

$$n_c^T \dot{x} \ge 0, \qquad (3.7)$$

between the rigid bodies and the nodes. Since the node is constrained to lie in a plane, we call it a plane node.

Figure 3.5: Fixed plane versus blending plane. (a) Orthogonal views showing the planes from the side. Without blending, the planes around a joint may form an unnatural intersection, affecting the routing of strands. (b) With blending, the two red planes are interpolated properly between the two blue planes.

Fixed plane. A plane itself does not have DoFs.
Instead, it is attached to a parent rigid body. If a plane is totally fixed to a rigid body, we call it a fixed plane. A fixed plane moves rigidly along with its parent rigid body. Hence, we can precompute the intersection polygon between the plane and its parent rigid body.

Blending plane. Fixed planes work well away from a joint, but if a node is near a joint, we must rotate the planes to prevent them from intersecting each other. Figure 3.5a shows how, if we simply transform the planes rigidly, they may intersect, causing the paths of the tendons to reverse in some cases. To mitigate this problem, we introduce the blending plane. Unlike the fixed plane, which is totally fixed with respect to the configuration of one rigid body (bone), the configuration of the blending plane (shown in red in Figure 3.5b) is a combination of the configurations of its two parent planes (shown in blue). We do not detect collisions between blending planes; thus, neighboring planes might still intersect each other in theory. However, this is a problem only if neighboring bones are intentionally placed very close together. The blending plane provides flexibility compared with the fixed plane; as a tradeoff, we have to regenerate the intersection polygon at each time step. We use quaternion-based blending for the rotation and linear blending for the position. Unlike Gilles et al. [2011], we do not use dual quaternions, but interpolate rotation and position separately, since we want the location of the plane origin to always lie between the two parent planes.

Figure 3.6: Plane node example. (a) Two tendons being routed around bones. (b) When the finger is flexed, the lateral tendon falls toward the palm of the hand.

As an example, Figure 3.6a shows two tendons sliding over the bones.
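The blending just described, quaternion-based for rotation and linear for position, might be sketched as follows; the helper names are our own, and we use normalized quaternion lerp as one reasonable choice of quaternion blending:

```python
# Hedged sketch of a blending plane: rotation interpolated between the two
# parent planes by normalized quaternion lerp, origin interpolated linearly so
# that it stays between the parents. Names are illustrative assumptions.
import math

def quat_axis_angle(axis, angle):
    s, n = math.sin(angle / 2.0), math.sqrt(sum(a * a for a in axis))
    return (math.cos(angle / 2.0), axis[0]*s/n, axis[1]*s/n, axis[2]*s/n)

def quat_nlerp(qa, qb, t):
    # Flip sign for the shorter arc, lerp, then renormalize.
    if sum(a * b for a, b in zip(qa, qb)) < 0.0:
        qb = tuple(-c for c in qb)
    q = tuple((1.0 - t) * a + t * b for a, b in zip(qa, qb))
    n = math.sqrt(sum(c * c for c in q))
    return tuple(c / n for c in q)

def blend_plane(origin_a, quat_a, origin_b, quat_b, t):
    origin = tuple((1.0 - t) * a + t * b for a, b in zip(origin_a, origin_b))
    return origin, quat_nlerp(quat_a, quat_b, t)

qa = quat_axis_angle((0.0, 0.0, 1.0), 0.0)            # parent plane A
qb = quat_axis_angle((0.0, 0.0, 1.0), math.pi / 2.0)  # parent plane B, 90 deg
origin, q = blend_plane((0.0, 0.0, 0.0), qa, (2.0, 0.0, 0.0), qb, 0.5)
```

At t = 0.5 the blended plane sits midway between the parent origins and is rotated by 45 degrees, which is the behavior Figure 3.5b illustrates.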
When the finger is extended, the lateral tendon lies on top of the bone, but when the finger is flexed (Figure 3.6b), the tendon properly slides in the planar direction to the side of the joint. Additional constraints are used to emulate the connective tissues that keep the tendon from completely sliding off the bone.

3.4 Results

We implemented our system in C++. Experiments were run on a commodity PC with an Intel Core i5 750 processor and 4 GB of memory. The bone geometry was obtained from a CT scan and reconstructed using custom software. The joint axes were obtained from motion capture data, and the tendon paths were created manually with the 3D modeling software Blender, based on standard textbook models in the literature [Kapandji, 2007]. To show the movement of the tendons, we did not render skin in these results.

Figure 3.7: Synchronized joint motion with the oblique retinacular ligament, shown in red. (a-b) Without the ligament, pulling on the tendon flexes the DIP fully, before the PIP. (c-d) With the ligament, the same tendon flexes the finger in a much more natural, synchronized fashion.

We will discuss skin simulation in Chapter 4. We demonstrate our musculotendon-based hand simulator with several applications. The parameters and details of the hand model can be found in Appendix A. We directly use the trajectory tracking controller from Sueda et al. [2008]. We specify only the trajectory of the fingertips, and the simulator automatically computes the coupled motion of the fingers. The bottleneck of the system is the sparse QP solve used to step the system forward in time. For our biggest scene, involving all four fingers actively controlled, the total computation time is around 5 minutes per second of simulation, using a time step of 0.2 ms. The QP solve takes about 70% of the time.
The activation controller takes about 21%, mass filling and the quasistatic update step take about 7%, and collision detection takes less than 1%.

Joint coupling. With a tendon-based simulator, we obtain natural joint coupling for free. Figure 3.7 demonstrates how pulling on a single tendon produces synchronized flexion in two joints. A single tendon (FDP, shown in blue), with no other tendons and ligaments present, pulls on the distal phalanx of the index finger. In this situation, the joints flex by rolling: first, the distal joint (DIP) fully flexes, and then the proximal joint (PIP) starts to flex. During this motion, the angle of the DIP joint should be roughly double the angle of the PIP joint. Without the oblique ligament included (Figure 3.7a), the DIP bends first (Figure 3.7b). Our anatomically based tendon network can capture such effects automatically. With the oblique ligament (OL, shown in red) added, which spans both the DIP and the PIP, pulling on the FDP causes both joints to flex in a natural, synchronized fashion.

Figure 3.8: (a) Tapping the keyboard. (b) Flicking.

Coupling also occurs between joints of different fingers. In Figure 3.8a, each finger is used to press down on the keyboard, which is simulated together with the hand. Because the extrinsic muscles (FDP, FDS, EDC) are coupled across the fingers by inter-tendinous bands [Leijnse et al., 1992], moving one finger causes the other fingers to move as well, even though the activation controller was given only the target trajectory of a single finger.

Energy storage. When we flick a coin or a marble, we lock the fingertip, activate the muscles, causing the tendons, which are highly stiff, to stretch, and then unlock the fingertip, releasing the stored potential energy. To illustrate energy storage, we simulate the finger flicking a small box with and without first locking the finger (Figure 3.8b).
Once the finger has reached the flexed configuration, we turn off the controller and then fully activate the extensor muscle, which causes the tendon to stretch. As expected, the fingertip speed at contact with the box is significantly faster (∼4x) if the finger is first locked, because of the energy stored in the extensor.

Deformities. By changing some of the tendon parameters, we can produce natural-looking deformities of the hand. We simulated two common deformities: boutonniere and swan-neck [Zancolli, 1979].

Figure 3.9: Deformities. (a) Boutonniere deformity. (b) Swan-neck deformity.

In the boutonniere deformity, the DIP hyper-extends, and the PIP remains locked in a flexed position (Figure 3.9a). We simulate this injury by cutting the insertion of the EDC into the middle phalanx. Once the lateral bands (intrinsic and extrinsic lateral) fall below the rotation axis of the PIP, no muscle can extend the joint, causing the joint to remain flexed unless fixed externally. The swan-neck deformity has the exact opposite joint configuration: hyper-extension of the PIP and flexion of the DIP (Figure 3.9b). There are many causes of this deformity; among them is the elongation of the oblique ligament (OL). In the simulator, we first lengthen the OL and then pull on the EDC to extend the PIP. Once the OL rises above the axis of rotation of the PIP, pulling on the FDP causes the DIP to flex and the PIP to hyper-extend.

Chapter 4

Thin Skin Elastodynamics

Although musculotendon-based hand simulation produces natural hand motion, hand animation is usually observed at the skin level. We describe our novel elastodynamic skin simulation algorithm in this chapter. The hand motion generated by the musculotendon-based hand simulation can be considered an input to our skin simulator. However, one should also note that our skin simulation algorithm is agnostic to the underlying hand motion algorithm.
4.1 Spaces

Simulating skin is challenging because skin is in close contact with the body and shares its geometry. However, the body material and the skin material are distinct. Before we delve into the derivation of the elastic forces, let us first define notation for the different spaces involved (Figure 4.1). Here we show an example of elastic skin that completely covers the surface of a torus. Look first at the right-hand side of the figure. Physical space is the familiar 3D space in which objects live; its points are denoted x. Body space is the reference 3D space in which we embed the surface of the body upon which the skin slides, at time t = 0; points on the surface are denoted X. We assume that the surface is represented as a triangle mesh, for generality. A triangle of the mesh is shown in the figure. While meshes are convenient, it is useful to also have a parameterization of the surface. The body atlas is a collection of 2D coordinate charts that parameterize the body surface; its points are denoted u. We assume that we have an invertible map, π : u → X, between points in the atlas and points on the surface. Intuitively, the atlas corresponds to the familiar texture space used in graphics, generalized to shapes of arbitrary topology. π is called the parameterization; π−1 is called the coordinate map and corresponds to the familiar texture map. The skin shares the shape of the body but is made of distinct material that can slide relative to it. Therefore, in parallel to the body, we represent the skin using a distinct 3D skin space, in which the skin surface is embedded in a stress-free state¹, and its 2D skin atlas. Points in these spaces are denoted ❳ and ✉, respectively. Because the skin and body have the same geometry, we can use the same map π to parameterize them.

Figure 4.1: Five spaces involved in representing elastic skin, and the mappings between them. See text for explanation. A few Jacobians are also shown, in red.
Please keep in mind that the skin and body spaces are at “rest” in two different interpretations of that word, each appropriate for what we need these spaces for: the skin space is stress free; the body space is at its initial strain. The skin space is used to measure the elastic deformation, and hence stresses, in the skin; the body space is used to measure the change in shape of the body during the animation. This formulation allows the skin to have an initial strain at t = 0, given by the initial vertex values ✉.

¹This is the most common scenario, which we focus on, but some materials may not be embeddable in a stress-free state, particularly after plastic deformation. For extensions that handle such cases see, for example, [Bargteil et al., 2007; Wicke et al., 2010].

| x | body vertex position | varies by external input |
| X | body vertex position at rest | constant |
| ❳ | skin vertex position | varies during simulation |
| u | reference body vertex position | constant |
| ✉ | reference skin vertex position | varies during simulation |

Table 4.1: Notation for the state variables

It is important to remember that a mesh is a topological data structure, and any geometry is due to the values associated with the vertices of the mesh. For example, the map π−1 induces an equivalent triangulation of the body atlas. We can therefore think, equally well, that the mesh exists in any of the spaces shown. A single mesh triangle, and its counterparts in the spaces, are shown in the figure.

4.2 Kinematics and Discretization

The motion of the skin is defined by two maps. The body's movement in space is specified by the body motion φ : X → x. This is an arbitrary motion and deformation of the mesh, either specified by an artist or generated by another animation system. It is an input to our system.
This map is given by its nodal values, xi at vertex i, linearly interpolated in between to produce a piecewise linear function φ, as is standard in finite element analysis. The maps π and π−1 are similarly represented after discretization by the nodal values Xi and ui. The skin's movement relative to the body is accounted for by the skin motion, ψ : X → ❳, which is computed by our simulation. We represent it indirectly, using the skin map, κ : X → ✉; then ψ = π ◦ κ. The reason we go through ✉ is that it allows us to time-step the motion of the skin in 2D coordinates, rather than in 3D, thereby avoiding all the difficulties of constraining the skin to lie on the body surface. The skin map is discretized using nodal values, ✉i, and interpolated to produce a piecewise linear κ, as before. With a slight abuse of notation, for the sake of reducing clutter, we denote an array of stacked points, e.g., (✉0, . . . , ✉n)ᵀ, using the same symbol we use for points, e.g., ✉. The intent is clear from the context.

4.3 Jacobians

Before we delve into the derivation of the elastic forces and dynamics, let us first define the Jacobians we need later in the simulation. Although as many as five spaces are involved, only

$$\Gamma = \partial x / \partial u, \qquad (4.1)$$
$$\Pi = \partial X / \partial u, \qquad (4.2)$$
$$F = \partial x / \partial ❳ \qquad (4.3)$$

are needed for our derivation. We assume all our maps are piecewise linear, so the Jacobian matrices of these maps are constant on each triangle. We can then compute them using vertex values in the domain and range of each map.

4.3.1 Computation of Γ

We employ the elegant method of Teran et al. [2003] to construct the Jacobians. For each triangle with vertex positions xi, i = 0, 1, 2, the edge vectors di = xi − x0, i = 1, 2, are constructed. We assemble the di into a matrix Dx as

$$D_x = \begin{bmatrix} x_1 - x_0 & x_2 - x_0 \end{bmatrix}. \qquad (4.4)$$

Similarly, we can get

$$D_u = \begin{bmatrix} u_1 - u_0 & u_2 - u_0 \end{bmatrix}. \qquad (4.5)$$

Hence, Γ can be constructed from

$$\Gamma D_u = D_x. \qquad (4.6)$$

Du is square because the points ui have only 2 coordinates.
Hence, we can easily invert Du and obtain

$$\Gamma = D_x D_u^{-1}. \qquad (4.7)$$

We will use Γ to construct the generalized mass matrix in Section 4.5. Note that, since u is constant over the whole simulation (Table 4.1), we can precompute Du−1 for all the triangles.

4.3.2 Computation of Π

Just as the mapping π is shared by ✉ and ❳ as well as by u and X, the Jacobian Π is shared by the two pairs of spaces as well. In Section 4.4, we will elaborate how to use Π to project the force from skin space to the skin atlas with respect to ✉ at each vertex. Here we only discuss the construction of Π. Since u and X are fixed over time, we use them to precompute the mapping π via a spatial hashing grid [Teschner et al., 2003]. During initialization, we build a 2D grid for each chart which stores the overlapping triangles in its cells. At run time, any queried coordinate ✉ can be located in a distinct cell of a chart grid, and further in a triangle of the body atlas. Since the uj and Xj (j ∈ {1, 2, 3}) of each vertex of the obtained triangle are fixed, and the uj always lie in the same chart, we can easily compute ❳ with barycentric coordinates. We assume a constant deformation gradient per triangle in the body atlas, so Π can finally be computed as the deformation gradient of the obtained triangle, D_X D_u^{-1}, with the edge matrices assembled as in Equation 4.4.

Figure 4.2: The decomposition of D❳

4.3.3 Computation of F

The Jacobian F plays dual roles in our framework. It is not only the Jacobian of the composite map φ ◦ ψ−1, which connects skin space to physical space, but it also serves as the deformation gradient of the skin simulation. We can define F by

$$F D_❳ = D_x, \qquad (4.8)$$

where D❳ is assembled in a similar manner to Equation 4.4. However, the computation of F is slightly more involved. Unlike the square matrix Du, D❳ is a 3 × 2 matrix, because the points ❳i have 3 coordinates, and hence we cannot directly invert D❳ as in Equation 4.7.
Instead, we compute a reduced 3 × 2 deformation gradient F̄ for use in the simulation. Although ❳i has 3 coordinates, it represents a 2D membrane embedded in 3D space; therefore, a 3 × 2 matrix is enough to capture the essential deformation of the triangle. Let D❳ = Q❳R❳ be the thin QR decomposition of D❳. Then, defining F̄ = F Q❳, Equation 4.8 can be rewritten as F D❳ = F Q❳R❳ = F̄ R❳ = Dx. Therefore the reduced deformation gradient is

$$\bar{F} = D_x R_❳^{-1}. \qquad (4.9)$$

Intuitively, the columns of Q❳ form the axes of an orthonormal frame for the plane of the triangle, with origin at ❳0. The point with coordinates in this new frame is denoted ❳̄; in skin space it has 3D coordinates ❳ = ❳0 + Q❳❳̄. The columns of R❳ are the edge vectors of the triangle in the new frame, and R❳ is the 2 × 2 reduced differential in this frame. Although here we use the QR decomposition to capture the essential shape information of D❳, a polar decomposition works as well. In our case, the choice of orthogonal basis does not affect the results, up to roundoff error.

4.4 Elastic Forces

We represent the configuration of the skin, and simulate its dynamics, using ✉. It is therefore tempting to also compute the elastic forces and the deformation gradient in the skin atlas. However, this is not ideal. First, since we represent shapes of arbitrary topology using a collection of charts, the vertices of a skin triangle may lie in different charts, making triangle computations difficult. Second, material properties of the skin are difficult to specify in terms of ✉ due to metric distortions caused by π−1, which could be any coordinate map; it is more natural to specify these properties on the rest shape of the skin, in terms of ❳. For these reasons, we compute the elastic forces in skin space, and transform them to the skin atlas using the transpose of the Jacobian Π (Equation 4.2).
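The per-triangle Jacobians of Section 4.3 can be sketched as follows, using plain-list matrices and Gram-Schmidt for the thin QR; all helper names are illustrative assumptions, not the thesis implementation.

```python
# Hedged sketch: Γ = Dx Du^{-1} (Equation 4.7) and the reduced deformation
# gradient F̄ = Dx R^{-1} (Equation 4.9) for one triangle.
import math

def inv2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def thin_qr_3x2(D):
    """D = Q R with Q 3x2 orthonormal, R 2x2 upper triangular (Gram-Schmidt)."""
    c0 = [D[i][0] for i in range(3)]
    c1 = [D[i][1] for i in range(3)]
    r00 = math.sqrt(sum(v * v for v in c0))
    q0 = [v / r00 for v in c0]
    r01 = sum(q0[i] * c1[i] for i in range(3))
    w = [c1[i] - r01 * q0[i] for i in range(3)]
    r11 = math.sqrt(sum(v * v for v in w))
    Q = [[q0[i], w[i] / r11] for i in range(3)]
    return Q, [[r00, r01], [0.0, r11]]

def gamma(Dx, Du):
    """Γ = Dx Du^{-1}: a 3x2 matrix, constant over the triangle."""
    return matmul(Dx, inv2(Du))

def reduced_F(Dx, DX):
    """F̄ = Dx R^{-1}: the 3x2 reduced deformation gradient."""
    _, R = thin_qr_3x2(DX)
    return matmul(Dx, inv2(R))

# At rest (Dx equal to D_X up to the frame change), F̄ᵀF̄ = I: no strain.
DX = [[1.0, 0.0], [0.0, 2.0], [0.0, 0.0]]
Fbar = reduced_F(DX, DX)
```

Since R❳ is upper triangular, its inverse is cheap, which is one practical payoff of using the thin QR rather than a pseudoinverse of D❳.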
The reduced deformation gradient F̄ is used to compute the reduced Green strain

$$\bar{E} = \tfrac{1}{2}(\bar{F}^T \bar{F} - I) = \tfrac{1}{2}(Q_❳^T F^T F Q_❳ - I) = Q_❳^T E Q_❳. \qquad (4.10)$$

Thus Ē has the same non-zero eigenvalues as the standard Green strain E, but simply ignores its irrelevant null space. We assume that skin is a hyperelastic membrane, with strain energy function W(Ē). In this paper we use the standard Saint Venant-Kirchhoff model. The strain energy of the triangle is given by

$$W(\bar{E}) = A_❳ \left( \tfrac{\lambda}{2} (\mathrm{tr}\, \bar{E})^2 + \mu\, \mathrm{tr}(\bar{E}^2) \right), \qquad (4.11)$$

where A❳ is the area of the triangle in skin space (equal to ½ det R❳), and λ and µ are the Lamé constants. Other types of materials can be used if desired [Volino et al., 2009]. The contribution of triangle j to the total force at vertex i is f̄ij = −∂Wj/∂❳̄i. We then transform it to skin space by Q❳j, and finally to the body atlas by using the transpose of the triangle's Jacobian

$$(\partial ❳ / \partial u)^T = (D_❳ D_u^{-1})^T = D_u^{-T} D_❳^T = D_u^{-T} R_❳^T Q_❳^T. \qquad (4.12)$$

Here the triangle index is suppressed for clarity, and the thin QR decomposition of D❳ is substituted. The transformed force in the body atlas simplifies to

$$f_{ij} = D_{u_j}^{-T} R_{❳_j}^T Q_{❳_j}^T Q_{❳_j} \bar{f}_{ij} = D_{u_j}^{-T} R_{❳_j}^T \bar{f}_{ij}. \qquad (4.13)$$

Since our time stepping happens in ✉ (Section 4.5), we finally transform fij from ❳ to ✉ with the Jacobian Π as

$$𝕗_{ij} = \Pi^T f_{ij}. \qquad (4.14)$$

The total vertex force 𝕗i is the sum of the contributions 𝕗ij from the incident faces. Note that we compute W symbolically as a function of the reduced vertex coordinates ❳̄i, and derive the forces acting at vertex i of the triangle by symbolic differentiation using Maple™ [Monagan et al., 2005]. Other derivatives required for implicit integration (see Section 4.5) are also computed symbolically. This is quite efficient, as shown in Section 4.8, since these computations are performed per triangle, with small matrices.
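For a numeric illustration, the reduced strain and Saint Venant-Kirchhoff energy of Equations 4.10-4.11 might look like the following sketch (plain-list 3×2 matrices; parameter values are illustrative):

```python
# Hedged sketch of Equations 4.10-4.11: reduced Green strain and StVK energy
# of one triangle, given its 3x2 reduced deformation gradient Fbar.

def green_strain(Fbar):
    """Ē = 0.5 (F̄ᵀ F̄ - I), a 2x2 matrix."""
    C = [[sum(Fbar[k][i] * Fbar[k][j] for k in range(3)) for j in range(2)]
         for i in range(2)]
    return [[0.5 * (C[i][j] - (1.0 if i == j else 0.0)) for j in range(2)]
            for i in range(2)]

def stvk_energy(Fbar, area, lam, mu):
    """W = A (λ/2 (tr Ē)² + μ tr(Ē²))."""
    E = green_strain(Fbar)
    tr = E[0][0] + E[1][1]
    tr_E2 = sum(E[i][k] * E[k][i] for i in range(2) for k in range(2))
    return area * (0.5 * lam * tr * tr + mu * tr_E2)

rest = [[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]       # undeformed: zero energy
stretched = [[1.1, 0.0], [0.0, 1.0], [0.0, 0.0]]  # 10% uniaxial stretch
```

Differentiating this scalar with respect to the reduced vertex coordinates, symbolically in the thesis, yields the per-triangle forces of Equation 4.13.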
We initially considered optimizing this computation based on additional physical insights, but found it was not worthwhile, since it takes a small fraction of the total simulation time. The procedure is also very general, and could be used for any hyperelastic material model that can be expressed algebraically in terms of the reduced deformation gradient.

4.5 Dynamics

The configuration of the skin is completely determined by the vertex values ✉ and the external input x. Therefore the space ❯ of all ✉ values at mesh vertices is a skin configuration space, and the standard Lagrangian procedure can be used to write the dynamics of the skin entirely in terms of the stacked vector ✉ and its derivatives. Motion of the entire skin is equivalent to the motion of a point in ❯. The high-dimensional space ❯ should not be confused with the 3D Euclidean space ❳ in which the skin is embedded. However, there is still an important choice to make: whether the value ✉i at mesh vertex i is the coordinate of a fixed point on the skin (Lagrangian discretization), or of the skin material that is currently at a fixed point on the body (Eulerian discretization). In the Lagrangian setting the mesh would be fixed in the skin and move on the body during simulation; in the Eulerian setting the mesh is fixed on the body and moves on the skin. We choose an Eulerian discretization.

The generalized mass (or inertia) matrix M is computed as follows. Let ✈ be the material velocity of the skin point ✉ (which is at body location u in the Eulerian setting). Its velocity in space is then ẋ = Γ✈, where Γ = ∂x/∂u (see Figure 4.1). Note that the Jacobian from the body atlas is needed here; the skin has no material velocity in the skin atlas. Then, the kinetic energy of the skin is

$$T \stackrel{\text{def}}{=} \frac{1}{2} \int_{A_u} \rho\, ✈^T \Gamma^T \Gamma\, ✈ \; dA_u = \frac{1}{2}\, ✈^T \left( \int_{A_u} \rho\, \Gamma^T \Gamma \; dA_u \right) ✈.$$

The matrix in parentheses is the generalized mass M; ρ is the areal density and Au is area in the body atlas.
The Jacobian Γ (Equation 4.1) is constant within each triangle. We approximate the velocity to be constant in each triangle j, with ✈_j = (1/3) Σ_{i ∈ vert(j)} ✈_i. Then the mass contribution of triangle j is

$$M_j = 𝕞_j \Gamma_j^T \Gamma_j. \qquad (4.15)$$

Here 𝕞j is the mass of the skin in triangle j; this scalar value is easy to compute using the density and vertex values in skin space (already available from the elasticity computation in Section 4.4). Vertex inertias are computed by adding 1/3 of the inertia of each incident triangle. These 2 × 2 vertex inertias are assembled into the global block-diagonal inertia matrix M. After time discretization using the linearly implicit method widely used in graphics [Baraff and Witkin, 1998], we get

$$\left( M + h^2 \frac{\partial f}{\partial ✉} \right) \tilde{✈}^{(k+1)} = M ✈^{(k)} + h \left( f^{(k)} + b^{(k)} \right). \qquad (4.16)$$

We use the tilde in ✈̃(k+1) to remind ourselves that this is the material velocity of the point that was at a mesh vertex at step k; we will need an advection step to arrive at the final Eulerian velocity ✈(k+1). Also, h is the size of the time step, and f is the elastic force (Section 4.4); b contains body forces due to gravity or other phenomena (including the user), applied in physical space and transformed to the body atlas by Γᵀ. Note that M depends on the skin configuration, which gives rise to a “quadratic velocity vector” term in the dynamics as well. This is not significant for typical skin movements, so we ignore it. The integration scheme is stable, but adds time-step-dependent numerical damping, which is not a significant problem for skin movement. If desired, other types of integration schemes can be used without modifying the algorithm. Next, the velocity field is advected using the stable semi-Lagrangian method [Feldman et al., 2005; Stam, 1999]. More sophisticated methods developed in fluid mechanics could be used if needed [Lentine et al., 2011].
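A toy 1D sketch of the two stages, a linearly implicit velocity update in the spirit of Equation 4.16 (single DoF, linear spring) followed by a semi-Lagrangian advect() in the spirit of Equation 4.17, might look like this; all names and parameter values are illustrative assumptions:

```python
# Hedged sketch, not the thesis implementation.

def linearly_implicit_step(m, k, u, v, h):
    """Solve (m + h^2 k) v_new = m v + h f(u), with f(u) = -k u (df/du = -k)."""
    v_new = (m * v + h * (-k * u)) / (m + h * h * k)
    return u + h * v_new, v_new

def advect(vel, h, q):
    """Semi-Lagrangian advection on a periodic [0,1) grid: each sample
    backtraces along its velocity and linearly interpolates the old field."""
    n = len(q)
    dx = 1.0 / n
    out = []
    for i in range(n):
        x = (i * dx - vel[i] * h) % 1.0      # departure point
        j = int(x / dx)
        t = x / dx - j
        out.append((1.0 - t) * q[j] + t * q[(j + 1) % n])
    return out

# A stiff spring stays stable under the implicit update (with some damping).
u, v = 1.0, 0.0
for _ in range(1000):
    u, v = linearly_implicit_step(m=1.0, k=1.0e4, u=u, v=v, h=0.01)

# A spike advected by a uniform unit velocity moves one cell when h = dx.
spike = [1.0 if i == 0 else 0.0 for i in range(8)]
moved = advect([1.0] * 8, 1.0 / 8, spike)
```

The backtrace-and-interpolate structure is what makes the advection unconditionally stable, at the cost of some smearing.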
We are agnostic to the particular choice, and assume that an adequate advection routine is available such that, for any material quantity q,

$$q = \mathrm{advect}(✈, h, \tilde{q}). \qquad (4.17)$$

Using this method we advect the velocity to obtain ✈(k+1). Finally, the velocity is integrated by advecting the skin positions:

$$✉^{(k+1)} = \mathrm{advect}(✈, h, \tilde{✉}^{(k+1)}). \qquad (4.18)$$

4.6 Coupling of Skin and Body Movement

The motion of the body influences the motion of the skin in complex ways. This includes non-penetration constraints and viscous resistance to sliding. In animal skin, the connective tissue fibers anchoring skin to subcutaneous structures have highly nonlinear elastic behavior; the stress-strain curve has a low-force “toe” region in the physiological range, but becomes very stiff beyond that region. Our goal is not to model these biological tissues in detail, but to capture some of their essential features in an efficient simulation. In addition, we would like to provide sufficient modeling flexibility so that an artist can choose to have elastodynamic skin only on a region of interest of the body, with a small number of parameters to control the behavior of the skin relative to the body. The main input to the simulation is the (arbitrary) deformation of the body mesh, x∗(k) at time step k, provided by the user. Skin points that are not explicitly constrained as above are still influenced by implicit contact constraints. Therefore we consider x∗(k) a target vertex displacement

$$\Delta x^* = x^{*(k)} - x^{(k-1)}, \qquad (4.19)$$

which is modified to a feasible displacement Δx that enforces these constraints approximately in each time step.
Algorithm 1 Algorithm Overview

1: // Initialization
2: Build lookup map between x and u, and Π
3: // Simulation loop
4: while simulating do
5:   Move mesh vertices x via external driver
6:   // Dynamics coupling
7:   for all verts xi do
8:     Get feasible displacement Δx            // Equation 4.20
9:     Update xi with Δx
10:  end for
11:  for all triangles i with verts j do
12:    Compute Δ✉ij
13:  end for
14:  Update ✉ by advecting Δ✉                  // Equation 4.17
15:  // Elastic force
16:  for all verts i do
17:    Look up ❳i and Πi with ✉i
18:  end for
19:  for all triangles with verts i do
20:    Compute elastic force 𝕗ij               // Equation 4.13
21:    Compute Mj                              // Equation 4.15
22:  end for
23:  // Time integration
24:  Form KKT system and solve for ✈̃(k+1)      // Equation 4.21
25:  Advect the velocity ✈̃(k+1) to obtain ✈(k+1) // Equation 4.17
26:  Advect skin positions with ✈(k+1)         // Equation 4.18
27: end while

Δx∗ is decomposed into a normal component Δxn and a tangential component Δxt. To enforce non-penetration and non-separation constraints, Δxn is left unchanged, so that the skin tracks the body in the normal direction. In the tangential direction, there is some skin damping, and also a limit on the amount of strain (Δmax) that models the biphasic behavior of collagen in skin. We obtain a corrected displacement

$$\Delta \bar{x}_t = \frac{\Delta x_t}{\lVert \Delta x_t \rVert} \min(\lVert \Delta x_t \rVert, \Delta_{\max}),$$

and scale it with a parameter ζ to get

$$\Delta x = \Delta x_n + \zeta \Delta \bar{x}_t. \qquad (4.20)$$

ζ is a user-specified inverse damping parameter: ζ = 1 approximates infinite friction, so that the skin material sticks tightly to the body mesh; ζ = 0 makes the sliding motion highly underdamped (there is still resistance to motion due to tension from neighboring vertices and artificial damping from implicit integration). x at time step k is updated with Δx instead, and the displacement is transformed to the skin atlas as Δ✉ = Γ†Δx.
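The normal/tangential decomposition and clamping of Equation 4.20 can be sketched as follows (illustrative names; the vectors and parameter values are our own examples):

```python
# Hedged sketch of Equation 4.20: split the target displacement into normal
# and tangential parts, limit tangential strain at delta_max, and damp the
# sliding with the inverse damping parameter zeta.
import math

def feasible_displacement(dx, normal, delta_max, zeta):
    """dx: target 3D displacement; normal: unit surface normal."""
    dn = sum(a * b for a, b in zip(dx, normal))
    dx_n = [dn * c for c in normal]                      # kept unchanged
    dx_t = [a - b for a, b in zip(dx, dx_n)]
    lt = math.sqrt(sum(c * c for c in dx_t))
    if lt > 0.0:
        dx_t = [c / lt * min(lt, delta_max) for c in dx_t]  # strain limit
    return [n + zeta * t for n, t in zip(dx_n, dx_t)]

# zeta = 1 sticks the skin to the body; zeta = 0 lets it slide freely.
d1 = feasible_displacement([0.3, 0.0, 0.5], [0.0, 0.0, 1.0],
                           delta_max=0.1, zeta=1.0)
d0 = feasible_displacement([0.3, 0.0, 0.5], [0.0, 0.0, 1.0],
                           delta_max=0.1, zeta=0.0)
```

In both cases the normal component (0.5) is preserved, so the skin never separates from or penetrates the body; only the tangential slide is clamped and damped.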
Note that since we compare the target shape to the actual skin shape at each time step in Equation 4.19, all errors are eventually corrected and there is no constraint drift.

4.7 Parameterization of the Skin

We can choose different parameterization strategies according to the application.

4.7.1 Local Parameterization and Constraints

If the required deformation is limited to a region, we directly conduct the simulation on the region of interest with suitable boundary conditions. Attachments at vertex positions are particularly easy to enforce in the Eulerian setting, since the constraints are colocated with state variables, as observed by Sueda et al. [2011]. At the velocity level, the constraints on vertex i are of the general form Gu̇ = g. This method can model a variety of important conditions. When the skin is fixed to the body at vertex location u_i, the velocity v_i = 0. This is enforced by setting G = I and g = 0. More generally, we can constrain a skin vertex to not move along the normal a to a constraint curve, but allow it to slide along the curve. In this case, the constraint is aᵀv_i = 0 and is enforced by setting G = aᵀ and g = 0.

Skin velocity is obtained by solving the KKT equations for the constrained dynamical system

[ M*  Gᵀ ] [ ṽ^(k+1) ]   [ f* ]
[ G   0  ] [ λ       ] = [ g  ],  (4.21)

where λ is the vector of Lagrange multipliers, M* = M + h² ∂f/∂u, and f* = M v^(k) + h(f^(k) + b^(k)) (compare Equation 4.16).

4.7.2 Global Parameterization and Transition

While a history of parameterization of surfaces is outside the scope of this thesis, Hormann et al. [2007] and Sheffer et al. [2006] provide good surveys of this field. We segment the base object surface into overlapping rectangular charts similar to Dong et al. [2006], and consider the whole surface as a collection of all the charts. Each chart is topologically equivalent to a planar disk.
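Before moving on to chart transitions, the constrained solve of Equation 4.21 can be illustrated on a toy problem: a single vertex with unit mass (M* = I, ignoring elastic coupling) constrained to not move along a direction a. This is a hedged sketch with a naive dense solver; the actual system assembles sparse matrices and solves them with csparse.

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <utility>

// Toy instance of the KKT system (Equation 4.21) with M* = I (3x3) and a
// single constraint a^T v = 0 (G = a^T, g = 0):
//   [ I   a ] [ v ]   [ f ]
//   [ a^T 0 ] [ l ] = [ 0 ]
// Solved by Gaussian elimination with partial pivoting; returns (v, lambda).
std::array<double, 4> solveKKT(const std::array<double, 3>& a,
                               const std::array<double, 3>& f) {
    double A[4][5] = {
        {1, 0, 0, a[0], f[0]},
        {0, 1, 0, a[1], f[1]},
        {0, 0, 1, a[2], f[2]},
        {a[0], a[1], a[2], 0, 0}};
    for (int c = 0; c < 4; ++c) {               // forward elimination
        int p = c;                              // partial pivoting
        for (int r = c + 1; r < 4; ++r)
            if (std::fabs(A[r][c]) > std::fabs(A[p][c])) p = r;
        for (int k = 0; k < 5; ++k) std::swap(A[c][k], A[p][k]);
        for (int r = c + 1; r < 4; ++r) {
            double m = A[r][c] / A[c][c];
            for (int k = c; k < 5; ++k) A[r][k] -= m * A[c][k];
        }
    }
    std::array<double, 4> x{};                  // back substitution
    for (int r = 3; r >= 0; --r) {
        double s = A[r][4];
        for (int k = r + 1; k < 4; ++k) s -= A[r][k] * x[k];
        x[r] = s / A[r][r];
    }
    return x;                                   // (v_x, v_y, v_z, lambda)
}
```

The solution satisfies aᵀv = 0 exactly; for unit a it reduces to the tangential projection v = f − a(aᵀf), which is the behavior the Lagrange multiplier λ enforces.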
To ease the transition between different charts, the surface is segmented into large quadrangular patches. The domain of each patch is [0, 1] × [0, 1]. We assume that the charts only overlap at their boundaries. The boundary vertices have multiple coordinates, while all other vertices belong to only one chart, in which they have valid local coordinates that fall into the range [0, 1] × [0, 1]. Using quadrangular charts, we can easily define the transition from one chart to another with purely kπ/2 rotation and translation. Without loss of generality, we define the transition function Φ_ij, which maps a coordinate from the domain of chart i to the domain of chart j. We now briefly discuss the transition between the patch domains, following the work of Stam [2003]. We first consider the translation part P_ij. Defining the edge indices counter-clockwise as in Figure 4.3, we have 4 cases of the translation function P_k:

P_0(u, v) = (u, v + 1),  (4.22)
P_1(u, v) = (u − 1, v),  (4.23)
P_2(u, v) = (u, v − 1),  (4.24)
P_3(u, v) = (u + 1, v),  (4.25)

where the subscript k of P_k denotes the edge index.

Figure 4.3: The edges of the center patch have been ordered counter-clockwise from 0 to 3. The translation functions P_k (k = 0, . . . , 3) translate the coordinates to the corresponding adjacent patches.

Similarly, suppose the adjacent edge in patch i is edge 1; then there are only four different possible adjacent edges in patch j (Figure 4.4). The rotation part R_ij of the transition function can be defined as:
R_0(u, v) = (u, v),  (4.26)
R_1(u, v) = (v, 1 − u),  (4.27)
R_2(u, v) = (1 − u, 1 − v),  (4.28)
R_3(u, v) = (1 − v, u).  (4.29)

We can determine which rotation function R_k to use from the edge indices e_i and e_j of the corresponding charts i and j as:

k(e_i, e_j) = (4 + e_i − (e_j + 2) % 4) % 4.  (4.30)

Finally, the transition Φ_ij is defined as

Φ_ij(u, v) = R_k(s,t)(P_s(u, v)),  (4.31)

where s is the transition edge index in chart i, t is the adjacent edge index in chart j, and the rotation function index k is generated from Equation 4.30.

Figure 4.4: The rotation functions are restricted to the four cases shown.

Based on the above transition, we can easily move from one chart to another. We obtain the global parameterization with this transition function using the approach of Dong et al. [2006]. We then plug this transition function into the simulation (Section 4.5) in two steps. First, we transition all the coordinates to chart 0 space to build the degrees of freedom (Dofs) of the simulation. The vertices on the shared boundaries only have one distinct coordinate in one chart. The total number of Dofs is twice the number of vertices. We call these coordinates u_g, meaning the per-vertex coordinates in the global configuration (chart 0). In the rest of the simulation, the position integration (Equation 4.16) of texture coordinates all happens in u_g. Second, after each time step, we transition all the u_g to valid local coordinates that fall into the chart domain [0, 1] × [0, 1]. We call these transitioned local coordinates u_l, and use them to locate the reference material point in X space with the coordinate correspondence Π as in Section 4.3.2. Although we made our choice of parameterization following Dong et al.
[2006], one should note that any other base-complex-based parameterization with a well-defined transition function between charts (e.g., triangles [Khodakovsky et al., 2003; Pietroni et al., 2010], quadrangles [Carr et al., 2006; Dong et al., 2006; Kalberer et al., 2007], hexagons [Ray et al., 2006], etc.) could easily be fitted into our framework.

4.8 Results

We implemented our system in C++, and ran the simulations on a 2.66 GHz Intel Core i5 computer with 4 GB of RAM. The code is single-threaded and uses csparse [Davis, 2006] for solving the sparse linear system. We use 1 millisecond time steps for all the demos. ζ is set to 1 in all the demos except torus shaking and dinosaur walking, to obtain the highly damped quasi-static behavior of real skin or tight cloth. ζ is set to 0 in the dinosaur walking example, and varies between 0 and 1 in the torus shaking example.

Figure 4.5: Torus deformation: (a) torus at rest; (b) global parameterization; (c) deformation with our method; (d) deformation with the standard skinning method.

Torus deformation. Fig. 4.5 shows our simulation result on a closed surface. The torus is parameterized into four adjacent charts as shown in Fig. 4.5(b). Using traditional skinning, non-uniform deformations usually cause large, undesirable distortion of the texture, as shown in Fig. 4.5(d). However, our simulation result shows that sliding of surface material uniformly distributes the distortion over the entire surface, as shown in Fig. 4.5(c).

Figure 4.6: Cylinder twisting: (a) with the standard skinning method; (b) fixed border constraints on both sides; (c) left side: fixed border constraints, right side: sliding constraints.

Cylinder twisting. The cylinder example in Fig. 4.6 shows the ability of our method to deal with deformation under different boundary constraints.
If only the right half of the cylinder is twisted, traditional skinning produces deformation only on the right-hand side of the cylinder. Using our method, the deformation can be naturally propagated into the left half of the cylinder depending on the constraint condition assigned by the user. If the right-end boundary of the skin material is fixed to the body mesh, our method produces a uniform twist of the entire skin material, as shown in Fig. 4.6(b). When the right-end boundary is allowed to slide along the body mesh, the body mesh slides independently of the skin material and therefore does not produce any twist, as shown in Fig. 4.6(c).

Head movement. Our method can simulate realistic skin deformations of various body parts. We first present a realistic animation of the head, including the Adam's apple (laryngeal prominence), as a convincing example. Since the Adam's apple slides under the skin, it produces visually interesting deformations of the neck: when we lift the head and open the mouth, the skin slides along the neck, while the Adam's apple does not move much. On the other hand, when we swallow, the Adam's apple moves up and down while the skin remains relatively static. Although these deformation behaviors are critical for visually realistic head animation, they cannot be simulated using the traditional skinning method. Using our method, we can directly simulate these realistic behaviors with minimal manual editing. Fig. 4.7 compares the results of (a) head lifting, (b) mouth opening, and (c) swallowing using our method (top row) and traditional skinning (bottom row).

Figure 4.7: Head movement: (a) lifting head; (b) mouth opening; (c) swallowing. The three figures in the top row show the results of our proposed method, and the bottom three figures are obtained using the standard skinning method.

Figure 4.8: Hand flexing.
We draw a tattoo on the character's neck to highlight the skin deformation. As expected, our method can simulate the realistic skin deformations described above, while traditional skinning produces the opposite results: no skin deformation during head lifting and mouth opening, and large skin distortion during swallowing.

Hand flexing. Our method is not limited to the deformation of texture pixels; it can be applied to other sources such as normal maps. In the hand example shown in Fig. 4.8, the wrinkles and veins are represented in a normal map. When we flex the fingers sequentially from pinky to index finger, the wrinkles and veins of the dorsal side first slide toward the pinky finger (direction of the arrow in Fig. 4.8(b)) and then move toward the index finger (direction of the arrow in Fig. 4.8(c)).

Figure 4.9: Tight cloth slides on the trunk during movement: (a) rest pose; (b) lifting arm; (c) shrug. The upper figures are simulated with our method, while the bottom figures are obtained using standard skinning.

Figure 4.10: Torus shaking: the skin slides on the torus during rigid motion; (a) ζ = 0; (b) ζ = 0.3; (c) ζ = 0.6; (d) ζ = 1. Different values of ζ show different degrees of body movement influence. The black and white layer denotes the rest skin pose of the torus, while the red and white layer denotes the deformed skin state. When ζ = 1, the two layers overlap perfectly, which means the body movement of the torus does not influence its skin. However, when ζ is set to 0, the two layers differ significantly, which indicates the influence of the body movement on the skin of the torus.

Tight cloth sliding on trunk. Sliding of skin material along the body mesh can be efficiently applied to the animation of a character wearing tight clothes.
Without separately simulating the deformation of a thin cloth shell on skin, our method can generate realistic cloth deformations by simply making the cloth material slide freely along the body mesh. Fig. 4.9 compares the results of our method to standard skinning for upper body animation. When the character lifts his left arm or shrugs, the whole cloth is naturally stretched in our method (Fig. 4.9(b) and (c), top), while in the traditional method only the left part of the cloth is stretched for arm lifting (Fig. 4.9(b), bottom) and only the shoulder part is stretched for shrugging (Fig. 4.9(c), bottom). Since we are not introducing an additional simulation step for clothing, the deformation can be computed very efficiently: with 3320 Dofs and 278 constraints, the whole computation costs 164.04 ms per frame.

Torus shaking. Since different types of skin interact with the underlying body in different ways, our method gives users the freedom to choose the mechanical behavior of the skin by simply adjusting the ζ variable. If ζ equals 1, the skin rigidly sticks to the body mesh and does not produce any sliding motion. If ζ equals 0, the sliding behavior of skin is heavily affected by the dynamics of the body mesh. Fig. 4.10 shows an example of a torus shaken by sinusoidal motion. When ζ equals 0, our method can simulate a visually realistic dynamic effect of skin material that lags and overshoots the motion of the body mesh.

Figure 4.11: Dinosaur walking.

Dinosaur walking. The dynamic coupling described above can be efficiently used in character animation to produce visually realistic jiggling of skin material. The dinosaur example provided in the video shows that the skin jiggles when the dinosaur suddenly stops moving.

4.8.1 Performance Analysis

Table 4.2 tabulates performance statistics for the above examples. As shown in the overall computation time (Tps.
in the table), all the examples illustrated in this thesis can be simulated in near-real-time, and the computation time is approximately proportional to the number of Dofs. Significantly, the computational cost of dynamic coupling is relatively small: the computation time for the torus shaking example, which uses the dynamic coupling, is not significantly higher than for the torus deformation example.

Table 4.2: Statistics of examples (Cons. = constraints; torus: torus deformation; torus*: torus shaking)

Example  | Faces | Dofs | Cons. | Time per frame (ms) | Linear solver (ms) | Rest (ms)
torus    | 1152  | 1152 | 0     | 22.44               | 14.73              | 7.71
cylinder | 1276  | 1320 | 44    | 33.72               | 25.67              | 8.05
head     | 1593  | 1724 | 258   | 50.75               | 39.95              | 10.8
hand     | 445   | 516  | 118   | 9.00                | 5.83               | 3.17
cloth    | 3162  | 3320 | 278   | 164.04              | 144.75             | 19.29
torus*   | 1152  | 1152 | 0     | 23.37               | 14.45              | 8.92
dinosaur | 461   | 560  | 198   | 6.27                | 2.73               | 3.54

Chapter 5

Implementation Details

5.1 System Overview

Figure 5.1 illustrates the pipeline of our system. In the initialization step, we build the biomechanical hand model with the strand modeling tool and initialize the skin domain with the parameterization tool. We set the parameters with XML configuration files. After that, our two simulators load the modeling and configuration files and simulate the motions. Finally, we render the generated motion in an external renderer.

Figure 5.1: Pipeline

Our modeling tools are based on Blender, an open source 3D modeling software package. The generated model is stored in XML format. We then use pugixml to parse the XML files. We will illustrate the details of these tools in Section 5.4. In the simulation step, the musculotendon simulator is located upstream of the skin simulator in the pipeline. The musculotendon simulator (Section 5.2) generates a sequence of hand motions, and the skin simulator (Section 5.3) can take that sequence as input to generate detailed skin movements.
However, we want to restate that our skin simulation algorithm itself is not restricted to this pipeline; any mesh deformation can serve as the driver of the skin simulation. We write out the final animation as OBJ sequences for meshes, or XML files for the strands, and again employ Blender to do the final rendering.

5.2 Musculotendon Simulator

Figure 5.2: The layers of the musculotendon simulator

The musculotendon simulator consists of several layers. It includes a rigid body dynamics component, a strand simulation component, a contact handling component, a solver component, and a basic controller that controls the hand movement by computing the muscle activations. Our contact handling algorithm is based on the staggered projections algorithm of Kaufman et al. [2008]. Our controller is based on the velocity-level muscle activation controller of Sueda et al. [2008]. The solver component depends on two external libraries: we use csparse [Davis, 2006] to solve the linear KKT system and employ MOSEK [ApS, 2009] to solve the quadratic program. All the above dynamics are abstracted as DoFs (degrees of freedom), i.e., generalized coordinates, and constraints. We solve the overall system with a KKT matrix as in Equation 4.21. Figure 5.3 illustrates the main classes in the musculotendon simulator. To make the diagram more readable, we ignore some helper classes and omit the association lines in this figure.
[Figure 5.3 here: a UML class diagram. Object is the base class; its subclasses include Dof (with DofFrame), Point, Curve (CurveLine, CurveCircle), Surface (SurfacePlane, with SurfacePlaneFixed and SurfacePlaneBlending), Constraint (PointPointConstraint, PointFrameConstraint, PointCurveConstraint, PointSurfaceConstraint, Joint, Contact), DynObj (Particle, Rigid with RigidCuboid and RigidObj, and Strand), Node, Segment, and Scene (SceneXML, SceneProject).]

Figure 5.3: The class diagram of the musculotendon simulator. This diagram is drawn as UML. We omit the association relations to make the diagram more readable.

5.2.1 Degree of Freedom Related Classes

Dof. The Dof class contains the DoFs (degrees of freedom) of a dynamic object. A typical Dof comprises the 6 DoFs of a rigid body, the 3 DoFs of a Lagrangian strand node, or the 1 DoF of a purely Eulerian node. We separate the DoFs from the dynamic object, since several objects can share the same DoF. For example, the crossing point of two strands in contact can be described by a single 3-DoF object. The Dof class includes several key variables. The mode variable denotes the type of the DoF. There are three DoF modes: FREE, CONSTRAINED, and REMOVED. A free DoF is a normal DoF that is free to move; most DoFs are of this type. A constrained DoF is a DoF with an equality constraint applied to it; a typical use is scripting a rigid body to move with a certain velocity. A removed DoF is similar to a constrained DoF, but rather than applying a constraint, we remove the DoF from the equations of motion. This can be used for constraining a rigid body to not move, or for making a strand node a Lagrangian node by removing its Eulerian degree of freedom. A Dof object contains pointers to its position (variable x) and velocity (variable v) vectors, which are stored in the Scene class. It must know how to integrate itself and also how to apply the appropriate constraint if it is CONSTRAINED. nx and nv denote the sizes of the x and v vectors belonging to this Dof object.

DofFrame.
As a subclass of Dof, a DofFrame is a 6D frame in space and corresponds to the degrees of freedom of a rigid body. We also store the inverse of its current configuration, E^i, for convenience. In DofFrame, the size of the position vector, nx, equals 16, while the size of the velocity vector, nv, equals 6. We integrate the 6 × 1 velocity vector, or twist, into the 4 × 4 configuration with the Rodrigues formula [Li et al., 1994].

5.2.2 Point Related Class

Point. A Point is a location in 3D space. Unlike a particle, it has no mass. From its associated Dof object, each point needs to know how to compute its world position, xw. Sueda et al. [2011] employ reduced coordinates, in which the generalized velocity, vg, and the material Jacobian, Jg, are needed to calculate the world velocity, vw, of the point. In that case, xw is always of size 3 × 1, but the sizes of vg and Jg vary case by case. For example, if the point is restricted to move on a 2D surface, then vg is a 2 × 1 vector and Jg is a 3 × 2 matrix. The world velocity of the point is given by the product vw = Jg * vg. However, we use maximal coordinates with different point constraints (e.g., PointSurfaceConstraint versus the aforementioned surface point). Hence, our Jg is always the 3 × 3 identity matrix and vw is identical to vg.

5.2.3 Surface Related Classes

Surface. A Surface usually serves as the constraint for a strand node. It has at least one parent DofFrame and can cross an arbitrary number of rigid bodies. We store the parent DofFrame pointer as baseFrame and the array of crossing rigid bodies as rigids. In this system we mainly focus on planar surfaces, which correspond to SurfacePlane. Of course, other types of surface could be embedded into our system very easily.
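Returning briefly to DofFrame, the rotational part of its twist integration can be sketched with the Rodrigues formula (a minimal illustration of our own; the full DofFrame update also integrates the translational part of the twist into the 4 × 4 configuration):

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Mat3 = std::array<std::array<double, 3>, 3>;

// Rodrigues formula: R = I + sin(theta) K + (1 - cos(theta)) K^2,
// where K is the cross-product matrix of the unit rotation axis. Here w is
// the angular velocity part of the twist and h the time step, so
// theta = |w| * h. (Rotational part only.)
Mat3 rodrigues(const std::array<double, 3>& w, double h) {
    Mat3 R{};
    R[0][0] = R[1][1] = R[2][2] = 1.0;          // identity
    double norm = std::sqrt(w[0]*w[0] + w[1]*w[1] + w[2]*w[2]);
    if (norm < 1e-12) return R;                 // no rotation
    double theta = norm * h;
    std::array<double, 3> a{w[0]/norm, w[1]/norm, w[2]/norm};
    Mat3 K{};                                   // cross-product matrix of a
    K[0][1] = -a[2]; K[0][2] = a[1];
    K[1][0] = a[2];  K[1][2] = -a[0];
    K[2][0] = -a[1]; K[2][1] = a[0];
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) {
            double K2 = 0.0;                    // (K*K)(i, j)
            for (int k = 0; k < 3; ++k) K2 += K[i][k] * K[k][j];
            R[i][j] += std::sin(theta) * K[i][j] + (1.0 - std::cos(theta)) * K2;
        }
    return R;
}
```

For example, a twist with angular velocity about the z-axis integrated over a quarter turn rotates the x-axis onto the y-axis.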
Just as we discussed in Section 3.3, there are two types of planar surface: SurfacePlaneFixed moves rigidly with its parent DofFrame object, while SurfacePlaneBlending has two parent DofFrame objects and its motion is a blended combination of the configurations of the two parents. We set the blending coefficient in the initialization step, with a default value of 0.5.

5.2.4 Curve Related Classes

Curve. Similar to Surface, Curve is another type of geometry object which usually serves as the constraint geometry for a strand node. The subclasses CurveLine and CurveCircle define curves with a line segment shape and a circle shape, respectively.

5.2.5 Constraint Related Classes

Constraint. Constraint is an abstract class which defines the limitations on the DoFs. We denote our state vector as q and the constraint matrix as G. A constraint can be an equality, Gq = g, or an inequality, Gq < g, where g is a vector which is usually set to zero. Hence, we store the size of the constraint as nGe for equality constraints and nGi for inequality constraints.

Joint is a typical inherited Constraint class. It defines the relations for a pair of linked rigid bodies; the variable rigids therefore stores the pointers to the related rigid bodies. We also store the location of the joint in world space as a 4 × 4 configuration, jToW. There are 6 potential DoFs that can be constrained with the joint object: the first three denote the rotation component, while the latter three denote the translation component. We store the joint type and joint limits in the variables rows and limits. For example, for a ball joint, which only permits rotational motion, the last three entries of the rows array are set to be constrained.

Contact defines the constraint introduced by a collision, which usually happens between a point and a surface. Hence, the contact point contains a normal vector nor and a 3D position pos.
We also store the pointers to the two contacting rigid bodies in rigids. The reader can refer to Kaufman et al. [2008] for further details about the contact algorithm.

PointPointConstraint, PointCurveConstraint, PointSurfaceConstraint, and PointFrameConstraint are different types of point constraints which limit a point to move with another point, along a curve, on a surface, or together with a rigid frame (usually a rigid body), respectively. Therefore, in addition to the corresponding Point object pointer, we store a pointer variable in these classes which references the parent constraint geometry: a Point, Curve, Surface, or DofFrame object.

5.2.6 Dynamical Object Related Classes

DynObj. DynObj is short for dynamical object. It contains not only DoFs but also mass. For example, Particle is the simplest DynObj: a 3D point with mass.

Rigid. Rigid generalizes all the rigid body types at the dynamics level. It contains a DofFrame pointer which references the real DoFs of the rigid body, a generalized 6 × 1 mass array, and a density variable. Different shapes of rigid bodies are derived with different variables describing the geometry. For example, RigidCuboid defines a cuboid rigid body, so we store the width, height, and depth of the cuboid as a 3 × 1 array, whd. In RigidObj, the geometric shape is represented as a mesh. Other shapes of rigid bodies, such as spheres, cylinders, etc., can be introduced as further classes inheriting from Rigid.

Strand. Following Sueda et al. [2011], the strand object is defined in terms of nodes and segments. A Node describes the DoFs, while the dynamics are integrated along the piecewise linear segments from one node to the next.

A Node includes both the Lagrangian DoF from the Point object and the Eulerian DoF as the variable dofE. For a purely Lagrangian node, dofE can be set as REMOVED or CONSTRAINED to eliminate its DoF.
For a purely Eulerian node, the dof variable in the Point object is eliminated similarly. A Segment stores the pointers to the nodes on its two sides and the geometric properties of the strand, such as radius and density.

5.2.7 The Driver Class

Scene. Scene is the entry and integration class of the musculotendon simulator. It stores all the dynamics-related variables, such as the time step size h, position vector x, velocity vector v, overall generalized mass matrix M_W, overall generalized force vector f_W, overall equality constraint matrix Ge_W, inequality constraint matrix Gi_W, and right-hand-side constraint vectors ge_W and gi_W. The suffix W denotes a working variable that is reconstructed in every time step; therefore, we do not need to store the working variables during serialization.

Algorithm 2 Step Overview of the Musculotendon Simulation
1: // Initialization
2: create();
3: // Run-time
4: while simulating do
5:   update();
6:   setIndex();
7:   prefill();
8:   fill();
9:   postfill();
10:  dynamics();
11:  integration();
12: end while

Algorithm 2 describes the general steps. In each time step, we first call the update function to update the basic temporary variables (e.g., the current Jacobian matrix or the collision state). Then we gather all the updated Dof and Constraint objects for the current step. In the setIndex function, we build the index locations of each Dof and Constraint object inside the mass matrix and constraint matrix, and count the sizes of those matrices. After that, we resize the mass and constraint matrices and fill the real entries into them using the previously stored indices. We solve the dynamics in the dynamics step to obtain the updated velocity, and integrate the velocity into the position in the integration step. We derive subclasses from the Scene class for each individual project; the methods create, prefill, and postfill are the only three functions that need to be extended.
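The structure of Algorithm 2 is a template method: the Scene base class fixes the order of the stages, and project subclasses override only create, prefill, and postfill. A stripped-down sketch of this design (the stage bodies are stubs of our own that simply record the call order; they are not the real implementations):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Minimal Scene driver mirroring the run-time loop of Algorithm 2.
// Each stage records its name so the call order can be inspected; project
// subclasses would override the virtual hooks create/prefill/postfill.
class Scene {
public:
    std::vector<std::string> trace;             // recorded call order
    virtual ~Scene() = default;
    virtual void create()   { trace.push_back("create"); }
    virtual void prefill()  { trace.push_back("prefill"); }
    virtual void postfill() { trace.push_back("postfill"); }
    void update()      { trace.push_back("update"); }
    void setIndex()    { trace.push_back("setIndex"); }
    void fill()        { trace.push_back("fill"); }
    void dynamics()    { trace.push_back("dynamics"); }
    void integration() { trace.push_back("integration"); }
    // One time step: lines 5-11 of Algorithm 2.
    void step() {
        update();      // refresh Jacobians, collision state
        setIndex();    // index Dofs/Constraints into the global matrices
        prefill();     // project-specific hook
        fill();        // fill the mass and constraint matrices
        postfill();    // project-specific hook
        dynamics();    // solve the KKT system for velocities
        integration(); // integrate velocities into positions
    }
};
```

This keeps the solver loop closed for modification while leaving the three hooks open for per-project extension, which matches how the subclasses of Scene are described above.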
5.3 Skin Simulator

[Figure 5.4 here: a UML class diagram. Shell holds an array of Modifier objects and a Mesh; Modifier is subclassed by ModifierSkinning, ModifierMeshUpdater, and ModifierSlider (with ModifierSliderDynamical); ModifierSlider holds an atlas of Patch objects and a dynamic mesh, and the skin DoFs are stored as LocalCoordinate objects (u, v, patchId).]

Figure 5.4: The class diagram of the skin simulator

Figure 5.4 illustrates the class diagram of the skin simulator. Again we omit some helper classes and focus on the main classes to give the reader a clear idea of our system design. We use the class Shell to represent the 2D skin simulation domain. It contains a variable mesh to denote the body mesh (Section 4.1), and an array of modifiers which modify both the body and the skin. Modifier is an abstract class which generalizes the algorithms for modifying the skin or the body mesh. The derived modifier classes override the modify method to fill in the actual algorithm, and override the draw method to achieve their specific rendering effects. For example, ModifierSkinning deforms the body mesh vertices via the dual quaternion skinning algorithm [Kavan et al., 2008] with respect to the skeleton movement; its draw method displays the weight magnitudes for each driving rigid body. ModifierMeshUpdater updates the mesh vertex positions in each time step. The mesh deformation sequence can be generated by an artist, by other software, or by any other mesh deformation algorithm. ModifierSlider wraps our elastodynamic skin algorithm. The variable skinCoordinates stores the skin DoFs. The skin atlas is stored as an array of Patch objects. Each Patch object stores its own geometry information together with its adjacency information.
Since we only use rectangular charts in our algorithm, only the minimal and maximal "uv" coordinates (the two components of the skin domain in Section 4.1) are needed to describe the whole geometry of each patch. One should note that the skin simulation domain is not required to cover the whole body mesh domain. For example, we can load a complete mesh deformation of the whole human body when only the hand region is of interest for skin simulation. Therefore, we separate the "dynamic mesh" from the full body mesh as a variable dynMesh, and only apply the skin simulation in the dynMesh region.

5.4 Modeling Tool

5.4.1 Strand Modeling Tool

Figure 5.5: Strand Modeling Tool

Since our strand simulation algorithm is, by definition, deeply coupled with rigid body simulation, we need to model the strands and the rigid bodies together. Figure 5.5 is a snapshot of our strand modeling tool. We use Blender to build rigid body objects, joint constraints, and plane objects to model the skeleton, joints, and constraint planes (Section 3.3). A strand is modeled as a path curve object, which is piecewise linear.

Figure 5.6: Musculotendon modeling tool panels: (a) shows the settings of the scene; (b) shows the modeling parameters needed for the strand object.

Figure 5.6 illustrates the panels of our musculotendon modeling tool. We can import or export a modeled scene and render the generated musculotendon and skeleton simulation in this panel (Figure 5.6a). Figure 5.6b shows the modeling parameters needed for the strand object, such as the radius. Each node can be set as an Eulerian node or a Lagrangian node. We can choose which object a node is constrained to, as well as the type of the corresponding constraint.

5.4.2 Parameterization Tool

We can segment the body mesh (Section 4.1) automatically as in Dong et al. [2006], or manually with Blender. Figure 5.7 shows the manual chart segmentation tool in Blender. The yellow dots denote the border of each chart.
We then store the border information and solve the uv coordinates with the discrete Laplace operator L.

Figure 5.7: Chart Segmentation Tool

We use u to denote the stacked uv coordinates of all vertices. The Laplacian equation can then be written as

Lu = 0, where L_{ij} = \begin{cases} \sum_k w_{ik} & \text{if } i = j, \\ -w_{ij} & \text{if edge } (i,j) \text{ exists}, \\ 0 & \text{otherwise}, \end{cases}    (5.1)

where w_{ij} = \frac{1}{2}(\cot \alpha_{ij} + \cot \beta_{ij}) are the discrete harmonic weights. Here \alpha_{ij} and \beta_{ij} are the angles opposite the edge (i, j) of the mesh. Since a vertex on the chart border has multiple uv coordinates, we use a constraint matrix G and a right-hand-side vector b to ensure that the repeated coordinates have the same value during transition, such that

Gu = b.    (5.2)

We solve Equation 5.1 and Equation 5.2 together as a KKT system,

\begin{bmatrix} L & G^T \\ G & 0 \end{bmatrix} \begin{bmatrix} u \\ \lambda \end{bmatrix} = \begin{bmatrix} 0 \\ b \end{bmatrix}.    (5.3)

The obtained u can be further tuned with the uv coordinate editor in Blender.

Chapter 6

Conclusion

In this thesis, we have presented a biomechanical simulation system for the hand musculoskeletal system and skin. We first presented a simulation framework for tendons that can handle the complex routing constraints of the hand, such as pulleys and sheaths. By extending the constrained strands framework of Sueda et al. [2011], we were able to efficiently detect collisions between bones and tendons, and to take large time steps despite the stiffness of the tendons. We showed that using a tendon simulator gives us coupled motion of the digits, as well as energy storage in extended tendons. Finally, we showed that we can also simulate deformities of the hand by changing some of the tendon parameters. We also introduced a new method for simulating human-like skin that is in close contact with the underlying structures of the body. The key to the method is a novel Eulerian discretization of thin membranes that are constrained to slide on surfaces of arbitrary topology.
This discretization makes the simulation very robust, since the major problems of dealing with contact between the skin and the body are avoided. The method is also easy to implement and efficient. Since it is simple to integrate the method with any animation pipeline as a post-process, we hope it will be widely useful.

Future Work. Although we were able to simulate a highly complex system of tendons and ligaments, many approximations remain. For example, we still approximate joints as mechanical joints. An interesting avenue of future work would be to extend the strands framework to handle joint limits using ligaments. Our technique should work very well with the thumb, and its inclusion is in the works. The naturalness of the resulting animations depends heavily on the activations computed by the activation controller; using a more sophisticated controller would improve the results.

Our skin simulation method has several limitations. First of all, to make it easy to integrate with any animation pipeline, we chose one-way coupling between body and mesh. Two-way coupling is useful when the skin (or a tight-fitting suit) is to influence the body's dynamics. Secondly, simulating wrinkles and buckling from the computed stresses could easily be included in this framework, but was not. In addition, we assume that the skin can be embedded in a stress-free state on the body surface at some configuration, which may not be possible for some biological materials. It should be possible to account for this using the same methods used for representing plastically deformed solids. Moreover, even though our framework is general, our examples used a simple St. Venant-Kirchhoff material. We tried a more biomechanically realistic Fung-like skin model with an exponential term to prevent large deformation, but it introduced numerical instabilities and may need new types of integrators. This is an area for future work.
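For concreteness, the St. Venant-Kirchhoff material mentioned above has a simple polynomial energy density, which the sketch below evaluates for a given deformation gradient (the Lamé parameters and the test deformations are illustrative values, not calibrated skin parameters):

```python
import numpy as np

def svk_energy_density(F, mu, lam):
    """St. Venant-Kirchhoff energy density W(F) = mu*tr(E^2) + (lam/2)*tr(E)^2,
    where E = (F^T F - I)/2 is the Green strain tensor."""
    E = 0.5 * (F.T @ F - np.eye(F.shape[0]))
    return mu * np.trace(E @ E) + 0.5 * lam * np.trace(E) ** 2

# An undeformed membrane (F = I) stores no energy...
print(svk_energy_density(np.eye(2), mu=1.0, lam=1.0))        # 0.0
# ...while a uniform 2x stretch stores a finite amount.
print(svk_energy_density(2.0 * np.eye(2), mu=1.0, lam=1.0))  # 9.0
```

A Fung-like model instead places an exponential on the strain invariants; its rapidly growing stiffness under stretch is what led to the numerical instabilities noted above.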
Last but not least, currently we only use low-order methods for discretizing space and time, following the practice in animation, which favors efficiency over accuracy of the simulation.

Bibliography

Albrecht, I., Haber, J., and Seidel, H.-P. (2003). Construction and animation of anatomically based human hand models. In ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pages 98–109.

ApS, M. (2009). The MOSEK Optimization Tools Manual. MOSEK.

Baraff, D. and Witkin, A. (1998). Large steps in cloth simulation. In Proc. SIGGRAPH 1998, volume 32, pages 43–54.

Bargteil, A. W., Wojtan, C., Hodgins, J. K., and Turk, G. (2007). A finite element method for animating large viscoplastic flow. ACM Trans. Graph., 26(3):16:1–16:8.

Beeler, T., Hahn, F., Bradley, D., Bickel, B., Beardsley, P., Gotsman, C., Sumner, R., and Gross, M. (2011). High-quality passive facial performance capture using anchor frames. ACM Trans. Graph., 30(4):75.

Bergou, M., Audoly, B., Vouga, E., Wardetzky, M., and Grinspun, E. (2010). Discrete viscous threads. ACM Trans. Graph., 29(4):116:1–116:10.

Bergou, M., Wardetzky, M., Robinson, S., Audoly, B., and Grinspun, E. (2008). Discrete elastic rods. ACM Trans. Graph., 27(3):63:1–63:12.

Blemker, S. S. and Delp, S. L. (2005). Three-dimensional representation of complex muscle architectures and geometries. Annals of Biomedical Engineering, 33(5):661–673.

Carr, N. A., Hoberock, J., Crane, K., and Hart, J. C. (2006). Rectangular multi-chart geometry images. In Proceedings of the Fourth Eurographics Symposium on Geometry Processing, SGP '06, pages 181–190, Aire-la-Ville, Switzerland. Eurographics Association.

Chen, D. T. and Zeltzer, D. (1992). Pump it up: computer animation of a biomechanically based model of muscle using the finite element method. In Computer Graphics (Proc. SIGGRAPH 92), volume 26, pages 89–98. ACM.

Choe, B., Lee, H., and Ko, H. (2001).
Performance-driven muscle-based facial animation. The Journal of Visualization and Computer Animation, 12(2):67–79.

Cline, M. B. and Pai, D. K. (2003). Post-stabilization for rigid body simulation with contact and constraints. In ICRA, pages 3744–3751.

Damsgaard, M., Rasmussen, J., Christensen, S. T., Surma, E., and de Zee, M. (2006). Analysis of musculoskeletal systems in the AnyBody Modeling System. Simulation Modelling Practice and Theory, 14(8):1100–1111.

Davis, T. A. (2006). Direct Methods for Sparse Linear Systems. SIAM Book Series on the Fundamentals of Algorithms. SIAM.

Delp, S. L., Anderson, F. C., Arnold, A. S., Loan, P., Habib, A., John, C. T., Guendelman, E., and Thelen, D. G. (2007). OpenSim: open-source software to create and analyze dynamic simulations of movement. IEEE Transactions on Biomedical Engineering, 54(11):1940–1950.

Dong, S., Bremer, P.-T., Garland, M., Pascucci, V., and Hart, J. C. (2006). Spectral surface quadrangulation. In ACM SIGGRAPH 2006 Papers, SIGGRAPH '06, pages 1057–1066, New York, NY, USA. ACM.

ElKoura, G. and Singh, K. (2003). Handrix: animating the human hand. In Proceedings of the 2003 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pages 110–119. Eurographics Association.

Feldman, B. E., O'Brien, J. F., Klingner, B. M., and Goktekin, T. G. (2005). Fluids in deforming meshes. In Proceedings of the 2005 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, SCA '05, pages 255–259, New York, NY, USA. ACM.

Garner, B. A. and Pandy, M. G. (2000). The obstacle-set method for representing muscle paths in musculoskeletal models. Computer Methods in Biomechanics and Biomedical Engineering, 3(1):1–30.

Gilles, B., Bousquet, G., Faure, F., and Pai, D. K. (2011). Frame-based elastic models. ACM Trans. Graph., 30(2):15:1–15:12.

Gourret, J., Thalmann, N., and Thalmann, D. (1989). Simulation of object and human skin deformations in a grasping task. In ACM SIGGRAPH Computer Graphics, volume 23, pages 21–30.
Grinspun, E., Hirani, A. N., Desbrun, M., and Schröder, P. (2003). Discrete shells. In ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pages 62–67.

Hormann, K., Lévy, B., and Sheffer, A. (2007). Mesh parameterization: theory and practice. In ACM SIGGRAPH 2007 Courses, SIGGRAPH '07, New York, NY, USA. ACM.

Huang, H., Zhao, L., Yin, K., Qi, Y., Yu, Y., and Tong, X. (2011). Controllable hand deformation from sparse examples with rich details. In ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pages 73–82.

James, D. and Twigg, C. (2005). Skinning mesh animations. ACM Trans. Graph., 24(3):399–407.

Johnson, E., Morris, K., and Murphey, T. (2009). A variational approach to strand-based modeling of the human hand. In Chirikjian, G., Choset, H., Morales, M., and Murphey, T., editors, Algorithmic Foundation of Robotics VIII, volume 57 of Springer Tracts in Advanced Robotics, pages 151–166. Springer Berlin/Heidelberg.

Kälberer, F., Nieser, M., and Polthier, K. (2007). QuadCover: surface parameterization using branched coverings.

Kapandji, I. A. (2007). The Physiology of the Joints, Volume 1: Upper Limb. Churchill Livingstone, 6th edition.

Kaufman, D. M., Sueda, S., James, D. L., and Pai, D. K. (2008). Staggered projections for frictional contact in multibody systems. ACM Trans. Graph., 27(5):164:1–164:11.

Kaufman, K. R., Morrow, D. A., Odegard, G. M., Donahue, T. L. H., Cottler, P. J., Ward, S., and Lieber, R. (2010). 3D model of skeletal muscle to predict intramuscular pressure. In American Society of Biomechanics Annual Conference.

Kavan, L., Collins, S., Žára, J., and O'Sullivan, C. (2008). Geometric skinning with approximate dual quaternion blending. ACM Trans. Graph., 27(4):105:1–105:23.

Khodakovsky, A., Litke, N., and Schröder, P. (2003). Globally smooth parameterizations with low distortion. In ACM SIGGRAPH 2003 Papers, SIGGRAPH '03, pages 350–357, New York, NY, USA. ACM.

Kry, P. G., James, D. L., and Pai, D.
K. (2002). EigenSkin: real time large deformation character skinning in hardware. In ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pages 153–159.

Kry, P. G. and Pai, D. K. (2006). Interaction capture and synthesis. ACM Trans. Graph., 25(3):872–880.

Kurihara, T. and Miyata, N. (2004). Modeling deformable human hands from medical images. In ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pages 355–363.

Lang, C. and Schieber, M. (2004). Human finger independence: limitations due to passive mechanical coupling versus active neuromuscular control. J Neurophysiol, 92(5):2802–2810.

Lee, S.-H., Sifakis, E., and Terzopoulos, D. (2009). Comprehensive biomechanical modeling and simulation of the upper body. ACM Trans. Graph., 28(4):99:1–99:17.

Lee, S.-H. and Terzopoulos, D. (2006). Heads up!: biomechanical modeling and neuromuscular control of the neck. ACM Trans. Graph., 25(3):1188–1198.

Leijnse, J. N., Bonte, J. E., Landsmeer, J. M., Kalker, J. J., Van Der Meulen, J. C., and Snijders, C. J. (1992). Biomechanics of the finger with anatomical restrictions: the significance for the exercising hand of the musician. Journal of Biomechanics, 25(11):1253–1264.

Lentine, M., Aanjaneya, M., and Fedkiw, R. (2011). Mass and momentum conservation for fluid simulation. In Proceedings of the 2011 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, SCA '11, pages 91–100, New York, NY, USA. ACM.

Levin, D. I. W., Litven, J., Jones, G. L., Sueda, S., and Pai, D. K. (2011). Eulerian solid simulation with contact. ACM Trans. Graph., 30(4):36:1–36:9.

Lewis, J., Cordner, M., and Fong, N. (2000). Pose space deformation: a unified approach to shape interpolation and skeleton-driven deformation. In Proc. SIGGRAPH, pages 165–172.

Li, Z., Sastry, S. S., and Murray, R. (1994). A Mathematical Introduction to Robotic Manipulation. CRC Press.

Liu, C. K. (2008). Synthesis of interactive hand manipulation.
In Proceedings of the 2008 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pages 163–171. Eurographics Association.

Liu, C. K. (2009). Dextrous manipulation from a grasping pose. In ACM SIGGRAPH 2009 Papers, SIGGRAPH '09, pages 59:1–59:6, New York, NY, USA. ACM.

Magnenat-Thalmann, N., Laperrière, R., and Thalmann, D. (1988). Joint-dependent local deformations for hand animation and object grasping. In Proceedings of Graphics Interface '88, pages 26–33.

Maillot, J., Yahia, H., and Verroust, A. (1993). Interactive texture mapping. In Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '93, pages 27–34, New York, NY, USA. ACM.

McAdams, A., Zhu, Y., Selle, A., Empey, M., Tamstorf, R., Teran, J., and Sifakis, E. (2011). Efficient elasticity for character skinning with contact and collisions. ACM Trans. Graph., 30(4):37:1–37:12.

Mohr, A. and Gleicher, M. (2003). Building efficient, accurate character skins from examples. ACM Trans. Graph., 22(3):562–568.

Monagan, M. B., Geddes, K. O., Heal, K. M., Labahn, G., Vorkoetter, S. M., McCarron, J., and DeMarco, P. (2005). Maple 10 Programming Guide. Maplesoft, Waterloo, ON, Canada.

Ng-Thow-Hing, V. (2001). Anatomically-based models for physical and geometric reconstruction of humans and other animals.

Park, S. I. and Hodgins, J. K. (2006). Capturing and animating skin deformation in human motion. ACM Trans. Graph., 25(3):881–889.

Pietroni, N., Tarini, M., and Cignoni, P. (2010). Almost isometric mesh parameterization through abstract domains. IEEE Transactions on Visualization and Computer Graphics, 16(4):621–635.

Pollard, N. S. and Zordan, V. B. (2005). Physically based grasping control from example. In Proc. ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pages 311–318.

Qin, H. and Terzopoulos, D. (1996). D-NURBS: a physics-based framework for geometric design.
IEEE Transactions on Visualization and Computer Graphics, 2(1):85–96.

Ray, N., Li, W. C., Lévy, B., Sheffer, A., and Alliez, P. (2006). Periodic global parameterization. ACM Trans. Graph., 25(4):1460–1485.

Sheffer, A., Praun, E., and Rose, K. (2006). Mesh parameterization methods and their applications. Foundations and Trends in Computer Graphics and Vision, 2(2):105–171.

Sifakis, E., Hellrung, J., Teran, J., Oliker, A., and Cutting, C. (2009). Local flaps: a real-time finite element based solution to the plastic surgery defect puzzle. Studies in Health Technology and Informatics, 142:313–318.

Sifakis, E., Neverov, I., and Fedkiw, R. (2005). Automatic determination of facial muscle activations from sparse motion capture marker data. ACM Trans. Graph., 24(3):417–425.

Spillmann, J. and Teschner, M. (2009). Cosserat nets. IEEE Trans. Vis. Comput. Graph., 15(2):325–338.

Stam, J. (1999). Stable fluids. In Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '99, pages 121–128, New York, NY, USA. ACM Press/Addison-Wesley Publishing Co.

Stam, J. (2003). Flows on surfaces of arbitrary topology. ACM Trans. Graph., 22(3):724–731.

Sueda, S., Jones, G. L., Levin, D. I. W., and Pai, D. K. (2011). Large-scale dynamic simulation of highly constrained strands. ACM Trans. Graph., 30(4):39:1–39:9.

Sueda, S., Kaufman, A., and Pai, D. K. (2008). Musculotendon simulation for hand animation. ACM Trans. Graph., 27(3):83:1–83:8.

Teran, J., Blemker, S., Hing, V. N. T., and Fedkiw, R. (2003). Finite volume methods for the simulation of skeletal muscle. In Proceedings of the 2003 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, SCA '03, pages 68–74.

Teran, J., Sifakis, E., Blemker, S. S., Ng-Thow-Hing, V., Lau, C., and Fedkiw, R. (2005). Creating and simulating skeletal muscle from the visible human data set. IEEE Transactions on Visualization and Computer Graphics, 11(3):317–328.

Terzopoulos, D. and Waters, K. (1990).
Physically-based facial modelling, analysis, and animation. The Journal of Visualization and Computer Animation, 1(2):73–80.

Teschner, M., Heidelberger, B., Müller, M., Pomeranets, D., and Gross, M. (2003). Optimized spatial hashing for collision detection of deformable objects. In Proceedings of Vision, Modeling, Visualization (VMV '03), pages 47–54.

Tsang, W., Singh, K., and Fiume, E. (2005). Helping hand: an anatomically accurate inverse dynamics solution for unconstrained hand motion. In ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pages 319–328.

Valero-Cuevas, F. J., Yi, J.-W., Brown, D., McNamara, R. V., Paul, C., and Lipson, H. (2007). The tendon network of the fingers performs anatomical computation at a macroscopic scale. IEEE Transactions on Biomedical Engineering, 54(6):1161–1166.

Volino, P., Magnenat-Thalmann, N., and Faure, F. (2009). A simple approach to nonlinear tensile stiffness for accurate cloth simulation. ACM Trans. Graph., 28(4):105.

Wicke, M., Ritchie, D., Klingner, B. M., Burke, S., Shewchuk, J. R., and O'Brien, J. F. (2010). Dynamic local remeshing for elastoplastic simulation. ACM Trans. Graph., 29:49:1–49:11.

Wilhelms, J. and Van Gelder, A. (1997). Anatomically based modeling. In Proc. SIGGRAPH 1997, pages 173–180.

Wu, Y., Kalra, P., and Thalmann, N. (1996). Simulation of static and dynamic wrinkles of skin. In Computer Animation '96 Proceedings, pages 90–97.

Ying, L., Fu, J. L., and Pollard, N. S. (2007). Data-driven grasp synthesis using shape matching and task-based pruning. IEEE Transactions on Visualization and Computer Graphics, 13(4):732–747.

Zancolli, E. (1979). Structural and Dynamic Bases of Hand Surgery. Lippincott.

Zhu, Q.-H., Chen, Y., and Kaufman, A. (2001). Real-time biomechanically-based muscle volume deformation using FEM. In Computer Graphics Forum, volume 17, pages 275–284. Wiley Online Library.
Appendix A

Modeling Details

Figure A.1: Overview of the tendons and bones of the index finger

In this appendix, we show the detailed arrangement of the nodes and constraints of the tendon network of the index finger. We made two simplifications for our full hand model. First, the middle, ring, and pinky fingers employ the same tendon network arrangement as the index finger. Second, to reduce the DoF of the whole system, we omit the symmetric components on the ulnar side of each finger.

Figure A.1 illustrates the 9 tendons and ligaments in our index finger model: the dorsal interosseus (DI), palmar interosseus (PI), flexor digitorum profundus (FDP), flexor digitorum superficialis (FDS), extensor digitorum communis (EDC), intrinsic medial (IntMed), extrinsic lateral (ExtLat), lumbrical (LUM), and oblique ligament (OL).

Figure A.2: Overview of the constraint objects

Figure A.2 shows the 5 bones and 4 joints of the index finger. It also shows the 11 constraint planes and 2 constraint lines needed for our simulation. The red planes denote blending planes, while the green planes stand for fixed planes (Section 3.3). The green line on the metacarpal is a line segment called 'II DI line'. We hide the symmetric 'II PI line' segment, which is located on the other side of the metacarpal. We use the prefix 'II ' to denote that a component is related to the index finger. Similarly, 'I ', 'III ', 'IV ', and 'V ' denote components related to the thumb, middle finger, ring finger, and pinky finger, respectively. Table A.1 summarizes the details of each constraint plane.
Plane Name            Type      Parent 1    Parent 2
II Metacarpal plane1  Blending  Metacarpal  Proximal
II Metacarpal vplane  Fixed     Metacarpal  -
II Proximal plane0    Blending  Metacarpal  Proximal
II Proximal hplane    Fixed     Proximal    -
II Proximal vplane    Fixed     Proximal    -
II Proximal plane1    Blending  Proximal    Middle
II Middle plane0      Blending  Metacarpal  Proximal
II Middle hplane      Fixed     Middle      -
II Middle vplane      Fixed     Middle      -
II Middle plane1      Blending  Middle      Distal

Table A.1: Constraint planes

Section A.1 to Section A.9 illustrate the node arrangement of each tendon. Before stepping into the details, we show the legend for each node in Figure A.3. A solid circle denotes a Lagrangian node, which fixes the Eulerian coordinate, while a hollow circle denotes an Eulerian node, which allows the material to slide through. The color of each circle denotes the type of constraint applied to the Lagrangian coordinates. Blue means that a PointFrameConstraint is attached to the Lagrangian DoF. Purple means that the point is limited to move on a line curve with a PointCurveConstraint. Red and orange circles both denote a PointPlaneConstraint: a red circle implies that the constraint plane the node attaches to is a SurfacePlaneBlending, while an orange circle means that the plane is a SurfacePlaneFixed.

Figure A.3: Legend of the nodes

A.1 Dorsal Interosseus (DI)

Figure A.4: Dorsal interosseus (DI)

NI  E/L  Constraint Type  Constraint  Share
1   Lag  PointCurve       II DI line  -
2   Eul  PointFrame       Metacarpal  -
3   Lag  PointFrame       Proximal    -

Table A.2: Nodes of DI. NI = node index; E/L = Eulerian/Lagrangian node type (Eul = Eulerian, Lag = Lagrangian). Share indicates whether this node is shared by other tendons.

Figure A.4 illustrates the 3 nodes of DI. Table A.2 shows the details of each node. Column 'NI' denotes the node index. Column 'E/L' shows the node type. Column 'Constraint Type' gives the type of the constraint for each node.
'PointFrame', 'PointCurve', and 'PointPlane' correspond to PointFrameConstraint, PointCurveConstraint, and PointPlaneConstraint, respectively. Sometimes the point object in a node of one tendon is shared by another tendon object; column 'Share' summarizes this information.

DI consists of three nodes. The first node is a Lagrangian node constrained to a line curve called 'II DI line', the green line in Figure A.4. The second node is an Eulerian node, which moves along with the metacarpal bone via a PointFrameConstraint. The last node is a Lagrangian node attached to the proximal bone with another PointFrameConstraint. We attach a spring muscle at node 1 along the 'II DI line' direction to pull DI.

A.2 Palmar Interosseus (PI)

Figure A.5: Palmar interosseus (PI)

NI  E/L  Constraint Type  Constraint  Share
1   Lag  PointCurve       II PI line  -
2   Eul  PointFrame       Metacarpal  -
3   Lag  PointFrame       Proximal    -

Table A.3: Nodes of PI. The green line is the II PI line constraint line.

Figure A.5 and Table A.3 illustrate the PI tendon. As with DI, the first node of PI is constrained to a line called 'II PI line' with a PointCurveConstraint, and a spring muscle is attached to node 1 in the direction of the 'II PI line' segment.

A.3 Flexor Digitorum Profundus (FDP)

Figure A.6: Flexor digitorum profundus (FDP)

NI  E/L  Constraint Type  Constraint   Share
1   Lag  PointCurve       II FDP line  -
2   Eul  PointFrame       Wrist        -
3   Eul  PointFrame       Metacarpal   -
4   Eul  PointFrame       Proximal     -
5   Eul  PointFrame       Proximal     -
6   Eul  PointFrame       Middle       -
7   Eul  PointFrame       Middle       -
8   Lag  PointFrame       Distal       -

Table A.4: Nodes of FDP

Figure A.6 and Table A.4 illustrate the arrangement of FDP. FDP is pulled by a spring muscle at node 1 in the II FDP line direction.

A.4 Flexor Digitorum Superficialis (FDS)

Figure A.7: Flexor digitorum superficialis (FDS)

Figure A.7 and Table A.5 illustrate the arrangement of FDS.
FDS is pulled by a spring muscle at node 1 in the II FDS line direction.

NI  E/L  Constraint Type  Constraint   Share
1   Lag  PointCurve       II FDS line  -
2   Eul  PointFrame       Wrist        -
3   Eul  PointFrame       Metacarpal   -
4   Eul  PointFrame       Proximal     -
5   Eul  PointFrame       Proximal     -
6   Lag  PointFrame       Middle       -

Table A.5: Nodes of FDS

A.5 Extensor Digitorum Communis (EDC)

Figure A.8: Nodes of EDC

Figure A.8 and Table A.6 illustrate the arrangement of EDC. EDC is pulled by a spring muscle at node 1 in the II EDC line direction. Nodes 5 and 6 have multiple point-plane constraints. First, they are constrained with the pair of constraint planes II Metacarpal plane1 and II Proximal plane0; these constraints restrict the nodes to always move near the joint. In addition, these two nodes are constrained with the vertical plane II Metacarpal vplane, which is placed on the notch of the metacarpal. This vertical plane prevents the nodes from sliding out of the bone notch. We apply the same setting to node 10.

Nodes 1 to 8 of EDC are shared with ExtLat, and nodes 9 to 11 are shared with IntMed. By 'share' we mean that the two nodes contain the same Point object pointer. One should note that such sharing only happens on the point position of each node, in other words at the Lagrangian level. The 'shared' nodes still contain their own Eulerian DoF, and since our integration happens on each segment, the material of a shared segment, which is connected by two pairs of shared nodes, is counted individually for each tendon. For example, EDC and ExtLat share the first 8 nodes and the corresponding 7 segments.
Such shared nodes move together, but EDC and ExtLat still have different radii and densities, and the shared segments are integrated individually on each tendon during simulation.

NI  E/L  Constraint Type  Constraint                                                      Share
1   Lag  PointCurve       II EDC line                                                     ExtLat (1)
2   Eul  PointFrame       Wrist                                                           ExtLat (2)
3   Eul  PointFrame       Metacarpal                                                      ExtLat (3)
4   Eul  PointFrame       Metacarpal                                                      ExtLat (4)
5   Eul  PointPlane       II Metacarpal plane1, II Proximal plane0, II Metacarpal vplane  ExtLat (5)
6   Eul  PointPlane       II Metacarpal plane1, II Proximal plane0, II Metacarpal vplane  ExtLat (6)
7   Eul  PointFrame       Proximal                                                        ExtLat (7)
8   Eul  PointFrame       Proximal                                                        ExtLat (8)
9   Eul  PointFrame       Proximal                                                        IntMed (6)
10  Eul  PointPlane       II Proximal plane1, II Middle plane0, II Proximal vplane        IntMed (7)
11  Lag  PointFrame       Middle                                                          IntMed (8)

Table A.6: Nodes of EDC

A.6 Intrinsic Medial (IntMed)

Figure A.9 and Table A.7 illustrate the 8 nodes of IntMed. It is pulled in the II LUM line direction with a spring muscle. The first 4 nodes are shared with LUM, and the last 3 nodes are shared with EDC. The special node we need to elaborate on is node 4, which is constrained by a PointPlaneConstraint with the fixed plane II Proximal hplane. Unlike other plane-constrained nodes, which use an equality constraint so that the node is only allowed to move inside the plane, for node 4 the constraint is an inequality constraint. This means that the point-plane constraint is triggered only when node 4 hits the plane II Proximal hplane; it prevents the node from sliding to the ventral side of the finger. In the following sections, we will only point out the PointPlaneConstraint
objects that use inequality constraints; unless explicitly mentioned, a constraint is considered an equality constraint by default.

Figure A.9: Nodes of intrinsic medial (IntMed)

NI  E/L  Constraint Type  Constraint                                                Share
1   Lag  PointCurve       II LUM line                                               LUM (1)
2   Eul  PointFrame       Metacarpal                                                LUM (2)
3   Eul  PointFrame       Proximal                                                  LUM (3)
4   Eul  PointPlane       II Proximal hplane                                        LUM (4)
5   Eul  PointFrame       Proximal                                                  -
6   Eul  PointFrame       Proximal                                                  EDC (9)
7   Eul  PointPlane       II Proximal plane1, II Middle plane0, II Proximal vplane  EDC (10)
8   Lag  PointFrame       Middle                                                    EDC (11)

Table A.7: Nodes of IntMed

A.7 Extrinsic Lateral (ExtLat)

Figure A.10 and Table A.8 illustrate the arrangement of ExtLat. ExtLat is pulled by the same spring muscle at node 1, in the II EDC line direction, as EDC. ExtLat shares the first 8 nodes with EDC, the last 7 nodes with LUM, and the last 3 nodes with both OL and LUM. Nodes 9 and 10 are constrained with a PointPlaneConstraint and the plane II Proximal hplane via inequality constraints; they are also constrained with the pair of planes II Proximal plane1 and II Middle plane0 via equality constraints.

Figure A.10: Nodes of extrinsic lateral (ExtLat)
NI  E/L  Constraint Type  Constraint                                                      Share
1   Lag  PointCurve       II EDC line                                                     EDC (1)
2   Eul  PointFrame       Wrist                                                           EDC (2)
3   Eul  PointFrame       Metacarpal                                                      EDC (3)
4   Eul  PointFrame       Metacarpal                                                      EDC (4)
5   Eul  PointPlane       II Metacarpal plane1, II Proximal plane0, II Metacarpal vplane  EDC (5)
6   Eul  PointPlane       II Metacarpal plane1, II Proximal plane0, II Metacarpal vplane  EDC (6)
7   Eul  PointFrame       Proximal                                                        EDC (7)
8   Eul  PointFrame       Proximal                                                        EDC (8)
9   Eul  PointPlane       II Proximal plane1, II Middle plane0, II Proximal hplane        LUM (5)
10  Eul  PointPlane       II Proximal plane1, II Middle plane0, II Proximal hplane        LUM (6)
11  Eul  PointFrame       Middle                                                          LUM (7)
12  Eul  PointFrame       Middle                                                          LUM (8)
13  Eul  PointFrame       Middle                                                          LUM (9), OL (4)
14  Eul  PointPlane       II Middle plane1, II Distal plane0, II Middle vplane            LUM (10), OL (5)
15  Lag  PointFrame       Distal                                                          LUM (11), OL (6)

Table A.8: Nodes of ExtLat

A.8 Lumbrical (LUM)

Figure A.11: Nodes of lumbrical (LUM)

NI  E/L  Constraint Type  Constraint                                                Share
1   Lag  PointCurve       II LUM line                                               IntMed (1)
2   Eul  PointFrame       Metacarpal                                                IntMed (2)
3   Eul  PointFrame       Proximal                                                  IntMed (3)
4   Eul  PointPlane       II Proximal hplane                                        IntMed (4)
5   Eul  PointPlane       II Proximal plane1, II Middle plane0, II Proximal vplane  ExtLat (9)
6   Eul  PointPlane       II Proximal plane1, II Middle plane0, II Proximal vplane  ExtLat (10)
7   Eul  PointFrame       Middle                                                    ExtLat (11)
8   Eul  PointFrame       Middle                                                    ExtLat (12)
9   Eul  PointFrame       Middle                                                    ExtLat (13), OL (4)
10  Eul  PointPlane       II Middle plane1, II Distal plane0, II Middle vplane      ExtLat (14), OL (5)
11  Lag  PointFrame       Distal                                                    ExtLat (15), OL (6)

Table A.9: Nodes of LUM

Figure A.11 and Table A.9 illustrate the arrangement of LUM. LUM is pulled by the same spring muscle at node 1, in the II LUM line direction, as IntMed. The first 4 nodes of LUM are shared with IntMed, the last 7 nodes are shared with ExtLat, and the last 3 nodes are shared with OL and ExtLat together.
Since all of these nodes have been discussed in the previous sections, we skip their details here.

A.9 Oblique Ligament (OL)

Figure A.12: Nodes of oblique ligament (OL)

NI  E/L  Constraint Type  Constraint                                            Share
1   Lag  PointFrame       Proximal                                              -
2   Eul  PointFrame       Proximal                                              -
3   Eul  PointPlane       II Middle plane0, II Middle hplane                    -
4   Eul  PointFrame       Middle                                                ExtLat (13), LUM (9)
5   Eul  PointPlane       II Middle plane1, II Distal plane0, II Middle vplane  ExtLat (14), LUM (10)
6   Lag  PointFrame       Distal                                                ExtLat (15), LUM (11)

Table A.10: Nodes of OL

As shown in Figure A.12 and Table A.10, OL includes 6 nodes, the last three of which are shared with LUM and ExtLat. Similar to node 4 of IntMed, node 3 of OL is constrained with II Middle hplane via an inequality constraint. In addition, it is constrained to move inside the plane II Middle plane0 via an equality constraint. This setting allows the OL to slide on the surface of the middle bone, but only in the dorsal and side regions. Unlike the other tendons, the OL is not connected to a muscle; it influences the hand movement passively, as discussed in Section 3.1.
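The arrangements above can be summarized schematically in code: a node couples a (possibly shared) Lagrangian point with its own per-tendon Eulerian coordinate, and a point-plane constraint is either an equality (the node must stay in the plane) or an inequality (triggered only on contact). The sketch below is purely illustrative; all class and function names are hypothetical and do not correspond to our implementation:

```python
import numpy as np

class Point:
    """A Lagrangian point; sharing the same instance across tendons
    makes the corresponding nodes move together."""
    def __init__(self, x):
        self.x = np.asarray(x, dtype=float)

class Node:
    def __init__(self, point, s):
        self.point = point  # Lagrangian position, possibly shared
        self.s = s          # per-tendon Eulerian (material) coordinate

def point_plane_violation(p, plane_point, plane_normal, equality=True):
    """Signed distance of p to a plane. An equality constraint drives this
    to zero; an inequality constraint is active only on penetration."""
    d = float(np.dot(p - plane_point, plane_normal))
    return d if equality else min(d, 0.0)

# Two tendons share a point but keep separate Eulerian coordinates.
shared = Point([0.0, 0.0, 0.0])
node_a, node_b = Node(shared, 0.2), Node(shared, 0.7)
shared.x[:] = [1.0, 0.0, 0.0]   # moving the point moves both nodes
assert node_a.point is node_b.point

# Inequality point-plane constraint, triggered only below the plane.
n = np.array([0.0, 1.0, 0.0])   # plane normal (dorsal direction)
o = np.zeros(3)                 # a point on the plane
print(point_plane_violation(np.array([0.0, 0.5, 0.0]), o, n, equality=False))   # 0.0 (inactive)
print(point_plane_violation(np.array([0.0, -0.5, 0.0]), o, n, equality=False))  # -0.5 (active)
```

With this split, a shared node contributes a single set of Lagrangian DoF to the system while each tendon integrates its own material over the shared segments, matching the sharing behavior described for EDC and ExtLat.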
