UBC Theses and Dissertations
Precision manipulations using a low-dimensional haptic interface. Humberston, Benjamin. 2014.


Precision Manipulations Using a Low-Dimensional Haptic Interface

by

Ben Humberston

B.Sc., Cornell University, 2009

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF

Master of Science

in

THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES
(Computer Science)

The University of British Columbia
(Vancouver)

August 2014

© Ben Humberston, 2014

Abstract

When interacting with physical objects using their own hands, humans display effortless dexterity. It remains a non-intuitive task, however, to specify the motion of a virtual character's hand or of a robotic manipulator. Creating these motions generally requires animation expertise or extensive periods of offline motion capture. This thesis presents a real-time, adaptive animation interface, specifically designed around haptic (i.e., touch) feedback, for creating precision manipulations of virtual objects. Using this interface, an animator controls an abstract grasper trajectory while the full hand pose is automatically shaped by compliant scene interactions and proactive adaptation. Haptic feedback enables intuitive control by mapping interaction forces from the full animated hand back to the reduced animator feedback space, invoking the same sensorimotor control systems utilized in natural precision manipulations. We provide an approach for online, adaptive shaping of the animated manipulator using our interface based on prior interactions, resulting in more functional and appealing motions.

In a user study with nonexpert participants, we tested the effectiveness of haptic feedback and proactive adaptation of grasp shaping. Comparing the quality of motions produced with and without force rendering, haptic feedback was shown to be critical for efficiently communicating contact forces and dynamic events to the user.
The effects of proactive shaping, though inarguably beneficial to visual quality, resulted in mixed behavior for our grasp quality metrics.

Preface

The algorithms, experiments, and analysis described in this thesis are original and unpublished, though portions of this thesis are based on work previously submitted for publication by the author. They were designed and implemented with the assistance of my supervisor, Dr. Dinesh K. Pai. Related work is appropriately cited.

The prototype of our custom encoder gimbal, depicted in Figure 3.2b and employed in the user study of Chapter 5, was designed and created by Cole Shing, a research engineer at the University of British Columbia working with the Sensorimotor Systems Lab, under the supervision of Dr. Dinesh K. Pai.

The user study described in Chapter 5 was conducted in accordance with approval from the University of British Columbia's Behavioural Research Ethics Board, Certificate of Approval Number H13-01107.

Table of Contents

Abstract
Preface
Table of Contents
List of Figures
Glossary
Acknowledgments
1 Introduction
    1.1 Motivation
    1.2 Contributions
2 Background and Related Work
    2.1 Neurophysiology
    2.2 Robotics
        2.2.1 Formal Planning
        2.2.2 Learning
    2.3 Computer Animation
        2.3.1 Motion Capture
        2.3.2 Procedural Motion
    2.4 Haptics
    2.5 Telemanipulation
3 Precision Manipulation Interface
    3.1 Grasp Control with an Indirect Mapping
    3.2 Interaction Overview
    3.3 Workspace Design
        3.3.1 Haptic Fidelity
    3.4 Software Implementation
    3.5 Interactive Grasp Control
        3.5.1 Overview
        3.5.2 Reference Pose Control
        3.5.3 Compliant Fingertip Motion
        3.5.4 Interaction Forces
        3.5.5 Haptic Feedback
        3.5.6 Full Hand Pose
    3.6 Performance
    3.7 Results
4 Grasp Shaping Using Proactive Adaptation
    4.1 Introduction to Proactive Grasp Shaping
    4.2 Our Approach to Adaptive Shaping
    4.3 Grasp Shape Sampling
    4.4 Shaping Interpolation
    4.5 Results
5 User Study to Evaluate Interaction Behavior Using Our Interface
    5.1 Experimental Design
        5.1.1 Remarks on Basic Lifting Task
    5.2 Effect of Haptic Feedback
        5.2.1 Task 1: Light Touch Control
        5.2.2 Task 2: Indirect Interactions With a Virtual Tool
        5.2.3 Discussion
    5.3 Effect of Proactive Adaptation
        5.3.1 Task 3: Grasping with Proactive Adaptation
        5.3.2 Discussion
6 Conclusions
    6.1 Limitations
    6.2 Future Work
Bibliography
A System Design
    A.1 System Outline
B Custom Gimbal Design

List of Figures

Figure 1.1: Interactively creating movements of an animated hand using our interface. Clockwise from top left: the haptic workspace, stacking virtual objects with compliant contact, generalization to control a different hand type, and proactive adaptation of the grasp shape to a wine glass.
Figure 3.1: A snapshot of the visual interface used for recording motions.
Figure 3.2: A typical desktop setup, including our animator interface and stereo display.
Figure 3.3: Top-down diagrammatic view of the shared haptic workspace for multiple Phantom devices.
Figure 3.4: The CyberGrasp, a glove-based solution for hand capture and haptic feedback (image from CyberGlove Systems).
Figure 3.5: The pipeline to control a full hand pose based on user input.
Figure 3.6: Abstract grasper state for a given input animator pose.
Figure 3.7: Example of a reference grasp pose with three fingers, including the cylindrical coordinates which define the pose of finger hi.
Figure 3.8: Final pose for a single finger based on inverse kinematics.
Figure 3.9: Stacking blocks using our haptic interface.
Figure 3.10: Tapping ash from a cigarette using a user-customized grasp shape (little finger out).
Figure 3.11: Manipulations of an object. (a) Normal, unconstrained wrist. (b) Locked wrist (automatically induces fine finger motions).
Figure 3.12: Block stacking and toppling with an alternative robotic tripod manipulator.
Figure 4.1: Grasp shaping for different approaches to the same object in 2D.
Figure 4.2: Problems that arise without proactive grasp shaping.
Figure 4.3: The grasp shaping for a new context c is generated by A using the information from a set S of previously sampled shaping parameters.
Figure 4.4: During scene interactions, some prior shaping parameters θ are used to control the hand's reference pose. However, when taking an adaptation sample, we calculate and store the shaping parameters sθ that match the hand's compliant pose. This effectively captures information about the object surface geometry in the local context sc.
Figure 4.5: Example animation sequences produced using our interface. Left: Manipulating a chess pawn. Right: Twisting a doorknob.
Figure 4.6: Grasping a wine glass without (left) and with (right) proactive shaping adaptation.
Preshaping arises in the latter case by using contextual similarity to retrieve similar grasp shape samples from previous interactions.
Figure 4.7: Lifting two different types of drink glasses in the same interaction by swapping proactive adaptation sample sets.
Figure 5.1: Experimental setup for user study.
Figure 5.2: Task 1: Contact force on fingertip while attempting lightest possible scratch on a surface, with (blue) and without (red) haptic feedback. (a) Trials for a representative participant. (b) Average across participants with first/third quartiles given.
Figure 5.3: Task 2: Magnitude of vertical force on a gripped block while using it to tap on a platform, with (blue) and without (red) haptic feedback. Time and force values are offset to zero at start of tap. (a) Trials for a representative participant. (b) Average across participants with first/third quartiles given.
Figure 5.4: Task 3: Contact quality for each finger during task execution, averaged over all participants. Ideally, we expect contact to be nearly 100% for all fingers and for contact forces to be balanced across non-thumb fingertips. (a) Percent of task time in contact. (b) Total summed forces during task.
Figure A.1: Major components of the Dihaptic system.
Figure A.2: Proper Phantom device arrangement for Dihaptic.

Glossary

DMP: Dynamic Movement Primitive
GPR: Gaussian Process Regression
GUI: Graphical User Interface
PC: Principal Component
PCA: Principal Component Analysis
CIO: Contact-Invariant Optimization

Acknowledgments

I would like to express my deepest thanks and appreciation to my supervisor, Dr. Dinesh K. Pai. Hailing from a background primarily in computer graphics, I highly appreciated his guidance and direction as I was introduced to many difficult and novel concepts in sensorimotor computation.
Under his advisement, I have gained an appreciation for the broad future possibilities when computer scientists learn to embrace and explore biological systems.

I also wish to thank all the members of the Sensorimotor Systems Lab for their insights and sense of community during my time at the University of British Columbia. Additionally, I am grateful for the feedback and technical support from Cole Shing, a welcome member of the lab's occasional lunch group.

I am indebted to Dr. Timothy Edmunds for his support during my initial introduction to the world of academic research, as well as for allowing the use of his custom tabletop hardware setup for our user study. Many thanks are due for letting me build off the sweat of his brow!

Finally, I am very thankful to my parents and family for their support throughout my life and patient forbearance with long phone calls home as I undertook the work for my degree.

Chapter 1: Introduction

1.1 Motivation

Every day, most people easily perform hundreds of interactions requiring that they grasp and manipulate objects in the world using their hands. Replicating these manipulations for animation is remarkably challenging, both due to the large number of degrees of freedom of the hand and the difficulty of realistically complying with objects in the scene.

Specifying complete hand motion is by itself a complex task: the human hand is commonly represented as an articulated structure with over 25 degrees of freedom whose motion must be expertly coordinated. More importantly, when a character's fingers contact another object, new physical factors come into play that influence the animation, in comparison to free-space motions such as waving or pointing. These factors include contact constraints, joint compliance, and the inertial properties of lifted objects.

Current solutions for creating such interactions use three distinct approaches (see Chapter 2 for a review).
The first and most common approach places the creative burden entirely on an expert animator, who uses traditional modeling software to create complete hand motions for a required sequence. This gives maximum flexibility in the motion produced, but is slow and labor-intensive.

[Figure 1.1: Interactively creating movements of an animated hand using our interface. Clockwise from top left: the haptic workspace, stacking virtual objects with compliant contact, generalization to control a different hand type, and proactive adaptation of the grasp shape to a wine glass.]

The second approach, motion capture, simplifies production but is limited to motions that can be physically performed by an actor. It is also labor-intensive, in a different way, as the captured motions generally must be processed to remove noise, fill in missing data due to marker dropout, and otherwise fit the performed motion to the desired virtual trajectory.

The third and most recent approach is to synthesize complete hand motions in a procedural manner. This approach can simulate the physics of interaction well, but does not exploit knowledge of human manipulation skills or animator expertise.

Our goal is to develop a new, intuitive animation interface that utilizes the talents of animators while exploiting knowledge of both the physics and the physiology of human manipulation skills to simplify their task.

A fundamental shortcoming of previous animation interfaces is that the user focuses purely on kinematic quantities, such as the motions of objects or fingers, and does not receive any feedback on fingertip forces. Such forces are essential for skilled manipulation, as they are rapidly processed by the human sensorimotor system to control hand movement [27]. For example, consider playing the game of Jenga or moving a chess piece without the sensation of touch. Using vision alone, these actions may be possible, but they will be clumsily executed.
We address this shortcoming by using haptic interfaces to display forces to the animator's hand. Haptics refers to the sense of touch. Animators can exploit haptic information intuitively, without conscious thought.

A second major shortcoming is that previous approaches require animators to control the large number of degrees of freedom of the hand directly. While introducing haptic feedback to an animation workflow is advantageous, current haptic systems are a far cry from the holodeck of science fiction. It is both difficult and expensive to provide high-fidelity force feedback to the whole hand.

We address these shortcomings by exploiting a specific type of dimensionality reduction observed in human grasping and manipulation. Many multifingered human grasps can be functionally abstracted as grasping with two virtual fingers [5, 23]. It has been hypothesized that these virtual fingers encode the high-level neural control of the hand, which is then elaborated at more peripheral levels of the nervous system to generate detailed hand movements. For our purposes, the two virtual fingers provide a natural, but abstract, grasper by which the animator can interact with objects in the scene with force feedback. It is possible to provide high-quality force feedback to two fingers using existing commercial haptic interfaces, such as the Phantom devices (SensAble/Geomagic) we use in our system. An additional benefit of adding this level of abstraction is ease of generalization to different types of hands (ranging from human hands to three-fingered claws). Note that the animator still sees the full hand interacting with the 3D scene, in real time, synchronized with the forces on the fingers. With this system, animators can control the high-degree-of-freedom hand intuitively, without much training.

One important question remains: how do we map low-dimensional user interaction to high-dimensional 3D interaction with a full multifingered hand in a natural way?
Briefly, we achieve natural-looking movements in two ways: starting with a nominal mapping, fingertip positions are modified both reactively, based on contact with scene objects as described in Chapter 3, and proactively, based on prior interaction knowledge (discussed in Chapter 4). This interaction knowledge is acquired with a simple grasp adaptation algorithm using a small number of training grasps performed by an animator. After training, this knowledge can be reused for different interactions with the object and by different animators. It also produces realistic preshaping of the hand when it approaches an object, without expensive geometric planning.

1.2 Contributions

The contributions of this thesis are as follows:

1. An interface for interactive animation of hands manipulating 3D virtual objects. It provides force feedback signals that are very important for intuitive manipulation of objects with the hand.

2. A method for bidirectional mapping of motions and forces between the low-dimensional physical user interface and high-dimensional animated hands.

3. A process for automatically adapting the pose of an animated hand based on prior sampled knowledge and the current interaction context, which simplifies the creation of new, rich interactions using our interface.

4. Quantitative results from a user study showing the importance of haptic feedback for creating precision manipulations of virtual objects.

The rest of this thesis is organized as follows. A sampling of relevant work and background material in hand animation is given in Chapter 2. We describe the design of our initial interface for creating precision manipulations in Chapter 3. The notion of proactive adaptation of the grasp shape is introduced in Chapter 4, along with details of how we extend our interface to permit this form of adaptation. A user study was undertaken in order to test its effectiveness for non-expert users; we give details on the experimental design and results in Chapter 5.
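As a toy illustration of the nominal two-virtual-finger mapping idea from Section 1.1, the sketch below expands two animator-controlled points into a full set of fingertip targets and reduces per-finger contact forces back onto the two haptic devices. All names, offsets, and the blending scheme here are hypothetical placeholders, not the implementation described in Chapter 3.

```python
import numpy as np

# Hypothetical nominal fingertip offsets (metres): the thumb tracks
# virtual finger 1, and the remaining fingers are spread out from
# virtual finger 2. These values are illustrative only.
NOMINAL_OFFSETS = {
    "thumb":  np.array([0.0,   0.0,  0.0]),
    "index":  np.array([0.0,   0.0,  0.0]),
    "middle": np.array([0.015, 0.0, -0.01]),
    "ring":   np.array([0.030, 0.0, -0.02]),
    "little": np.array([0.045, 0.0, -0.03]),
}

def expand_pose(vf1, vf2):
    """Map the two animator-controlled virtual finger positions to a
    full set of fingertip targets (the low-to-high direction)."""
    vf1, vf2 = np.asarray(vf1, float), np.asarray(vf2, float)
    targets = {"thumb": vf1 + NOMINAL_OFFSETS["thumb"]}
    for finger in ("index", "middle", "ring", "little"):
        targets[finger] = vf2 + NOMINAL_OFFSETS[finger]
    return targets

def reduce_forces(contact_forces):
    """Map per-finger contact forces back onto the two devices (the
    high-to-low direction): the thumb drives device 1 and the opposing
    fingers are summed onto device 2."""
    zero = np.zeros(3)
    f1 = np.asarray(contact_forces.get("thumb", zero), float)
    f2 = sum((np.asarray(contact_forces.get(f, zero), float)
              for f in ("index", "middle", "ring", "little")),
             start=np.zeros(3))
    return f1, f2
```

In the actual interface, these targets are only the starting point; they are further adjusted by compliant contact (Chapter 3) and proactive shaping (Chapter 4).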
We close this work with a summary of the interface and a discussion of some of the limitations and possible future work in Chapter 6.

Chapter 2: Background and Related Work

Controlling the motion of an articulated manipulator is a topic of interest in a variety of applications, both purely visual (movies, games, virtual reality) and real-world (as in robotics, motor learning, and neurophysiology). As such, we survey a number of diverse background domains which inform the design of our interface.

2.1 Neurophysiology

Although we do not explicitly model biological control mechanisms or constraints, our interface draws several of its design principles from works in the neurophysiology literature which provide insights on how motion control is implemented in nature. For example, it appears likely that control of hand motions is divided between the central nervous system and peripheral planning and reflexes. Our interface engages both aspects through visual and haptic feedback, and low-level reactiveness is automated by the animated hand.

A possible misconception of newcomers to motor control is believing that the logic for controlling movements is consciously perceived and concentrated exclusively in the brain, with the peripheral nervous system acting only as a network that forwards sensory signals and distributes motor commands from the brain to the muscles. Among other factors, the communication latency between the body and brain makes such a simplistic model implausible, especially for rapid feedback control. Far more plausible is a multilayered model of perception in which various sensorimotor systems operate on a variety of timescales. Landmark early work ([64], [25]) found evidence for fast-reacting coordination of the horizontal grip forces and vertical load forces during grasp-and-lift scenarios that operated without subjects' conscious perception.
At short time scales, on the order of 60 milliseconds, the grip coordination was reactive to the surface texture of the object and small slips in the grasp, so that a constant, stable grip safety margin was preserved during the interaction. Notably, when the cutaneous feedback from the fingertips was removed by anaesthesia, the general coordination remained, but grip forces became unreactive to surface properties. This suggests that cutaneous haptic feedback would be a desirable feature in a future version of our interface.

Prior experiments measured the flexion of the metacarpal-phalangeal and proximal interphalangeal joints of the fingers as subjects grasped objects with varying levels of concavity/convexity [51]. They demonstrate that the hand only gradually shapes itself as it approaches an object to be grasped. Additionally, the experiments provide evidence that this preshaping lives in a continuum of postures, depending on the specific contours of the object, rather than being selected from a small discrete set applicable to all objects. For the purposes of our controller, this motivates interactive learning and adaptation of preshape postures for different objects and tasks rather than defining a static set of preshapes.

Regarding grasp shaping for tool use, applying Principal Component Analysis (PCA) to a body of hand recordings showed that, when subjects were asked to mime grasp shapes for familiar objects, 80% of the posture variance could be described using only two Principal Components (PCs) [52]. However, an information transmission analysis also found that the next several components actually contributed significantly to the gain in information about the object being grasped. This suggests that neural control of grasping may be segregated into coarse (PCs 1 and 2) and fine-grained (PCs 3 to 5 or 6) aspects.
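The kind of variance analysis behind this finding can be illustrated on synthetic data. The sketch below (not the cited study's code) builds joint-angle vectors from two dominant "synergies" plus noise and computes the fraction of posture variance explained by each principal component via the covariance eigendecomposition; all dimensions and noise levels are invented for the example.

```python
import numpy as np

# Synthetic "grasp postures": each row is a joint-angle vector for one
# recorded grasp. Low-rank structure (two synergies) is built in so
# the example is deterministic; real studies use recorded hand data.
rng = np.random.default_rng(0)
n_grasps, n_joints = 200, 15
basis = rng.standard_normal((2, n_joints))          # two dominant synergies
weights = rng.standard_normal((n_grasps, 2)) * [3.0, 2.0]
postures = weights @ basis + 0.1 * rng.standard_normal((n_grasps, n_joints))

# PCA via eigendecomposition of the sample covariance matrix.
centered = postures - postures.mean(axis=0)
cov = centered.T @ centered / (n_grasps - 1)
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]    # descending order
explained = eigvals / eigvals.sum()

# With two built-in synergies and small noise, the first two principal
# components capture the bulk of the posture variance.
print(f"PC1+PC2 explain {100 * explained[:2].sum():.1f}% of variance")
```

On real grasp recordings the picture is less clean, which is exactly the cited study's point: the tail components carry little variance but meaningful object information.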
After allowing a user to manually specify the grasp structure at a coarse level, we similarly employ adaptive modulation of shaping parameters to automatically elaborate the finer details of the grasp for a particular object.

Biological control is often viewed through the lens of optimal control theory, but recent works make the case for an alternative paradigm of "good-enough" control ([14], [34]). Good-enough control is based on contextual recall of motor programs from past experiences and active exploration of a variety of solutions. In this view, the brain operates not so much as a monolithic, top-down optimizer of control programs as a distributed, retrieval-based system (of course, mixtures of these two extremes are also plausible). Our proactive adaptation of the hand's grasp shape (Chapter 4) is a simple example of the latter approach, using contextual signals to look up grasp shapes that have proven useful in prior interactions. It relies on retrieval over past experiences to build novel motions rather than attempting to compute optimal motions based on some predefined quality metric.

2.2 Robotics

There is an extended history of research on grasp planning, control, and execution in the robotics literature. We do not attempt a complete catalogue here, but highlight selected works, which we separate into two general categories: rigorous formal planning and more flexible learning approaches.

In many cases, formal planning methods either derive optimal solutions to grasping under geometrical constraints or define an explicit quality metric which they seek to optimize. We wish to target complex environments and place a greater emphasis on aesthetic quality than on functional optimality for our tool.
We thuspursue a direction more aligned to the learning approaches.2.2.1 Formal PlanningIn robotic grasping applications, formal planning produces grasps that are guar-anteed to meet particular properties or to maximize some notion of “optimality”.For example, it is possible to define optimality metrics in terms of required graspforces [20]. Formal planning typically require precise and complete models of thegrasper, objects, and dynamics of the environment in which grasping will occur.Using explicit optimality criteria formalizes the goals of a grasp task in a way con-ducive to rigorous analysis.A typical goal is to establish a grasp which completely constrains the motionof an object due to arbitrary external forces. This can be formulated either asa form closure or force closure. Form closure is a purely geometrical problem7that assumes frictionless, perfectly rigid contacts [7]. While no physical systemexactly embodies such conditions, assuming zero friction is a conservative choiceand surface rigidity is a reasonable approximation for many industrial applications.This makes the enumeration of possible form closure grasps a useful exercise for alimited set of applications [10].For a more general set of scenarios, force closure analyses allow for frictionalforces at the fingertips and thus may require fewer contacts. The complexity ofthese analyses depends on the model of the robot’s compliance, frictional contacts,and other properties ([38], [48]). It is also possible to further relax the constraintrequirements for the grasp, such as requiring only “local force closure” [30] whichcan provably resist only a limited region of external wrenches.2.2.2 LearningLearning approaches for robotic graphing embody a data-driven philosophy wherethe robot’s model of the world is incomplete or nonexistent. 
For instance, using the Dynamic Movement Primitives (DMPs) framework [54], a robotic manipulator may adapt its grasping trajectory online in response to deviations from predicted sensory traces [46].

Other controllers use human demonstrations to learn a mapping between object features and the hand pose (position and orientation) suitable for grasping the object [12]. Grasping approaches may be generalized to novel objects based on the similarity of point cloud representations of previously encountered object parts ([15], [16]). This approach is particularly useful for objects which require different grasp strategies for different parts of an object's surface. In order to leverage human intuition about grasping, it is also possible to iteratively adjust learned grasp behavior based on intervention from a human user [53].

2.3 Computer Animation

Given the frequency of grasp-driven manipulations in everyday life, it is often desirable to reproduce these motions for film or interactive applications. The complexity of the human hand and the variety of possible grasp types (observe, for example, the range found in formal grasp taxonomies [11]) make this a challenging task that is often best served by having an experienced animator manually specify the motion using a rigged character. Since this is a time-consuming process that does not generalize to new motions easily, two chief alternative techniques have arisen: motion capture and procedural motion.

2.3.1 Motion Capture

Motion capture uses digital sensors to record relevant aspects of the performance of an actor (such as joint angles or limb positions). It is frequently used for full-body capture scenarios, but capturing an actor's hand motions has proven tricky due to their intricacy and the visual occlusions that occur in many sequences. Generic marker-based motion capture technologies require significant manual cleanup to compensate for marker dropout or incorrect identification.
For example, whengrasping the handle of a mug or closing the hand into a fist, markers on the pha-langes may be obstructed from the view of some or all externally fixed trackingcameras, causing marker dropout that must be filled in during postprocessing. In-strumented gloves like the CyberGlove (CyberGlove Systems) are immune to oc-clusion, but captured poses are noisy and coarsely quantized. Finding specializedmethods for hand motion capture has thus been an active research area.Image-based lookup into a database of hand poses may be used to infer posesin real-time from a video stream ([61], [62]). Another method, using marker-baseddata augmented by an RGB/D video stream from a Microsoft Kinect, reconstructsthe hand pose offline by minimizing the difference between renderings of the re-constructed pose and the observed image [68]. It is also possible to progressivelytrack kinematic hand poses from a single-camera RGB stream [13], but this ap-proach is less effective in contact-heavy sequences. A recent work framed theproblem of capturing high-contact sequences from a multi-camera video stream asan offline optimization over the state of a physics-based motion controller, generat-ing impressive results for sequences with significant object contact and interaction[63].Any capture method must make a tradeoff between reconstruction accuracy,interaction stability, and the amount of offline processing required. Since our inter-face is targeted toward iterative creation of animations, it is designed to maintain9a real-time update rate and stable haptic interactions while adapting the shape ofthe animated hand. 
In contrast with most motion capture techniques for the hand,we use low dimensional input (the pose of the fingertip end effectors of the hapticdevices) and thus do not attempt to accurately capture the full pose of the user’shand.2.3.2 Procedural MotionParallel to motion capture approaches, other works have sought to synthesize ma-nipulation sequences for the hand either in toto or in concert with partial motiondata. Building on a layered optimization technique [33], one approach uses a data-driven grasp selection process to construct grasps for novel objects outside theiroriginal training set [67].It is not necessary for hand motions to be generated in a purely proceduralmanner; especially when a natural appearance is desired, generalizing or augment-ing motion capture data is a viable option. Using data-driven physical controllersallows retargeting captured hand motions to different objects, though the motionshew closely to the captured trajectory [47]. Recorded movements may be appliedto new objects by simultaneously estimating joint compliance with the motion andadjusting these values during a retargeting phase [31]. In order to complement full-body motion data, missing finger motion may be generated given the trajectory ofthe wrist and scene objects by searching over physically allowed finger-object con-tacts and motions [66].A successful recent approach [39] to producing goal-oriented manipulationsmakes use of the Contact-Invariant Optimization (CIO) framework, a form of space-time optimization requiring only a high level specification of the motion to be per-formed, such as the final position of the hand and object to be grasped. While thedexterity of the motions using CIO are impressive ([40], [41]), they remain some-what oversmoothed and implausible because physical dynamics are enforced onlyas soft constraints. 
An alternative method produces task-driven hand motions for a restricted domain of "finger gaiting", using a physically-based controller in order to retain plausibility [1].

Many hand motions are specific to a particular activity, allowing for domain-specialized controllers. For example, one controller drives the motion of a hand playing a guitar using a k-Nearest Neighbors search keyed by a partial hand pose and a number of biologically-motivated heuristics on probable motions [17].

Unlike in most robotics applications, it is important in the context of animation to replicate specific qualities of real human hands for a natural appearance. Investigations into the effect of using soft contact models for physical animation controllers find that soft contacts improve controller robustness and realism for a pinch grasp scenario [24]. Keyframed motions of the hand and arm may be augmented with subcutaneous motion of tendons and muscles; as part of the animation process, the motion is exported to a dynamic tendon simulation which finds the activation levels to track the input motion [57]. The tendon activations are used in the final animation to deform the surface of the skin realistically.

As with motion capture methods, many of these synthesis approaches are designed for offline reconstruction and are unsuitable for an iterative animation workflow. Additionally, there is limited room for an animator to influence the motion to his or her liking.
For instance, it is unclear how an animator might take effective control over the automatically synthesized finger motions produced using gross wrist motion from motion capture [66].

2.4 Haptics

Since grasp interactions are largely determined by contact- and force-based sensory events, haptic feedback technology is a powerful sensory augmentation when users are executing manipulation tasks in virtual environments, particularly with kinesthetic feedback [45].

Due to mechanical constraints on device design, however, there are necessary tradeoffs between measurement completeness, feedback quality, and the intrusion of device bulk on interactions. For example, the CyberTouch glove (CyberGlove Systems) records more complete hand pose information than the basic end effector state from a Phantom haptic device (SensAble/Geomagic), but it provides only light feedback compared to the Phantom's compelling grounded forces. Furthermore, the increase in feedback dimensionality provided by related glove-based solutions such as the CyberGrasp and CyberForce (CyberGlove Systems) is accompanied by significantly increased device complexity and bulk which limits the range and naturalness of motions. We thus eschew glove-based capture techniques in favor of low-dimensional animator input and feedback. The motivation behind this choice is discussed further in Section 3.3.1.

Although rigid impedance devices, capable of applying feedback in arbitrary directions, are the most common type of haptic interface, alternative designs sacrifice the complete 3D space of feedback forces in favor of other useful properties. For example, by combining a pair of string-based SPIDAR-G devices [29], bimanual manipulations are possible with feedback spread across multiple fingers [43].

A popular class of algorithms in haptic rendering is known as god-object or proxy methods [69].
In addition to the measured pose of a user's fingertip, which may penetrate the surface of virtual objects and is known as the haptic interface point (HIP), these algorithms maintain a proxy fingertip which obeys constraints on object penetration, friction, or other physical factors in the scene. Our tool drives the fingertips in the animated scene using a modified fingertip proxy method. Alternatively, the feedback forces from a complete "proxy hand" with joint articulation and PD-controlled joints may be mapped to a CyberForce glove, though the quality and skill of motions is limited by noise in the glove's encoders and by its feedback quality [8]. Simulations of a deformable hand may also be used for haptic applications [21].

Similar to our system, a recent work builds complete hand poses from a low-dimensional haptic interface [42], but this is accomplished using kinematic postural synergies, which are insufficient for detailed object manipulation.

An experiment with a virtual environment comparable to the one used in our user study (Chapter 5) verified the presence of a human grasp strategy designed to minimize object roll during liftoff, indicating that virtual environments may be useful for investigations into human motor control [6].

The haptic devices we use provide only kinesthetic feedback on the tips of the thumb and index finger. It is also possible, though more challenging, to emulate the tactile sensory information provided by local deformations of the fingertips in contact with an object's surface. One approach uses an end effector device that combines both kinesthetic and tactile feedback [32], finding that the addition of tactile feedback neither helped nor hindered performance in a shape recognition task.

2.5 Telemanipulation

In telemanipulation, a human user interactively operates a remote device or robot, generally with the assistance of a visual or haptic interface.
Although we target our tool exclusively at creating virtual grasp motions, the split between the user and virtual world has clear parallels to telerobotic applications, and we integrate observations from this domain into our tool.

For example, the indirect mapping strategy for a non-anthropomorphic tripod manipulator [60] bears a likeness to our own strategy. Another telemanipulation interface uses a multi-touch display as the user interface [58]. It is noted that even grasping motions executed in three dimensions often require only planar control signals, an observation which we exploit for our mapping of low-dimensional animator input into full hand motions.

Haptic feedback provides a sense of tactile presence for telemanipulation. We considered the use of vibrotactile actuators on the fingertips as an inexpensive, low-bulk method of providing some measure of tactile sensation. Applying proportional vibrotactile feedback to subjects' fingertips increases their ability to perceive relative weight and reduces overgripping during telemanipulation [44]. Additionally, surgeons participating in a study [35] preferred using a telepresence surgical tool augmented with vibrotactile and auditory feedback, though it was found that these modes did not significantly improve task performance. As such, it may be constructive to evaluate the use of vibrotactile feedback in a future version of our interface.

Chapter 3

Precision Manipulation Interface

This chapter introduces the design and implementation of our haptic interface for creating precision manipulations. Using an indirect mapping based on the principle of "virtual" fingers, the interface lets animators interactively act out and record motions within an animated scene.

The phrase "precision manipulations", as used in this thesis, refers to motions requiring a precision grasp, which uses only the fingertips of a manipulator such as a hand or robotic gripper.
These differ from power and intermediate grips (using the classification of [19]) in that the primary goal of the grasp is to handle an object with exactness; the employed grip forces and hand-object contact region are comparatively small.

We motivate our decision to use an indirect control mapping for the interface in Section 3.1 before providing an overview of the real-time interaction pipeline in Section 3.2. Section 3.3 specifies the design of the desktop workspace for our interface and includes a note on the reasoning behind this design. The interface's software architecture is briefly described in Section 3.4. Section 3.5 gives an in-depth breakdown of the core interaction loop, which translates user input into motions of an animated hand and reflects contact forces back to the animator. Our interface's runtime performance is noted in Section 3.6, and we conclude this chapter by presenting several demonstrative motions created with it in Section 3.7.

3.1 Grasp Control with an Indirect Mapping

From our perspective, a shortcoming of many tools for generating hand motions is that they require the user to specify the motion too precisely, often as the literal trajectory of all of the hand's degrees of freedom over the course of the motion. In contrast, it is a recurring goal in computer animation research to generalize the information from lower-dimensional user input to a rich output motion. This is partly motivated by the observation that many biological control systems rely on their inherent physical structure and knowledge gleaned from previous interactions in order to generate motion online with a reduced-dimensionality controller. A sophisticated animation tool might adopt the same strategy to accelerate the production of interaction sequences.

We propose that hand-scene interaction sequences may be divided into two separate components: a relatively low-dimensional interaction trajectory and the detailed, high-dimensional finger poses and forces.
The former corresponds to the animator's intended general motion for a sequence, while the latter is largely a byproduct of local reactive processes. An interface for recording hand-scene interactions need only capture the animator's interaction trajectory and allow the detailed hand poses to be computed automatically based on that trajectory and the structure and compliance of the manipulator. By delaying the stage at which the animator's input is finally converted into a literal sequence of hand poses, it becomes possible to map the input to a variety of manipulators.

Our interface thus departs from the approach of many prior haptic interfaces for grasping, which directly map the user's fingers to on-screen fingers. We insert a layer of abstraction between the user input and animation output that allows translating motion and forces between a low-dimensional user control space and a fully animated hand with fingertip contacts. The animator is tasked with providing the high-level position, orientation, and aperture trajectory of a notional grasper, while our method automatically produces the complete motion of all joints in the animation.

Figure 3.1: A snapshot of the visual interface used for recording motions

3.2 Interaction Overview

The Graphical User Interface (GUI) for our interface is depicted in Figure 3.1. After loading a digital scene, the animator can practice, record, and review a motion sequence while monitoring the interaction on a stereoscopic display, which assists in accurately gauging depth. Before each recording, the animator may define basic shaping parameters of the hand using on-screen controls, such as scale factors on how individual fingertips respond to the input grasp aperture and static translational offsets from the center of a grasp.
Once the animator is satisfied with a recorded motion, the trajectories of the animated hand and other objects in the scene are exported to disk.

3.3 Workspace Design

Users interact with our precision grasping system via a pair of armature-based haptic devices. In our implementation, we use two Phantom Premium 1.0 devices (SensAble/Geomagic), though other choices such as the more affordable Touch X (formerly known as the Phantom Desktop) may also suffice. Each device tracks the position of a single endpoint effector. We optionally use a customized encoder gimbal (Figure 3.2b) on one of the end effectors in order to track the orientation of the fingertip; this provides additional control over the orientation of the animated manipulator.

Figure 3.2: A typical desktop setup, including our animator interface and stereo display. (a) Workspace for our interface. (b) Encoder gimbal for index finger.

Since each device tracks the state of its end effector independent of the other device, after the initial device arrangement it is necessary to calibrate their relative position and orientation to ensure correct translation of physical to virtual motions. Note, though, that since we use the encoded device positions to drive the pose of an animated manipulator indirectly rather than via a direct fingertip-to-fingertip mapping, our interface is somewhat less sensitive than many haptic applications to small miscalibrations. This indirectness also permits us to tweak the control mapping to accommodate most users' hand sizes without having to physically shift the devices.

Figure 3.2 illustrates the workspace setup, which is suitable for a desktop setting and utilizes a pair of Phantom Premium 1.0 devices. The thumb and index finger of one hand insert into thimbles on the endpoint of each device. The positions of these endpoints are tracked with high precision by rotary encoders integrated into the device motors.
It bears noting that, in contrast with vision-based motion capture systems, there is no risk of losing data to occlusions or incorrectly inferred poses during complex manipulations.

The effective physical space for interactions using the devices consists of the overlapping volume of each end effector's range of motion. Each Phantom provides a roughly hemispherical (though somewhat flattened) work volume. Using an overhead view, Figure 3.3a highlights the overlap in workspace (yellow outline) between the two Phantom devices that we employ. Note that this restricted device workspace presents a practical reason to use only a pair of Phantom devices rather than three or more in order to provide feedback on extra fingertips. Consider the workspace for an arrangement of three devices, shown in Figure 3.3b; adding a third device would cut the shared work volume nearly in half, a tradeoff that we do not believe is justified by adding feedback to a third fingertip.

Figure 3.3: Top-down diagrammatic view of shared haptic workspace for multiple Phantom devices. (a) Two device setup. (b) Three device setup (not used).

3.3.1 Haptic Fidelity

The pair of Phantom devices has the advantage of providing grounded feedback forces, but only for two fingertips of the hand (in our case, feedback is in a 6D space, with 3 translational feedback degrees of freedom per fingertip). Contrast this with glove-based solutions such as the CyberGrasp (CyberGlove Systems), shown in Figure 3.4.
It uses embedded sensors to encode the hand pose as 22 degrees of freedom and provides feedback forces perpendicular to each finger, yielding a more complete encoding of hand pose and better emulation of contact with virtual objects over the whole hand.

Figure 3.4: The CyberGrasp, a glove-based solution for hand capture and haptic feedback (image from CyberGlove Systems)

However, each of the sensors on the CyberGlove encodes the pose with on the order of 1 degree of resolution, with the standard deviation of encoded poses on the order of 3 degrees between sessions. Especially for joints near the kinematic root of the hand, this may cause erratic and imprecise motion of the animated fingertips. Additionally, the actuators on the glove exoskeleton are bulky and weigh roughly a pound in total, which may inhibit the naturalness of motions.

Clearly, there are tradeoffs between the fidelity and the completeness of pose encoding and feedback forces. We propose that, in the domain of precision grasping, it is sensible to design the interface to maximize the fidelity of feedback in a low-dimensional space rather than attempt to encode and provide feedback for complete hand poses. This is, again, motivated by our belief that grasping is, in essence, a low-dimensional task in which motion details are only elaborated at a peripheral control level. In order to maximize the generality of captured motions and reduce the amount of work per captured interaction for users, then, we decided on the paired Phantom-style devices as a compromise.

An important aspect of perception in precision manipulation tasks which is not addressed by our hardware design is a detailed recreation of contacts between the skin of the fingertips and virtual objects. In our setup, the user's fingertips are always in contact with a rigid plastic thimble that masks any tactile sensation on the fingertips.
As described in [26], tactile (also known as "cutaneous") afferent signals generated by dense receptors in the contacting skin itself are critical to perceiving the fine-grained properties of contact with objects. Simulating such properties for virtual objects is a significant challenge in its own right, though. To reproduce the tactile modality, it may be possible to augment our design with a modified thimble, such as that presented in [32]. For other examples of recent efforts at producing cutaneous haptic signals, see [65], [37], and [45]. See also [28] for direct evidence of how losing tactile information may reduce the quality of interactions and perception.

3.4 Software Implementation

Our software implementation is built on top of a customized version of the CHAI 3D haptic environment library [18] and uses the Open Dynamics Engine [56] to simulate rigid body dynamics in the animated scene. Digital objects are rendered both visually and haptically in the workspace using user-specified mass and frictional properties. After creation and playback review, animation sequences may be exported to disk as BVH skeletal animations for integration into a complete animated sequence.

For details on the software implementation of the interface, we refer the reader to Appendix A.

3.5 Interactive Grasp Control

Our interface maps animator control input into full hand motions and reflects forces on the animated hand back to the animator at interactive rates. We describe in detail each stage of the pipeline that drives our interface.

3.5.1 Overview

Figure 3.5 illustrates the pipeline which constructs the full animated hand pose at each interactive control update. User input $u$ consists of the position and, optionally, the orientation of each device endpoint (we use the custom encoder gimbal shown in Figure 3.2b for this purpose).
The gimbal was fabricated using rapid prototyping; for ease of adoption, we include the design in Appendix B.

Figure 3.5: The pipeline to control a full hand pose based on user input.

The user input controls the state of an abstract grasper. Think of this grasper as the most basic representation of a grasping motion; it includes a grasper frame, specified by the transformation matrix ${}^w_g E$, as well as a grasp aperture $r$ (Section 3.5.2). In order to interact with the scene, this grasper must first be instantiated as a particular hand morphology.

To accomplish this, several reference fingertips $h$ are arrayed in the grasper frame based on the current aperture and grasp shaping parameters (Section 3.5.2). Each reference fingertip $h_i$ functions as the pose which an animated fingertip assumes in the absence of contact with the environment. However, while gripping an object, this reference pose could penetrate its surface and exhibit otherwise nonphysical behavior. As with human hands, our animated hand has some compliance that allows it to conform to the environment. Thus, the final positions of the animated fingertips are determined using a set of compliant fingertips $\tilde{h}$ which track the reference pose while still respecting non-penetration, friction, and other constraints in a scene (Section 3.5.3).
Finally, the full hand pose $q$ (i.e., the set of joint angles for the hand) is found via inverse kinematics at the end of each update and displayed to the animator (Section 3.5.6).

Since it is our belief that manipulations using the hand are as much a haptic experience as they are visual, we use haptic feedback to help communicate important information about the state of the interaction back to the animator (Section 3.5.5). When the compliant fingertips of the animated hand contact other objects, rapidly updated feedback forces $f_u$ are applied to the animator's fingertips which simulate the contact with a solid surface, including sensations of the object mass, friction, surface shape, and other physical aspects. We currently use a soft fingerpad approximation for determining explicit fingertip contact forces at haptic rates. Since the number of animated fingertips is greater than the two fingers used by the animator, these forces are reflected back to the animator in a dimension-reducing transform that is mediated by the current grasp shaping parameters.

Many simple motions may be created using only static shaping parameters and reactive compliance as described above. However, we can produce more realistic, functional manipulations for a particular object by proactively adapting the grasp shape during the reference fingertip placement stage. An augmentation of the interaction pipeline to accomplish this proactive shaping is given in Chapter 4; we use only static shaping parameters in the current chapter.

3.5.2 Reference Pose Control

Since the space of our user input and feedback is of reduced dimension compared to the configuration space of the full animated hand, we use a modified version of the haptic proxy method that translates between these different spaces. Rather than associate each device endpoint reading with a single haptic interface point, we employ an abstracted kinematic mapping $h(u, \theta(c))$ which transforms user input $u$ into a set of reference fingertips $h$ of the animated hand.
The animated hand may have any number of fingers $n$, but it is generally greater than two (e.g., $n = 5$ fingers are used for anthropomorphic hands). $\theta(c)$ are context-dependent parameters which affect the intrinsic shape of the generated reference pose. These parameters may vary based on the information in some interaction context $c$; for clarity we will refer to them without their context argument as $\theta$. In the simplest case, these parameters are statically defined to produce a basic circular grasping shape $\theta_d$.

To make this mapping concrete, we define user input $u$ in our setup to be the position and orientation of the user's thumb ($a$) and index finger ($b$), all provided in the world frame of the animated scene $w$. We use leading superscripts and subscripts to label coordinate frames. The transformation from frame $a$ to frame $w$ is denoted by an affine transform ${}^w_a E$. Thus:

$$ u = \begin{bmatrix} {}^w_a E \\ {}^w_b E \end{bmatrix}, \quad \text{where} \quad {}^w_a E = \begin{bmatrix} {}^w_a R & {}^w_a p \\ 0 & 1 \end{bmatrix} \ \text{and} \ {}^w_b E = \begin{bmatrix} {}^w_b R & {}^w_b p \\ 0 & 1 \end{bmatrix} \tag{3.1} $$

It should be noted that, depending on the end effector hardware used, the user's fingertip orientations may or may not be encoded. In the latter case, it is assumed that the rotational component of these transforms is identity.

When controlling an animated hand with $n$ fingers, the reference fingertip pose $h$ is a concatenation of $n$ affine transforms which determine the position and orientation of each animated fingertip in the world frame:

$$ h(u, \theta) = \begin{bmatrix} h_1 \\ h_2 \\ \vdots \\ h_n \end{bmatrix} = \begin{bmatrix} {}^w_1 E \\ {}^w_2 E \\ \vdots \\ {}^w_n E \end{bmatrix} \tag{3.2} $$

Here, ${}^w_i E$ is the $4 \times 4$ matrix specifying the homogeneous coordinates of the $i$th fingertip with respect to the animated world frame.

The specific form of the mapping in Equation 3.2 is a critical choice. Though we require some abstraction of the animator's input in order to amplify the control dimensionality, we should still allow him or her to control the animated hand in as intuitive a fashion as possible.
Based on an exploration of different mapping strategies, we found that control was highly compelling when input $u$ is first translated into the state of an abstract grasper, consisting of a scalar grasp aperture $r$ and a rigid pose ${}^w_g E$, where $g$ is the reference frame of the grasper (Figure 3.6). The aperture value is proportional to the distance between the animator's fingertips:

$$ r(u) = \left\| {}^w_a p - {}^w_b p \right\|_2 \tag{3.3} $$

${}^w_g E$ positions the grasper at the midpoint between the thumb and index finger inputs, scaled for convenience by some $s$, if desired. We use $s = 2.5$, which exaggerates the translation of the virtual hand relative to the animator's real hand as a way of providing a larger effective workspace. The grasper orientation ${}^w_g R$ is defined using the line between the animator's control fingertips and the encoded pointing direction of the index fingertip, ${}^w_b e_1$. The grasper frame mapping is thus:

$$ {}^w_g E(u) = \begin{bmatrix} {}^w_g R & {}^w_g p \\ 0 & 1 \end{bmatrix} \tag{3.4} $$

$$ {}^w_g p = \frac{s}{2}\left( {}^w_a p + {}^w_b p \right), \qquad {}^w_g R = \begin{bmatrix} {}^g e_1 & {}^g e_2 & {}^g e_3 \end{bmatrix} $$

$$ {}^g e_1 = \frac{{}^w_a p - {}^w_b p}{\left\| {}^w_a p - {}^w_b p \right\|}, \qquad {}^g e_2 = \frac{{}^g e_1 \times {}^w_b e_1}{\left\| {}^g e_1 \times {}^w_b e_1 \right\|}, \qquad {}^g e_3 = {}^g e_1 \times {}^g e_2 $$

Figure 3.6: Abstract grasper state for given input animator pose

Once this extrinsic grasper frame $g$ is defined, the reference poses of individual animated fingertips within the frame are set based on the grasp aperture and grasp shape parameters $\theta$. We use an aperture scale factor $\sigma_i$ and cylindrical coordinates of the fingertip for the shaping parameters, which enables arbitrary placement of fingertips within the grasper frame, as shown in Figure 3.7:

$$ {}^g_i E(r, \theta_i) = \begin{bmatrix} {}^g_i R & \begin{matrix} (\sigma_i r + \rho_i)\cos(\phi_i) \\ (\sigma_i r + \rho_i)\sin(\phi_i) \\ z_i \end{matrix} \\ 0 \qquad & 1 \end{bmatrix} \tag{3.5} $$

$$ \theta_i = \begin{bmatrix} \sigma_i \\ \rho_i \\ \phi_i \\ z_i \end{bmatrix}, \qquad \theta = \begin{bmatrix} \theta_1 \\ \theta_2 \\ \vdots \\ \theta_n \end{bmatrix} \tag{3.6} $$

Figure 3.7: Example of a reference grasp pose with three fingers, including the cylindrical coordinates which define the pose of finger $h_i$.

Offsetting the radial component by the scaled input aperture $\sigma_i r$ allows the animator to open or close all of the fingers about the center of the grasp.
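As an illustrative sketch of Equations 3.3 and 3.4 (a hypothetical helper in plain Python, not code from our implementation), the abstract grasper state can be computed from the two fingertip inputs as follows; the default scale s = 2.5 matches the value used above:

```python
import math

def sub(a, b): return [x - y for x, y in zip(a, b)]
def norm(v): return math.sqrt(sum(x * x for x in v))
def unit(v): n = norm(v); return [x / n for x in v]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def grasper_state(p_a, p_b, e_index, s=2.5):
    """Abstract grasper from thumb position p_a, index position p_b, and the
    encoded index pointing direction e_index (Eqs. 3.3-3.4). Returns the
    aperture r, grasper origin p, and the rotation columns (e1, e2, e3).
    Degenerate if e_index is parallel to the thumb-index line."""
    r = norm(sub(p_a, p_b))                            # Eq. 3.3: aperture
    p = [0.5 * s * (x + y) for x, y in zip(p_a, p_b)]  # scaled midpoint
    e1 = unit(sub(p_a, p_b))                           # thumb-index axis
    e2 = unit(cross(e1, e_index))                      # orthogonal to both
    e3 = cross(e1, e2)                                 # completes the frame
    return r, p, (e1, e2, e3)
```

The orthonormal columns are built exactly as in Equation 3.4, so the resulting frame follows the animator's pinch axis while the index-finger direction resolves the roll about it.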
The rotation ${}^g_i R$ simply orients the fingertip such that the "fingerpad" half points toward the local grasp origin.

Combining Equations 3.4 and 3.5 gives the world-frame reference pose of each fingertip $i$:

$$ {}^w_i E(u, \theta) = {}^w_g E(u)\, {}^g_i E(r, \theta_i) \tag{3.7} $$

Note how the motion is broken into extrinsic grasper motion in the first term and intrinsic shaping in the second. While the animator guides the trajectory $g$ and aperture $r$ of the abstract grasper, individual reference fingertips are automatically placed within $g$ based on the active shaping parameters and the specified aperture. Furthermore, since the animator's control is mediated through the abstract grasper, observe that there is not a one-to-one mapping between the animator's fingers and those of the animated hand. This lets us map the input to different hand morphologies without modifying the core interface.

3.5.3 Compliant Fingertip Motion

The compliance of human hands is an important reason for the robustness of human interactions with physical objects; see [31] and the literature cited therein. We approximate the compliance due to tendons and muscles with an efficient, fingertip-based compliance. Each reference fingertip is paired with a corresponding proxy fingertip within the animated scene to provide compliant coupling between the animator and digital scene (a technique first described in [69]).

The spherical fingertip proxies in our implementation are based on the soft fingerpad approximation found in [2]. Linear and torsional friction constraints are determined using a simple model of the fingerpad-object contact area.
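The cylindrical fingertip placement of Equations 3.5 and 3.7 can be sketched as follows (positions only, with rotations omitted; these are hypothetical helpers, and the grasper rotation is passed as its three column vectors):

```python
import math

def local_fingertip_position(r, theta_i):
    """Position part of Eq. 3.5: place fingertip i in the grasper frame g
    using shaping parameters theta_i = (sigma, rho, phi, z)."""
    sigma, rho, phi, z = theta_i
    radial = sigma * r + rho          # aperture-scaled radial offset
    return [radial * math.cos(phi), radial * math.sin(phi), z]

def world_fingertip_position(p_g, R_cols, r, theta_i):
    """Position part of Eq. 3.7: map the local placement into the world
    frame, given grasper origin p_g and rotation columns R_cols = (e1, e2, e3)."""
    local = local_fingertip_position(r, theta_i)
    return [p_g[k] + sum(local[j] * R_cols[j][k] for j in range(3))
            for k in range(3)]
```

Note that opening the animator's pinch increases r, which moves every fingertip radially outward by its own sigma factor while the rho, phi, and z offsets preserve the intrinsic grasp shape.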
During each haptic update while the fingertip is in contact, the proxy is projected onto the nearest object surface in the direction of the reference fingertip, sliding tangentially if necessary until it lies within a friction cone projected upward from the reference. Proxy rotation about the contact normal is likewise restricted based on torsional friction constraints.

Denoting this proxy algorithm as $P$, the pose $\tilde{h}$ of the compliant fingertips is the concatenation of each fingertip's proxy transform at the current time:

$$ \tilde{h} = \begin{bmatrix} \tilde{h}_1 \\ \tilde{h}_2 \\ \vdots \\ \tilde{h}_n \end{bmatrix}, \quad \text{where} \tag{3.8} $$

$$ \tilde{h}_i \in SE(3) = \begin{cases} P(h_i) & \text{if fingertip } i \text{ is in contact} \\ h_i & \text{otherwise} \end{cases} \tag{3.9} $$

This fingerpad approximation was chosen for its computational speed and its effectiveness for dexterous interactions as seen in previous haptic works. Other models of contact may be substituted; for example, [31] uses a quasi-static LCP to compute the true motion of a finger in contact with the environment, given the measured reference trajectory.

3.5.4 Interaction Forces

Under the given proxy model, the contact force $f_i$ and torque $\tau_i$ experienced by each animated fingertip $i$ are proportional to the difference between the finger's compliant pose $\tilde{h}_i$ and its reference pose $h_i$, with optional velocity damping:

$$ \begin{bmatrix} f_i \\ \tau_i \end{bmatrix} = C(\tilde{h}_i - h_i) + B(\dot{\tilde{h}}_i - \dot{h}_i) \tag{3.10} $$

Many haptic applications set the proportional gain $C$ as high as possible before feedback becomes unstable in order to simulate high surface stiffness values (e.g., 1000 N/m). In contrast, we set our proportional gain $C$ to a relatively modest constant of 200 N/m, which allows the reference fingertips to significantly penetrate an object surface (on the order of a few centimeters) before contact forces become appreciable. This results in highly compliant fingers which are better suited for creating stable multifinger grasps.

After each fingertip's contact forces/torques are calculated, their negation is applied to the objects that each fingertip is contacting, which emulates the grip and manipulation forces from a real hand.
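A toy version of the proxy projection $P$ for a flat horizontal surface illustrates the stick/slip behavior described above. This is a simplified stand-in for the soft-fingerpad proxy of [2], not our implementation: it handles only a plane at an assumed height, with an assumed friction coefficient mu, and ignores torsional friction.

```python
import math

def proxy_update(proxy, ref, plane_z=0.0, mu=0.5):
    """One proxy update against a horizontal plane at z = plane_z: if the
    reference fingertip penetrates the surface, keep the proxy on the plane
    and let it slide toward the reference only until the residual tangential
    offset fits inside the friction cone (radius mu * penetration depth)."""
    if ref[2] >= plane_z:
        return list(ref)                     # free motion: proxy = reference
    depth = plane_z - ref[2]                 # penetration of the reference
    dx, dy = ref[0] - proxy[0], ref[1] - proxy[1]
    dist = math.hypot(dx, dy)                # tangential proxy-reference gap
    limit = mu * depth                       # friction cone radius
    if dist > limit:                         # outside the cone: slip partway
        scale = (dist - limit) / dist
        return [proxy[0] + dx * scale, proxy[1] + dy * scale, plane_z]
    return [proxy[0], proxy[1], plane_z]     # inside the cone: stick
```

Pressing the reference deeper widens the friction cone, so a firmly gripped fingertip sticks while a lightly loaded one slides; the residual proxy-reference displacement is exactly what feeds the spring force of Equation 3.10.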
By timestepping the underlying dynamics simulation, the hand is able to grasp, lift, and otherwise manipulate objects in the animated scene.

3.5.5 Haptic Feedback

Given the final animated fingertip forces $f = \begin{bmatrix} f_1^T & f_2^T & \dots & f_n^T \end{bmatrix}^T$, our goal is to provide the animator with haptic feedback which accurately captures contact events, the inertial properties of objects, and applied grip forces (for conciseness, we omit the torques $\tau_i$ from this discussion, though the same procedure applies). Based on our hardware constraints, this requires mapping the forces across $n$ animated fingers down to only $f_a$ and $f_b$, the forces on the animator's thumb and index finger. A reasonable method for doing so, based on the principle of virtual work, is to calculate the Jacobian of the current reference kinematic mapping $h(u, \theta)$ with respect to the input thumb and index finger positions, ${}^w_a p$ and ${}^w_b p$, and apply its transpose to the animated fingertip forces:

$$ J = \begin{bmatrix} \dfrac{\partial h}{\partial\, {}^w_a p} & \dfrac{\partial h}{\partial\, {}^w_b p} \end{bmatrix} \tag{3.11} $$

$$ f_u = \begin{bmatrix} f_a \\ f_b \end{bmatrix} = J^T f \tag{3.12} $$

Although some nuanced force information is necessarily lost in this reduction, we found that the essential sensory qualities of the interaction are preserved. The animator is able to sense and respond to force events across all fingers of the animated hand in a largely intuitive fashion.
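The virtual-work reduction of Equations 3.11 and 3.12 does not depend on the particular form of $h$. As a sketch (illustrative only; an analytic Jacobian of the actual mapping would normally be preferable), $J$ can even be estimated numerically by finite differences:

```python
def jacobian_transpose_feedback(h, u, f, eps=1e-6):
    """Device-space feedback f_u = J^T f (Eq. 3.12), with J estimated by
    finite differences of a kinematic mapping h. Here u is a flat list of
    input coordinates (thumb and index positions), h(u) returns a flat list
    of the n animated fingertip positions, and f is the matching flat list
    of fingertip forces."""
    h0 = h(u)
    f_u = []
    for j in range(len(u)):
        u_j = list(u)
        u_j[j] += eps
        col = [(a - b) / eps for a, b in zip(h(u_j), h0)]  # column dh/du_j
        f_u.append(sum(c * fi for c, fi in zip(col, f)))   # (J^T f)_j
    return f_u
```

For instance, if two animated fingertips both track the thumb-index midpoint, a unit force on each fingertip transposes to half a unit per fingertip on each of the two input fingers, so contact across many animated fingers is felt as a blended but consistent load on the animator's pinch.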
Without this haptic information, it is difficult to emulate several aspects of an interaction using vision alone, as highlighted by our user study (see Chapter 5).

3.5.6 Full Hand Pose

After determining the final compliant transform $\tilde{h}_i$ of each animated fingertip $i$ for the current timestep, the complete hand pose $q$, consisting of the angles of each joint of an articulated hand, is generated via inverse kinematics.

Typically, a base wrist transform is defined which is offset some distance from the origin of the grasper frame $g$, and each finger is modeled as an articulated chain connected to this wrist and using the fingertip as a target end effector, as shown in Figure 3.8. By constraining the wrist, it is possible to produce animations that switch between gross hand motion and fine finger articulation.

Figure 3.8: Final pose for a single finger based on inverse kinematics

The trajectory of the articulated hand pose $q$ and other objects in the scene comprises the final specification of the motion sequence and is ready for playback, export, and rendering.

3.6 Performance

The time required for our animator control mapping constitutes a minimal portion of each simulation update. Simulations may be run in excess of 2000 Hz for simple scenes on a modern desktop computer, though in practice we limit the update rate to 1500 Hz to avoid some undesired hardware vibrational modes. The complexity of environments is limited primarily by the dynamics simulation and collision detection between the fingertip proxies and the animated scene. Using axis-aligned bounding box or sphere trees to accelerate collisions, it is generally possible to interact stably with meshes consisting of 1000 or more faces, depending on the specific geometry.
In the absence of a more sophisticated haptic collision algorithm such as that in [3], more detailed object geometries may be handled by recording a sequence with a reduced-complexity version, then running an offline retargeting of the finger motion against the original geometry.

3.7 Results

We provide a sampling of motion sequences created using our interface. All motions were recorded and produced in real-time.

Figure 3.9: Stacking blocks using our haptic interface

Figure 3.9 demonstrates a simple sequence where the animator grasps and lifts a small block. As the fingers close around the block, the contact forces are reflected back to the haptic devices. As a result, the animator can intuitively sense the initial contacts with the block as well as the firmness of his or her grip before liftoff. We invite the reader to compare the dexterity of such motions with those created via a non-haptic interface, such as the block manipulation motions in [62]. Additionally, compared to performance capture using a traditional system, recording takes only a fraction of the time and may be conducted at an animator's workstation.

The advantage of haptic force reflection becomes even more apparent when the motion depends on contact of the grasped object with the external environment. Figure 3.10 shows a sequence where the animator delicately picks up a cigarette and taps it on a floating "ashtray". While it is possible to mime this motion based on visual feedback alone, the sensation of contact transmitted by our interface lets the animator act the motion out naturally, producing a sharper, more believable sequence.

Figure 3.10: Tapping ash from a cigarette using a user-customized grasp shape (little finger out)
We quantitatively analyze a similar motion in our user study (Section 5.2.2) and find that the "tap" event without feedback is overly soft and prolonged.

A consequence of the focus on precision fingertip positioning in our control mapping is that the pose of the rest of the animated hand is not uniquely determined. This allows constraints on the motion to influence its overall nature. For example, in Figure 3.11, the animator waves a thin tube in the air using the animated hand. In the top row (3.11a), the motion of the animated wrist is unconstrained, resulting in gross motion of the whole hand to match the animator's input. In the bottom row (3.11b), the wrist is constrained to a fixed position and orientation, so the same type of manipulation is instead effected by fine finger motions.

Figure 3.11: Manipulations of an object. (a) Normal, unconstrained wrist. (b) Locked wrist (automatically induces fine finger motions)

Since we map the animator input to a fingerless, abstract grasper before instantiating it as a particular manipulator, it is possible to support morphologies with different numbers and arrangements of fingers without needing to modify the interface. While we focus mainly on anthropomorphic hands in this thesis, we demonstrate interactions via a robotic tripod hand in Figure 3.12. The only modifications needed to support this tripod were changing the number of animated fingers n to 3, offsetting the static shaping parameters θ, and swapping the visual appearance of the manipulator.
An interesting direction for future work may be to expand on this capability to facilitate production of grasping motions for non-anthropomorphic characters, as has recently been explored for general motions in [9] and [55].

Figure 3.12: Block stacking and toppling with an alternative robotic tripod manipulator

Chapter 4

Grasp Shaping Using Proactive Adaptation

The interface described in Chapter 3 allows an animator to interactively craft basic virtual object manipulation sequences, during which the animated hand adapts to objects reactively. However, the resulting motions lack many of the fine adjustments exhibited by human actors when grasping and manipulating objects. Those motions are characterized by proactive adaptations of the hand shape to the local geometry of an object, to its surface material properties, or even to an actor's intended action. For example, Figure 4.1 shows how the shaping of the fingertips varies greatly based on where they grasp a simple hourglass-like object.

In this chapter, we describe an approach to proactive adaptation of the grasp shaping during interactions, before contact with the object being grasped. This adaptation augments the interface from Chapter 3 while retaining the same basic interface characteristics. The animated hand modulates its shape automatically while the animator guides the interaction trajectory.

We first review the benefits of adaptive grasp shaping in Section 4.1. A generic, high-level description of our approach to adaptive shaping is then introduced in Section 4.2, followed by details on how we obtain grasp shaping samples in Section 4.3 and our method for blending those samples in Section 4.4.
We conclude the chapter in Section 4.5 with a review of the interactions that may be produced using the adaptive version of our interface.

Figure 4.1: Grasp shaping for different approaches to the same object in 2D.

4.1 Introduction to Proactive Grasp Shaping

A particularly notable aspect of human grasping is preshaping of the grasp. Due to finger compliance when enclosing an object, some grasps are stable even without shaping the hand to accommodate it. However, this may lead to awkward imbalances in contact timing and forces across fingers, as illustrated in Figure 4.2. The staggered timing of contact between fingertips (or failure to make contact at all) may cause the grasped object to shift unexpectedly, and the resulting imbalance in contact forces is wasteful and unnecessary from an energy-conserving perspective.

Figure 4.2: Problems that arise without proactive grasp shaping. (a) Poor contact timing. (b) Poor contact force balance.

In real grasping situations, humans use more effective strategies than simple enclosing grasps to avoid these problems. When they execute a grasp, they use vision and sensorimotor memory to shape their hand appropriately, and our interface seeks to recreate that form of proactive adaptation for virtual manipulations. Preshaping [51] modulates the grasp shape to approximately match the object surface during reaching, before object contact, in order to bring about a more natural grip.

We might emulate preshaping by choosing the closest object fit from a set of predefined grasp shapes [50] or by searching over physically plausible finger trajectories for a recorded sequence [66]. A more appealing option, though, is to draw from prior experiences: when encountering a grasping situation that bears contextual similarities to previous grasps, the hand should automatically adjust its behavior to suit the new interaction.
This removes the limitations of using only predefined grasp shapes and sidesteps expensive, biologically implausible computations that solve the grasp shape from scratch at each step of an interaction (see our sampling of related works in robotics in Section 2.2.1 for examples of the latter approach). Moreover, learning from experience makes it possible to capture an individual user's motion style.

4.2 Our Approach to Adaptive Shaping

Using our interface, the animator controls a high-level abstract grasper trajectory while the system influences the grasp shape via shaping parameters θ (Equations 3.5 and 3.6). To facilitate proactive grasp shaping, we define an adaptive shaping function θ = A(c, S) which determines these parameters for a given context. In addition to the active grasp context c, A uses a set of m previously sampled shaping parameters, each paired with a particular grasp context:

    S = { ( ^sc, ^sθ ) | s = 1 … m }    (4.1)

For our purposes, grasp context is defined as the spatial relationship between the hand and the object to be grasped: intuitively, as the hand approaches some part of an object, it should change shape in order to conform to the local object surface geometry. Given a label identifying which object is being grasped and its local reference frame, o, we found that it is often sufficient to encode the grasp context using the current position p of the abstract grasper frame g expressed in that object's local frame:

    c = ^o_g p ∈ ℝ³    (4.2)

This simple context functions well enough for our current tasks, but it is conceivable that a more nuanced context would be needed in sophisticated tasks. For example, a hammer may be held differently depending on whether it is being used to hammer in or pry out a nail, or the hand shape might change to effect different pitches in baseball.
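A minimal sketch of the context computation in Equation 4.2, assuming the object's pose is given as a world-frame position and a 3x3 rotation matrix (function and argument names are illustrative, not from the thesis):

```python
import numpy as np

def grasp_context(grasper_pos_world, obj_pos_world, obj_rot_world):
    """Grasp context c (Equation 4.2): the abstract grasper's position
    expressed in the grasped object's local frame o.

    obj_rot_world is a 3x3 rotation matrix taking object-local vectors
    to world coordinates; its transpose (inverse, for a rotation) maps
    the world-frame offset back into the object frame.
    """
    R = np.asarray(obj_rot_world, dtype=float)
    offset = np.asarray(grasper_pos_world, dtype=float) - np.asarray(obj_pos_world, dtype=float)
    return R.T @ offset
```

Because the context is object-relative, the same sampled grasp shapes apply no matter where the object sits or how it is oriented in the scene.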
The grasp context should encode this difference in goals in a way amenable to retrieval using a numeric distance metric.

To review: given a shaping function A, a set of context-associated shaping samples S for a given object, and a basic interaction context c updated at each control step, our interface adaptively changes the grasp shape during a reaching movement, yielding richer and more functional grasp motions as a result. Figure 4.3 illustrates this process. Note that while the sample set S is defined beforehand, adaptation occurs in real time while the animator acts out the motion.

In the following sections, we first describe how the sample set S is generated (Section 4.3) and then give our choice of the shaping function A (Section 4.4).

4.3 Grasp Shape Sampling

Each sampled interaction s in S should specify a useful grasp shape, encoded by ^sθ, that was employed while context ^sc was active in a previous interaction with the object. These samples will be retrieved and used to preshape the hand during novel interactions with that object.

Figure 4.3: The grasp shaping for a new context c is generated by A using the information from a set S of previously sampled shaping parameters.

A key insight is that, while the hand is grasping an object, these shaping parameters may be inferred by observing the shape the hand actually takes after complying with an object. This shape is given by the set of compliant fingertips h̃ which are constrained to the object surface (see Section 3.5.3).

We calculate temporary shaping parameters ^sθ̆_i = ( ^sρ̆_i, ^sφ̆_i, ^sz̆_i ) for each fingertip which match this compliant shape. To normalize the radial component ^sρ̆_i (which otherwise tends to grow monotonically), the mean value of ^sρ̆_i across all fingertips is subtracted from each fingertip.
This yields a final set of grasp shaping parameters ^sθ for the sample:

    ^sθ_i = ( ^sρ̆_i − (1/n) Σ_{j=1..n} ^sρ̆_j , ^sφ̆_i , ^sz̆_i )    (4.3)

    ^sθ = [ ^sθ_1 ; ^sθ_2 ; … ; ^sθ_n ]

Although determining these "ideal" shaping parameters is a purely kinematic operation when considering a particular hand pose, it may also be interpreted as a form of energy minimization. Define the best hand shape for a given context as that which minimizes the difference in grip forces across all of the fingertips (grip forces are the component of contact force normal to the object surface at each contact point). Given some grasp context (hand position relative to the object), this condition is met by setting ^sθ such that the hand conforms exactly to the local object surface when the input aperture r from the animator is equal to the object's width. The compliant hand shape during the original interaction meets this criterion. As the input aperture contracts, all of the fingertips penetrate the surface at the same even rate, producing balanced grasp forces.

Figure 4.4 illustrates sample acquisition in two dimensions. The sampled shaping parameters ^sθ are paired with the context ^sc that is active at the time the sample is taken and added to the set S.

Figure 4.4: During scene interactions, some prior shaping parameters θ are used to control the hand's reference pose. However, when taking an adaptation sample, we calculate and store the shaping parameters ^sθ that match the hand's compliant pose. This effectively captures information about the object surface geometry in the local context ^sc.
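The normalization in Equation 4.3 can be sketched as follows (illustrative names, not the thesis code; it assumes the per-fingertip compliant parameters have already been extracted as arrays):

```python
import numpy as np

def normalize_shaping(rho, phi, z):
    """Build a shaping sample (Equation 4.3) from the compliant
    fingertip parameters: subtract the mean radial component so the
    sample is invariant to the overall grasp aperture, keeping the
    angular and axial components unchanged.

    rho, phi, z are length-n arrays (one entry per fingertip); the
    result is an n x 3 matrix stacking the per-finger parameters.
    """
    rho = np.asarray(rho, dtype=float)
    return np.column_stack([rho - rho.mean(), phi, z])
```

Subtracting the mean radius is what keeps the stored sample meaningful at any aperture: only each finger's radial offset relative to the others is retained.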
In our current solution, users are able to manually add shaping samples during an interaction session, building a library of samples that describes the relationship between the animated hand and a particular object.

4.4 Shaping Interpolation

The adaptive shaping function A(c, S) must generate a new set of shaping parameters for the current interaction context c using the collection of shaping samples S. Since grasp shapes tend to vary smoothly based on context (position of the grasper relative to an object), we chose an interpolation scheme using Gaussian Process Regression (GPR) as outlined in [49]. GPR provides smooth, non-linear interpolation of a target value based on the distribution of samples in the input space, which, in our case, is the space of grasp contexts.

Note that other interpolation methods are also feasible, such as kNN interpolation. The primary requirements for any chosen method are that it smoothly interpolate between different shaping samples and fall off to zero influence when the grasper is far from any sample. However, as further discussed in Section 6.2, we select GPR because the covariance terms between different samples provide an automated mechanism for varying sample influence. This property, which may be exploited in future work, becomes important when dealing with dozens or hundreds of densely spaced samples rather than just a few sparse samples, as is the current usage pattern.

Our covariance function between contexts uses the common choice of a squared exponential distribution:

    k(c, c′) = σ_f² exp( −‖c − c′‖² / (2l²) )    (4.4)

Here ‖c − c′‖ is the familiar Euclidean norm, though other norms are reasonable. The maximum covariance is equal to σ_f², and the effective width of the kernel function depends on l. These hyperparameters may be set using a maximum likelihood optimization or manually determined for a particular interaction based on the approximate size of object features and the distribution of samples.
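To make the interpolation concrete, the following is a minimal numpy sketch (not the thesis implementation) of GPR-based shaping: the squared-exponential kernel of Equation 4.4, plus a regression step that blends the sampled shaping parameters against a non-zero prior mean so the hand falls back to a default grasp far from all samples. Function names, hyperparameter defaults, and the small noise term added for numerical stability are all illustrative assumptions.

```python
import numpy as np

def sq_exp_kernel(c1, c2, sigma_f=1.0, length=0.05):
    """Squared-exponential covariance between two grasp contexts (Eq. 4.4)."""
    d2 = np.sum((np.asarray(c1, dtype=float) - np.asarray(c2, dtype=float)) ** 2)
    return sigma_f**2 * np.exp(-d2 / (2.0 * length**2))

def adapt_shaping(c, contexts, thetas, theta_default,
                  sigma_f=1.0, length=0.05, noise=1e-8):
    """GPR shaping function A(c, S) in the spirit of Equation 4.6.

    contexts:      list of m sampled grasp contexts (3-vectors).
    thetas:        m x p matrix, one row of shaping parameters per sample.
    theta_default: length-p default grasp, used as the non-zero prior mean
                   so the hand reverts to a functional grasp far from samples.
    """
    theta_default = np.asarray(theta_default, dtype=float)
    m = len(contexts)
    # Sample-to-sample covariance matrix K, regularized for stable inversion.
    K = np.array([[sq_exp_kernel(ci, cj, sigma_f, length) for cj in contexts]
                  for ci in contexts]) + noise * np.eye(m)
    # Covariance of the query context against every sample (K_*).
    k_star = np.array([sq_exp_kernel(c, ci, sigma_f, length) for ci in contexts])
    resid = np.asarray(thetas, dtype=float) - theta_default  # deviations from prior mean
    return theta_default + k_star @ np.linalg.solve(K, resid)
```

Near a sampled context the result reproduces that sample's shaping parameters; far from every sample the kernel terms vanish and the default grasp is returned, which is exactly the fall-off behavior required above.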
We tune these hyperparameters manually for simplicity and directness.

For notational convenience, we collect the shaping parameters from each of our m samples into a single parameter matrix for the sample set:

    Θ_S = [ ^1θ  ^2θ  …  ^mθ ]    (4.5)

Using a zero value for our prior mean on the grasp parameters would not be sensible, as this describes a grasp where all the fingertips are collapsed to the origin and oriented identically. Instead, we use a default, user-defined grasp θ_d as a non-zero prior mean. Thus, our adaptation process is defined as follows, where Θ_d is a matrix whose m columns are each equal to θ_d:

    A(c, S) = θ_d + [ K_* K⁻¹ (Θ_S − Θ_d)ᵀ ]ᵀ    (4.6)

    K_* = [ k(c, ^1c)  k(c, ^2c)  …  k(c, ^mc) ]

    K = [ k(^ic, ^jc) ]  for i, j = 1 … m

We direct the reader to a definitive GPR reference for details on this regression process [49]. The upshot is that the hand produces adapted grasp shapes when it is contextually "nearby" previous shape samples but reverts to a basic (yet functional) grasp in situations where no samples are in range. A key advantage of our approach is that preshaping of the hand when in proximity to the target object arises as a natural consequence; there is no need for a separate preshaping computation.

4.5 Results

We demonstrate several motion sequences made possible by the proactive adaptation augmentation of our interface. As with the results from Chapter 3, all of the sequences were recorded and produced in real time.

For all of these sequences, preshaping was achieved by demonstrating a small set (5 or fewer) of grasp samples in a short pre-recording period. This resulted in motions with more functional grasps than would be possible using only reactive compliance. For example, in Figure 4.5, the index finger shifts to the top of the pawn as the hand approaches the object, which stabilizes the subsequent rapid twisting motion for this sequence.
Proactive adaptation also proved useful for twisting a doorknob, as the fingertips naturally molded to the shape of the knob, giving a firm but balanced grip (with only reactive compliance, an imbalance in contact forces between fingertips induces an unintended torque on the knob).

Figure 4.5: Example animation sequences produced using our interface. Left: Manipulating a chess pawn. Right: Twisting a doorknob.

Figure 4.6 highlights how proactive adaptation of the grasp shape also yields grasps with a more natural appearance. Without proactive shaping, the hand fails to conform to the curved surface of the wine glass, resulting in some fingers dangling away from the surface. By enabling our proactive adaptation process and adding just three canonical grasps to the prior shaping set S, the hand dynamically changes its shape to accommodate the object surface, similar to a human actor.

Figure 4.6: Grasping a wine glass without (left) and with (right) proactive shaping adaptation. Preshaping arises in the latter case by using contextual similarity to retrieve similar grasp shape samples from previous interactions.

As noted above in Section 4.2, it should be possible to use grasp context to key into different grasping strategies depending on high-level goals. We have not yet achieved this sort of high-level contextual switching, but a manual solution to emulate it is simply to swap out the underlying sample set S being used for proactive adaptation during an interaction (e.g., via a hotkey). Assuming the live grasp context (hand position) is sufficiently distant from any prior samples at the moment of the swap, this allows seamless transitions between grasp strategies in a single interaction.
Figure 4.7 shows an example of this, where the animator first lifts a tall, thin champagne glass using a sample set trained on that object (top) before switching to an alternative sample set (middle) trained on a wider red wine glass, producing a firm grasp when reaching for this second object (bottom).

Figure 4.7: Lifting two different types of drink glasses in the same interaction by swapping proactive adaptation sample sets

Chapter 5

User Study to Evaluate Interaction Behavior Using Our Interface

In order to test the efficacy of our interface for controlling virtual interactions, we conducted a study in which nonexpert participants were asked to complete a number of simple tasks in a virtual environment. This chapter describes the design and results of those experiments and discusses their significance.

Ours is by no means the first experimental assessment of task performance in a simulated haptic environment. See, for example, a peg-in-hole task in [59], shared collaboration in [4], or haptic shape recognition in [32]. Our study focused on tasks that we believe are relevant for animating precision interactions. We focused the investigation on two particular questions:

• Does the force-reflecting haptic component of our interface improve the naturalness and precision of motions?

• Does our proactive adaptation produce quantifiably more natural motions? If so, how?

Figure 5.1: Experimental setup for user study

5.1 Experimental Design

The interaction environment for our experiments is similar to the design from [6]. In each trial a virtual scene was shown to the participant using stereoscopic rendering that was colocated, via a mirror setup, with the participant's hand. See Figure 5.1 for an illustration. The angled mirror reflected the output of a downward-facing monitor (not pictured) so that the plane of the image was at the same depth as the participant's fingertips at the center of the workspace.
This allows for more natural interactions than the mirrorless setup shown in Figure 3.2 but comes at the cost of an environment poorly suited for integration with a typical animator workflow.

During individual sessions, seven participants (ages 19-25, five male) completed a set of interaction tasks, each separated into blocks of 10 trials. Participants were guided through a brief familiarization period of grabbing and stacking animated blocks and experiencing the associated haptic feedback before starting the recorded trials.

In each trial, participants controlled the motion of an anthropomorphic hand with either five fingers (Tasks 2 and 3) or a single finger (Task 1). Position and force trajectories for the haptic interface, animated hand, and objects in the scene were recorded at 100 Hz over the duration of each trial. When participants made an error which prevented completion of a task, the trial was reset for a single repeat attempt before moving on to the next.

In all trials, motion of the virtual hand was driven by the system outlined in Section 3.5. In Tasks 1 and 2, haptic feedback was alternately enabled and disabled per block; whether feedback was enabled in the first block of each task was counterbalanced across participants. Haptic feedback was enabled in all trials for Task 3.

In most sessions, there were short (<5 minute) breaks between task blocks, both for the participants' benefit and to prevent overheating of the haptic device motors. The latter was mainly necessary for Tasks 2 and 3, which required the device motors to simulate the large vertical weight and horizontal grip forces needed for lifting a virtual object.

5.1.1 Remarks on Basic Lifting Task

We note that some of the earlier participant sessions included an additional task which we do not describe in detail (4 participants, 2-4 blocks per participant, 136 total trials).
It consisted of reaching out to grasp and lift either a 100 g or 300 g block to a specified height before replacing the block on the ground. The mass of the block in each trial was indicated to the participant by its color and size before contact was made. We initially hypothesized that this task might demonstrate interesting behavior regarding grip and lift force profiles depending on the presence or absence of haptic feedback. Specifically, we wanted to compare interactions using our haptic interface to the anesthetized fingertip scenario from [64]. However, our analysis to date has revealed no significant differences in grip/lift force trends based on the presence of haptic feedback. We believe that a new, more focused study may prove useful for investigating this aspect of precision grasping.

5.2 Effect of Haptic Feedback

We describe our observations of two separate tasks with the goal of evaluating participants' motor behavior with and without haptic feedback from our interface. Although capture devices such as instrumented data gloves are useful for recording free-space hand motions, we hypothesize that it is difficult to emulate the subtle dynamics of pressing, lifting, and manipulating physical objects without a complementary sensation of touch.

5.2.1 Task 1: Light Touch Control

Task 1 (7 participants, 2 blocks per participant, 140 total trials) was designed to test the level of control of very light touches using the animated hand. In each trial, participants were asked, using a single animated fingertip, to scratch as lightly as possible across a virtual surface for a duration of at least 1 second.

Figure 5.2 shows the magnitude of forces on the fingertip as it makes contact with the surface in each trial. For this precision task, there are pronounced benefits to adding haptic feedback: the finger contact is lighter, more consistent in magnitude, and quicker to reach a steady contact state.
When guiding their motion using only visual feedback, participants were slow to react to the finger's contact with the surface and unable to reproduce and maintain a constant, light force.

5.2.2 Task 2: Indirect Interactions With a Virtual Tool

In Task 2 (7 participants, 2-4 blocks per participant, 240 total trials), participants were asked to grasp either a 100 g or 300 g block and use it to tap once on the surface of a raised platform before replacing the block at its original position. No time constraint was imposed; participants were instructed to adopt a natural, self-determined pace.

For this task, we are interested in the quality of the motion during the "tap", shown in Figure 5.3. In a real interaction, humans can detect and respond to the contact between the block tool and the platform both by visual observation and by a sharp inversion of the vertical force on the block from downward (due to the weight of the block) to upward (due to the contact forces on the bottom of the block).

Figure 5.2: Task 1: Contact force on fingertip while attempting the lightest possible scratch on a surface, with (blue) and without (red) haptic feedback. (a) Trials for a representative participant. (b) Average across participants with first/third quartiles given.

Figure 5.3: Task 2: Magnitude of vertical force on a gripped block while using it to tap on a platform, with (blue) and without (red) haptic feedback. Time and force values are offset to zero at the start of the tap. (a) Trials for a representative participant. (b) Average across participants with first/third quartiles given.

As in the results for Task 1, there is a delayed and prolonged reaction to the moment of contact when haptic feedback is disabled.
When the participant taps the block on the platform, haptic feedback facilitates a sharp response to the event, on the order of tens of ms, while motions without feedback show a mushy reaction of 200-400 ms. This unnaturally slow response is readily observable when viewing a playback of the motion.

5.2.3 Discussion

The results from the preceding Tasks 1 and 2 reinforce our belief that haptic feedback is a strong aid when creating motions with contact between a manipulator and the rest of the virtual scene. Task 1 (light scratch) is fairly trivial to accomplish with or without haptic feedback, but adding feedback allowed participants to greatly decrease the overpenetration of their virtual finger into the ground during the scratch. Consider how this aids interactions that require a light touch, such as petting a kitten, grasping an egg, or running one's fingers over a page of braille.

Task 2's grasp-lift-tap routine was more challenging, as it required coordinating a full virtual hand with five fingers at once; the hand might easily knock over or drop the tool. Encouragingly for our general interface design, participants were generally able to complete the task even without force feedback after the first few trials. We credit this success to the significant finger compliance and soft contact model that we used to simulate interactions between the hand and tool. As a result, stable grasps were simple to achieve while lifting and moving the tool. Adding haptic feedback, though, was instrumental in significantly increasing the temporal precision of the "tap" event, allowing the sharpness of the contact to be communicated to the participant in both a mechanical and a perceptual sense.
To understand this benefit, imagine actions such as hammering a nail or playing the piano without this reinforcing sensation of contact.

It is acknowledged that we cannot say whether the sharpness of the contact response in Task 2 results mainly from the constraints of mechanical feedback or from the participant's sensorimotor control responding to the contact event by ceasing to press the block downward. Indeed, we expect that a mixture of these two factors is responsible for the sharpened response when haptic feedback is enabled. One way to separate them in a future experiment would be to include a third condition in which the contact is indicated by a non-grounded, vibrotactile device. This condition would produce only the sensorimotor response to a contact event. See [35] for an example of the effect of such vibrotactile feedback.

In further experiments, it may be desirable to compare the effectiveness of our interface against those built on alternative haptic devices such as the CyberForce, which provides more feedback points over the whole hand at the cost of mounted device bulk and forces which must be normal to the fingerpad.

5.3 Effect of Proactive Adaptation

Our interface includes a method for proactively adapting the hand pose via preshaping, described in Chapter 4. We expect that motions will be in some sense more natural as a result of this adaptation, especially when the hand must change its pose to match the shape of more geometrically interesting objects. The final task in our study sought to quantify this effect.

5.3.1 Task 3: Grasping with Proactive Adaptation

In Task 3 (4 participants, 4 blocks per participant, 160 total trials) we investigated the effectiveness of the proactive adaptation process for producing balanced grasp contact across fingers. With proper adaptive shaping during a grasp, all fingertips should maintain secure contact with the object surface rather than have contact limited to a subset of fingertips.
We use the potential difficulties raised in the introduction of Chapter 4 to measure the "unnaturalness" of the interaction, examining both the relative time in contact and the grasp force applied by each fingertip.

Participants were asked to grasp and lift objects to a specified height of 10 cm. The object to be lifted in each trial was either a concave hourglass-like primitive mesh or a simple chess piece. In a random subset of half the trials per block, the animated hand was proactively shaped based on a small, predefined collection of grasp shaping samples for the relevant object. The remaining trials used only reactive compliance to shape the grasp. We considered each fingertip to be in contact with the object if the force on it rose above a low threshold of 0.001 N.

Figure 5.4: Task 3: Contact quality for each finger during task execution, averaged over all participants. Ideally, we expect contact to be nearly 100% for all fingers and contact forces to be balanced across non-thumb fingertips. (a) Percent of task time in contact. (b) Total summed forces during task.

See Figure 5.4 for a summary of our observations, averaged over all participants. Error bars indicate standard deviations of each value. Note that the contact percentage metric is normalized to the time during each trial in which at least one fingertip was contacting the object.

5.3.2 Discussion

The interpretation of proactive adaptation's effectiveness is less clear-cut than for haptic feedback. Based on Figure 5.4, note that proactive adaptation successfully increased the portion of the task in which the middle and ring fingers were in contact with the object.
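The two contact-quality metrics can be sketched as follows (an illustrative reconstruction, not the study's actual analysis code), using the 0.001 N threshold and normalizing contact time to the timesteps in which at least one fingertip touches the object:

```python
import numpy as np

CONTACT_THRESHOLD_N = 0.001  # force threshold used to detect fingertip contact

def contact_metrics(forces):
    """Per-finger contact statistics for a single trial.

    forces: T x n array of contact-force magnitudes sampled at each of
    T timesteps for n fingertips. Returns (percent_in_contact, total_force)
    per finger, where percent_in_contact is normalized to the timesteps
    in which at least one fingertip touches the object.
    """
    forces = np.asarray(forces, dtype=float)
    in_contact = forces > CONTACT_THRESHOLD_N
    any_contact = in_contact.any(axis=1)       # timesteps with any touch at all
    denom = max(any_contact.sum(), 1)          # avoid division by zero
    percent = 100.0 * in_contact[any_contact].sum(axis=0) / denom
    total_force = forces.sum(axis=0)           # summed contact force per finger
    return percent, total_force
```

Averaging these per-trial values across trials and participants would yield per-finger summaries of the kind reported in Figure 5.4.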
Additionally, as desired, the manipulation forces have also been partly offloaded from the index finger onto these fingers. Contrast this with the baseline case of using only reactive compliance, in which the middle and ring fingers often failed to make or keep secure contact; the normal circular grasp shape is ill-suited to the concave object shapes in this task.

However, shaping from proactive adaptation at the same time reduced the quality of contact for the little finger. The total contact force on this finger fell steeply, indicating a load imbalance compared to the other non-thumb fingers, and the portion of time spent in contact with the object became highly variable between trials and participants.

These results contrast somewhat with our own informal impressions. From visual inspection, proactive adaptation clearly improved the grasp shaping, with the animated hand smoothly molding itself to the object surface. However, it seems that the little finger failed to make true, substantial contact in many cases. It is believed that this occurred both because participants were naïve to the use of proactive adaptation and because they were not explicitly asked to ensure that all fingers made contact while grasping the object. We theorize that, in practical use by an informed animator, proactive adaptation will be more effective at creating secure contacts, though further experiments will be necessary to verify this assertion.

Chapter 6

Conclusions

Especially when grasping or manipulating objects, humans demonstrate high dexterity that is difficult to capture and replicate in digital environments. Precision motions depend on a complex interplay between the mechanics of fingerpad contacts, rapid visual and haptic feedback sensations, and the ability of the performer to adaptively shape the grasp based on those sensations.
Traditional animation interfaces address only a limited set of these factors and require expert training to create believable motions.

In this thesis, we present a novel interface which allows users to interactively animate the motion of a hand as it manipulates 3D virtual objects. The interface provides reactive compliance to contact with object surfaces as well as haptic feedback signals that are important for intuitive manipulation of objects. As part of this interface, we describe a method for bidirectional mapping of motions and forces between the low-dimensional physical user interface and animated hands which have an arbitrary number of fingertips and articulated kinematic links.

Additionally, we show an augmentation to this interface which allows proactive adaptation of the pose of an animated hand based on prior sampled knowledge and the current interaction context, which simplifies the creation of new, rich interactions using our interface.

Finally, in order to test the efficacy of haptic feedback and proactive adaptation for our tool, we give a quantitative analysis of results from a user study with nonexpert animators. Our results show the importance of haptic feedback for creating precision manipulations of virtual objects. Motions with feedback were more precise and natural, with reduced contact overpenetration during light finger drags and sharpened subject reactions to contact events. The efficacy of our proactive adaptation was not as clear, however. Although it creates more visually dynamic grasps and reduces an overweighting of contact force on the index finger while grasping, using proactive adaptation also reduced grasp quality for the little finger both in terms of time in contact and force balance.
We hypothesize that this issue would be lessened when users are made explicitly aware of the proactive shaping aspect of the interface; further user studies would be necessary to test this belief.

6.1 Limitations

At present, our interface is limited to creating precision interactions using the fingertips. Apart from the fingerpads, the animated hand is non-physical and kinematically posed after the main control update logic is completed. We expect that our approach may be extended to whole-hand interactions by mapping user input to compliant joint control rather than just the fingertip transforms. Environmental contact, currently limited to the fingerpads, could also be extended to the phalanges, palm, and other parts of the hand, allowing for a broader diversity of grasp types. While we believe our model may be extended to high-contact situations such as power grasps, it remains to be seen whether the dimension-reducing force mapping remains intuitive for the animator in those situations.

The size of the virtual workspace for creating animations is limited by the shared work region between the two Phantom devices in our hardware setup. Although small motions within the workspace are easy to produce, large translations or motions where the hand twists to a large degree may encounter the rotational limits of the device end effectors. Additionally, there is a tradeoff between the effective precision of the animated hand's motion and the scale of the virtual workspace, depending on the scaling parameter s found in Equation 3.4.

As noted in Chapter 4, the interpolation hyperparameters for proactive adaptation depend on the relative geometric feature sizes for a given object.
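In a GPR-based interpolation scheme like the one used for proactive adaptation, such hyperparameters are essentially kernel length-scales. The following minimal sketch illustrates their role; the 1-D grasp context, variable names, and sample values are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def rbf_kernel(A, B, length_scale):
    """Squared-exponential covariance between rows of context matrices A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def gpr_predict(X_train, y_train, X_query, length_scale, noise=1e-4):
    """Standard GPR posterior mean: k(X*, X) (K + noise*I)^-1 y."""
    K = rbf_kernel(X_train, X_train, length_scale) + noise * np.eye(len(X_train))
    alpha = np.linalg.solve(K, y_train)
    return rbf_kernel(X_query, X_train, length_scale) @ alpha

# Hypothetical adaptation samples: 1-D grasp context (e.g., position along
# the object surface) -> sampled fingertip pose offset.
X = np.array([[0.0], [1.0], [2.0]])
y = np.array([0.00, 0.30, 0.05])

# A length-scale smaller than the feature spacing keeps each sample's
# influence local, so querying at a sampled context recovers that sample.
offset = gpr_predict(X, y, np.array([[1.0]]), length_scale=0.3)
```

A length-scale much larger than the spacing of the object's geometric features would instead smear the three samples together, which is why these values currently require per-object tuning.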
We manually tune these values in the current implementation, but it would be preferable if they changed automatically based on some geometric criteria.

6.2 Future Work

Our interface is able to modify its behavior based on previous experiences, but it is currently the responsibility of the animator to manually cue when these experiences are sampled. A possible future goal is to remove the need to add grasp shape adaptation samples explicitly. Instead, the interface could sample interactions continually so that improvements in proactive adaptation occur without user intervention. We have conducted early investigations into this alternative strategy and found that a key task is assigning a dynamic quality measure to each sample. To manage the much larger collection of samples that is produced by automatic sampling, it is necessary to modify the GPR covariance function so that the influence of older or lower-quality samples is reduced or eliminated.

Finally, it should be possible to generalize more broadly the grasp context and the knowledge encoded by the adaptation process. Higher-level contextual cues such as animator task goals or functional properties of objects in the environment should modify the grasp adaptation appropriately. For example, when grasping a screwdriver, the hand should be shaped in preparation for applying torque about the tool's major axis. This also suggests that sampled interactions for one object may be reused on other objects with a similar functional purpose. Thus, we hope to investigate ways of combining prior grasp knowledge across different classes of objects, automatically calling up adaptive grasp shaping for a novel object based on its similarity to previously encountered objects.

Bibliography

[1] S. Andrews and P. G. Kry. Policies for goal directed multi-finger manipulation. In VRIPHYS, pages 137–145, 2012. → pages 10
[2] F. Barbagli, A. Frisoli, K. Salisbury, and M. Bergamasco. Simulating human fingers: a soft finger proxy model and algorithm.
In Haptics Symposium, pages 9–17, 2004. → pages 26
[3] J. Barbič and D. James. Six-dof haptic rendering of contact between geometrically complex reduced deformable models. Haptics, IEEE Transactions on, 1(1):39–52, 2008. ISSN 1939-1412. doi:10.1109/TOH.2008.1. → pages 29
[4] C. Basdogan, C.-H. Ho, M. A. Srinivasan, and M. Slater. An experimental study on the role of touch in shared virtual environments. ACM Trans. Comput.-Hum. Interact., 7(4):443–460, Dec. 2000. ISSN 1073-0516. doi:10.1145/365058.365082. → pages 45
[5] G. Baud-Bovy and J. F. Soechting. Two virtual fingers in the control of the tripod grasp. Journal of Neurophysiology, 86(2):604–615, 2001. → pages 3
[6] M. Bianchi, G. Grioli, E. P. Scilingo, M. Santello, and A. Bicchi. Validation of a virtual reality environment to study anticipatory modulation of digit forces and position. In Proceedings of the 2010 International Conference on Haptics: Generating and Perceiving Tangible Sensations, Part II, EuroHaptics '10, pages 136–143, Berlin, Heidelberg, 2010. Springer-Verlag. ISBN 978-3-642-14074-7. → pages 12, 46
[7] A. Bicchi. On the closure properties of robotic grasping. The International Journal of Robotics Research, 14(4):319–334, 1995. → pages 8
[8] C. W. Borst and A. P. Indugula. Realistic virtual grasping. In Virtual Reality, 2005. Proceedings. VR 2005. IEEE, pages 91–98. IEEE, 2005. → pages 12
[9] J. Chen, S. Izadi, and A. Fitzgibbon. KinÊtre: Animating the world with the human body. In Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology, UIST '12, pages 435–444, New York, NY, USA, 2012. ACM. ISBN 978-1-4503-1580-7. doi:10.1145/2380116.2380171. → pages 32
[10] J.-S. Cheong, H. Haverkort, and A. Stappen. On computing all immobilizing grasps of a simple polygon with few contacts. In T. Ibaraki, N. Katoh, and H.
Ono, editors, Algorithms and Computation, volume 2906 of Lecture Notes in Computer Science, pages 260–269. Springer Berlin Heidelberg, 2003. ISBN 978-3-540-20695-8. doi:10.1007/978-3-540-24587-2_28. → pages 8
[11] M. Cutkosky. On grasp choice, grasp models, and the design of hands for manufacturing tasks. Robotics and Automation, IEEE Transactions on, 5(3):269–279, Jun 1989. ISSN 1042-296X. doi:10.1109/70.34763. → pages 8
[12] C. de Granville, J. Southerland, and A. H. Fagg. Learning grasp affordances through human demonstration. In Proceedings of the International Conference on Development and Learning (ICDL06), 2006. → pages 8
[13] M. de La Gorce, D. Fleet, and N. Paragios. Model-based 3D hand pose estimation from monocular video. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 33(9):1793–1805, 2011. ISSN 0162-8828. doi:10.1109/TPAMI.2011.33. → pages 9
[14] A. De Rugy, G. E. Loeb, and T. J. Carroll. Muscle coordination is habitual rather than optimal. The Journal of Neuroscience, 32(21):7384–7391, 2012. → pages 7
[15] R. Detry, C. Ek, M. Madry, J. Piater, and D. Kragic. Generalizing grasps across partly similar objects. In Robotics and Automation (ICRA), 2012 IEEE International Conference on, pages 3791–3797, May 2012. doi:10.1109/ICRA.2012.6224992. → pages 8
[16] R. Detry, C. H. Ek, M. Madry, and D. Kragic. Learning a dictionary of prototypical grasp-predicting parts from grasping experience. In Robotics and Automation (ICRA), 2013 IEEE International Conference on, pages 601–608. IEEE, 2013. → pages 8
[17] G. ElKoura and K. Singh. Handrix: animating the human hand. In Proceedings of the 2003 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pages 110–119. Eurographics Association, 2003. → pages 11
[18] F. Conti, F. Barbagli, D. Morris, and C. Sewell. CHAI 3D: An open-source library for the rapid development of haptic scenes. In IEEE World Haptics, Pisa, Italy, 2005. → pages 20, 67
[19] T. Feix, R. Pawlik, H.-B.
Schmiedmayer, J. Romero, and D. Kragic. A comprehensive grasp taxonomy. In Robotics, Science and Systems: Workshop on Understanding the Human Hand for Advancing Robotic Manipulation, pages 2–3, 2009. → pages 14
[20] C. Ferrari and J. Canny. Planning optimal grasps. In Robotics and Automation, 1992. Proceedings., 1992 IEEE International Conference on, pages 2290–2295, May 1992. doi:10.1109/ROBOT.1992.219918. → pages 7
[21] C. Garre, F. Hernandez, A. Gracia, and M. Otaduy. Interactive simulation of a deformable hand for haptic rendering. In World Haptics Conference (WHC), 2011 IEEE, pages 239–244, 2011. doi:10.1109/WHC.2011.5945492. → pages 12
[22] Google. Protocol buffers. http://code.google.com/apis/protocolbuffers/. → pages 68
[23] T. Iberall, G. Bingham, and M. Arbib. Opposition space as a structuring concept for the analysis of skilled hand movements. Experimental Brain Research, 15:158–173, 1986. → pages 3
[24] S. Jain and C. K. Liu. Controlling physics-based characters using soft contacts. ACM Trans. Graph. (SIGGRAPH Asia), 30:163:1–163:10, Dec. 2011. ISSN 0730-0301. doi:10.1145/2070781.2024197. → pages 11
[25] R. Johansson and G. Westling. Roles of glabrous skin receptors and sensorimotor memory in automatic control of precision grip when lifting rougher or more slippery objects. Experimental Brain Research, 56(3):550–564, 1984. ISSN 0014-4819. doi:10.1007/BF00237997. → pages 6
[26] R. S. Johansson and J. R. Flanagan. Coding and use of tactile signals from the fingertips in object manipulation tasks. Nature Reviews Neuroscience, 10(5):345–359, 2009. → pages 20
[27] L. A. Jones and S. J. Lederman. Human Hand Function. Oxford University Press, 2006. → pages 3
[28] L. A. Jones and E. Piateski. Contribution of tactile feedback from the hand to the perception of force. Experimental Brain Research, 168(1-2):298–302, 2006. → pages 20
[29] S. Kim, J. Berkley, and M. Sato.
A novel seven degree of freedom haptic device for engineering design. Virtual Reality, 6(4):217–228, 2003. ISSN 1359-4338. doi:10.1007/s10055-003-0105-x. → pages 12
[30] H. Kruger, E. Rimon, and A. van der Stappen. Local force closure. In Robotics and Automation (ICRA), 2012 IEEE International Conference on, pages 4176–4182, May 2012. doi:10.1109/ICRA.2012.6225091. → pages 8
[31] P. G. Kry and D. K. Pai. Interaction capture and synthesis. ACM Trans. Graph., 25(3):872–880, July 2006. ISSN 0730-0301. doi:10.1145/1141911.1141969. → pages 10, 26, 27
[32] K. Kuchenbecker, D. Ferguson, M. Kutzer, M. Moses, and A. Okamura. The touch thimble: Providing fingertip contact feedback during point-force haptic interaction. In Haptic Interfaces for Virtual Environment and Teleoperator Systems, 2008. Haptics 2008. Symposium on, pages 239–246, March 2008. doi:10.1109/HAPTICS.2008.4479950. → pages 12, 20, 45
[33] C. K. Liu. Dextrous manipulation from a grasping pose. ACM Trans. Graph., 28(3):59:1–59:6, July 2009. ISSN 0730-0301. doi:10.1145/1531326.1531365. → pages 10
[34] G. Loeb. Optimal isn't good enough. Biological Cybernetics, 106(11-12):757–765, 2012. ISSN 0340-1200. doi:10.1007/s00422-012-0514-6. → pages 7
[35] W. McMahan, J. Gewirtz, D. Standish, P. Martin, J. Kunkel, M. Lilavois, A. Wedmid, D. Lee, and K. Kuchenbecker. Tool contact acceleration feedback for telerobotic surgery. Haptics, IEEE Transactions on, 4(3):210–220, May 2011. ISSN 1939-1412. doi:10.1109/TOH.2011.31. → pages 13, 52
[36] M. Meredith and S. Maddock. Motion capture file formats explained. Department of Computer Science, University of Sheffield, 211, 2001. → pages 68
[37] K. Minamizawa, D. Prattichizzo, and S. Tachi. Simplified design of haptic display by extending one-point kinesthetic feedback to multipoint tactile feedback.
In Haptics Symposium, 2010 IEEE, pages 257–260, March 2010. doi:10.1109/HAPTIC.2010.5444646. → pages 20
[38] B. Mishra, J. Schwartz, and M. Sharir. On the existence and synthesis of multifinger positive grips. Algorithmica, 2(1-4):541–558, 1987. ISSN 0178-4617. doi:10.1007/BF01840373. → pages 8
[39] I. Mordatch, Z. Popović, and E. Todorov. Contact-invariant optimization for hand manipulation. In Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation, SCA '12, pages 137–144, Aire-la-Ville, Switzerland, 2012. Eurographics Association. ISBN 978-3-905674-37-8. → pages 10
[40] I. Mordatch, E. Todorov, and Z. Popović. Discovery of complex behaviors through contact-invariant optimization. ACM Transactions on Graphics (TOG), 31(4):43, 2012. → pages 10
[41] I. Mordatch, J. M. Wang, E. Todorov, and V. Koltun. Animating human lower limbs using contact-invariant optimization. ACM Transactions on Graphics (TOG), 32(6):203, 2013. → pages 10
[42] S. Mulatto, A. Formaglio, M. Malvezzi, and D. Prattichizzo. Using postural synergies to animate a low-dimensional hand avatar in haptic simulation. Haptics, IEEE Transactions on, 6(1):106–116, 2013. ISSN 1939-1412. doi:10.1109/TOH.2012.13. → pages 12
[43] J. Murayama, L. Bougrila, Y. Luo, K. Akahane, S. Hasegawa, B. Hirsbrunner, and M. Sato. SPIDAR G&G: A two-handed haptic interface for bimanual VR interaction. In Proceedings of EuroHaptics, pages 138–146. Citeseer, 2004. → pages 12
[44] A. M. Murray, R. L. Klatzky, and P. K. Khosla. Psychophysical characterization and testbed validation of a wearable vibrotactile glove for telemanipulation. Presence: Teleoperators and Virtual Environments, 12(2):156–182, 2003. → pages 13
[45] C. Pacchierotti, F. Chinello, M. Malvezzi, L. Meli, and D. Prattichizzo. Two finger grasping simulation with cutaneous and kinesthetic force feedback.
In Proceedings of the 2012 International Conference on Haptics: Perception, Devices, Mobility, and Communication, Part I, EuroHaptics '12, pages 373–382, Berlin, Heidelberg, 2012. Springer-Verlag. ISBN 978-3-642-31400-1. doi:10.1007/978-3-642-31401-8_34. → pages 11, 20
[46] P. Pastor, L. Righetti, M. Kalakrishnan, and S. Schaal. Online movement adaptation based on previous sensor experiences. In Intelligent Robots and Systems (IROS), 2011 IEEE/RSJ International Conference on, pages 365–371, Sept 2011. doi:10.1109/IROS.2011.6095059. → pages 8
[47] N. S. Pollard and V. B. Zordan. Physically based grasping control from example. In Proceedings of the 2005 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, SCA '05, pages 311–318, New York, NY, USA, 2005. ACM. ISBN 1-59593-198-8. doi:10.1145/1073368.1073413. → pages 10
[48] D. Prattichizzo and J. C. Trinkle. Grasping. Springer Handbook of Robotics, pages 671–700, 2008. → pages 8
[49] C. E. Rasmussen. Gaussian Processes for Machine Learning. MIT Press, 2006. → pages 39, 41
[50] R. M. Sanso and D. Thalmann. A hand control and automatic grasping system for synthetic actors. In Computer Graphics Forum, volume 13, pages 167–177. Wiley Online Library, 1994. → pages 35
[51] M. Santello and J. F. Soechting. Gradual molding of the hand to object contours. Journal of Neurophysiology, 79(3):1307–1320, 1998. → pages 6, 35
[52] M. Santello, M. Flanders, and J. F. Soechting. Postural hand synergies for tool use. The Journal of Neuroscience, 18(23):10105–10115, 1998. → pages 6
[53] E. L. Sauser, B. D. Argall, G. Metta, and A. G. Billard. Iterative learning of grasp adaptation through human corrections. Robotics and Autonomous Systems, 60(1):55–71, 2012. → pages 8
[54] S. Schaal.
Dynamic movement primitives: a framework for motor control in humans and humanoid robotics. In H. Kimura, K. Tsuchiya, A. Ishiguro, and H. Witte, editors, Adaptive Motion of Animals and Machines, pages 261–280. Springer Tokyo, 2006. ISBN 978-4-431-24164-5. doi:10.1007/4-431-31381-8_23. → pages 8
[55] Y. Seol, C. O'Sullivan, and J. Lee. Creature features: Online motion puppetry for non-human characters. In Proceedings of the 12th ACM SIGGRAPH/Eurographics Symposium on Computer Animation, SCA '13, pages 213–221, New York, NY, USA, 2013. ACM. ISBN 978-1-4503-2132-7. doi:10.1145/2485895.2485903. → pages 32
[56] R. Smith. Open Dynamics Engine, 2008. URL http://www.ode.org/. → pages 20, 67
[57] S. Sueda, A. Kaufman, and D. K. Pai. Musculotendon simulation for hand animation. ACM Trans. Graph. (Proc. SIGGRAPH), 27(3), 2008. → pages 11
[58] Y. P. Toh, S. Huang, J. Lin, M. Bajzek, G. Zeglin, and N. Pollard. Dexterous telemanipulation with a multi-touch interface. In Humanoid Robots (Humanoids), 2012 12th IEEE-RAS International Conference on, pages 270–277, Nov 2012. doi:10.1109/HUMANOIDS.2012.6651531. → pages 13
[59] B. Unger, A. Nicolaidis, P. Berkelman, A. Thompson, R. Klatzky, and R. Hollis. Comparison of 3-D haptic peg-in-hole tasks in real and virtual environments. In Intelligent Robots and Systems, 2001. Proceedings. 2001 IEEE/RSJ International Conference on, volume 3, pages 1751–1756, 2001. doi:10.1109/IROS.2001.977231. → pages 45
[60] H. Wang, K. H. Low, M. Wang, and F. Gong. A mapping method for telemanipulation of the non-anthropomorphic robotic hands with initial experimental validation. In Robotics and Automation, 2005. ICRA 2005. Proceedings of the 2005 IEEE International Conference on, pages 4218–4223, April 2005. doi:10.1109/ROBOT.2005.1570768. → pages 13
[61] R. Wang, S. Paris, and J. Popović. 6D hands: markerless hand-tracking for computer aided design.
In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, pages 549–558. ACM, 2011. → pages 9
[62] R. Y. Wang and J. Popović. Real-time hand-tracking with a color glove. ACM Transactions on Graphics, 28(3), 2009. → pages 9, 30
[63] Y. Wang, J. Min, J. Zhang, Y. Liu, F. Xu, Q. Dai, and J. Chai. Video-based hand manipulation capture through composite motion control. ACM Transactions on Graphics (TOG), 32(4):43:1–43:14, 2013. → pages 9
[64] G. Westling and R. Johansson. Factors influencing the force control during precision grip. Experimental Brain Research, 53(2):277–284, 1984. ISSN 0014-4819. doi:10.1007/BF00238156. → pages 6, 47
[65] G.-H. Yang, K.-U. Kyung, M. Srinivasan, and D.-S. Kwon. Quantitative tactile display device with pin-array type tactile feedback and thermal feedback. In Robotics and Automation, 2006. ICRA 2006. Proceedings 2006 IEEE International Conference on, pages 3917–3922, May 2006. doi:10.1109/ROBOT.2006.1642302. → pages 20
[66] Y. Ye and C. K. Liu. Synthesis of detailed hand manipulations using contact sampling. ACM Trans. Graph., 31(4):41:1–41:10, July 2012. ISSN 0730-0301. doi:10.1145/2185520.2185537. → pages 10, 11, 36
[67] L. Ying, J. Fu, and N. Pollard. Data-driven grasp synthesis using shape matching and task-based pruning. Visualization and Computer Graphics, IEEE Transactions on, 13(4):732–747, July 2007. ISSN 1077-2626. doi:10.1109/TVCG.2007.1033. → pages 10
[68] W. Zhao, J. Chai, and Y.-Q. Xu. Combining marker-based mocap and RGB-D camera for acquiring high-fidelity hand motion data. In Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation, SCA '12, pages 33–42, Aire-la-Ville, Switzerland, 2012. Eurographics Association. ISBN 978-3-905674-37-8. → pages 9
[69] C. B. Zilles and J. Salisbury. A constraint-based god-object method for haptic display.
In Intelligent Robots and Systems 95, 'Human Robot Interaction and Cooperative Robots', Proceedings, 1995 IEEE/RSJ International Conference on, volume 3, pages 146–151, 1995. doi:10.1109/IROS.1995.525876. → pages 12, 26

Appendix A

System Design

The following is a brief overview of the major software and hardware components that implement our interface. For consistency with the code implementation, we will refer to the interface by the internal title "Dihaptic" (a truncation of "distal control haptic"), though we discourage the use of this name for general discussion.

A.1 System Outline

Figure A.1 shows the major structural components of the Dihaptic system. Based on commands from the GraspAnimUI user application, a virtual scene consisting of a ground plane, one or more rigid objects, and an animated manipulator (e.g., a human hand) is initialized and timestepped. DihapticLib comprises the bulk of the implementation for this project, containing the primary methods for timestepping the scene state, updating the pose of the animated manipulator, and calculating user feedback forces based on the current scene state and haptic device information forwarded from the CHAI 3D haptic library [18]. We use a customized version of CHAI 3D that includes new algorithms for a soft fingerpad contact approximation as well as miscellaneous other changes that proved useful for the Dihaptic project (such as improved multi-device support and scene resource management).

For convenience, both DihapticLib and CHAI 3D are permitted to interface with ODE [56], the underlying dynamics simulator, and OpenGL, the graphical rendering interface.

Figure A.1: Major components of the Dihaptic system (ODE simulation; CHAI 3D library, customized for Dihaptic; OpenHaptics HD device API; GraspAnimUI app UI in Qt; OpenGL graphics; DihapticLib for control, manipulators, and IK)

While CHAI 3D's scene graph system is not structured to support advanced shaders, Dihaptic does use a simple shadow mapping shader program to improve depth perception for users. It also supports stereo rendering via OpenGL, which is particularly effective for workspace setups with colocated graphic rendering and haptic feedback (such as in our user study in Chapter 5).

Dihaptic uses Google Protobuffers [22] for serializing motion sequence data for replay and observation within the GraspAnimUI application. From the user interface, motion may be exported to a BVH motion sequence [36] for use in rendering applications.

The physical arrangement of the Phantom devices used for Dihaptic is shown in Figure A.2; both devices are connected to a single host computer.

Figure A.2: Proper Phantom device arrangement for Dihaptic ((a) top view; (b) side view; (c) side view, in use)

Appendix B

Custom Gimbal Design

(Engineering drawings for the custom gimbal; the drawing annotations did not survive text extraction.)
