UBC Theses and Dissertations


Two-handed coordination in robots: by combining two one-handed trajectories based on probabilistic models of taskspace effects — Blumer, Benjamin, 2016

Two-handed coordination in robots: By combining two one-handed trajectories based on probabilistic models of taskspace effects

by

Benjamin Blumer

B.Sc. (first-class honours), The University of Calgary, 2011

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES (Mechanical Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

December 2016

© Benjamin Blumer 2016

Abstract

Human environments and tools are commonly designed to be used by two-handed agents. In order for a robot to make use of human tools or to navigate in a human environment, it must be able to use two arms. Planning motion for two arms is a difficult task, as it requires taking into account a large number of joints and links and involves both temporal and spatial coordination. The work in this thesis addresses these problems by providing a framework to combine two single-arm trajectories to perform a two-armed task. Inspired by results indicating that humans perform better on motor tasks when focusing on the outcome of their movements rather than their joint motions, I propose a solution that considers each trajectory's effect on the taskspace.

I develop a novel framework for modifying and combining one-armed trajectories to complete two-armed tasks. The framework is designed to be as general as possible and is agnostic to how the one-armed trajectories were generated and the robot(s) being used. Physical roll-outs of the individual arm trajectories are used to create probabilistic models of their performance in taskspace using Gaussian Mixture Models. This approach allows for error compensation. Trajectories are combined in taskspace in order to achieve the highest probability of success and task performance quality. The framework was tested using two Barrett WAM robots performing the difficult, two-armed task of serving a ping-pong ball.
For this demonstration, the trajectories were created using quintic interpolations of joint coordinates. The trajectory combinations are tested for collisions in the robot simulation tool, Gazebo. I demonstrated that the system can successfully choose and execute the highest-probability trajectory combination that is collision-free to achieve a given taskspace goal.

The framework timed the two single-arm trajectories to within 0.0389 s of optimal – approximately equal to the time between frames of the 30 Hz camera. The implemented algorithm successfully ranked the likelihood of success for four out of five serving motions. Finally, the framework's ability to perform a higher-level task was demonstrated by performing a legal ping-pong serve. These results were achieved despite significant noise in the data.

Preface

The author developed the research problem in consultation with Drs. Machiel Van der Loos and Elizabeth Croft. The author conducted the literature review, developed the algorithm, implemented all software, conducted all experiments, and performed the analysis.

The contents of Chapter 3 and the first experiment in Chapter 5 have been submitted for publication as: Benjamin Blumer, Machiel Van der Loos, and Elizabeth Croft. Serving up two-handed coordination. International Conference on Robotics and Automation 2017. The author wrote the submission. Editing and supervision were provided by Drs. Machiel Van der Loos and Elizabeth Croft.

All software and hardware developed for this work (described in Chapter 4) is available at the author's BitBucket software repository [1]. Videos of the robot serving are available on the author's YouTube page [2].

Contents

Abstract
Preface
Contents
List of Tables
List of Figures
Glossary
Acknowledgements
Dedication
1 Introduction
  1.1 Introduction to robot motion planning
  1.2 Challenges in two-arm motion planning
    1.2.1 Computational difficulty
    1.2.2 Uncertainties in robot execution and environment
  1.3 Why a ping-pong serve?
  1.4 Contributions
    1.4.1 A conceptual framework for planning two-armed motions
    1.4.2 An implementation of this framework to demonstrate its capabilities and feasibility
    1.4.3 Validation of the framework by using it to complete a ping-pong serving task
  1.5 Thesis layout
2 Background
3 Framework
  3.1 Obtaining one-armed trajectories
  3.2 Probabilistic taskspace trajectory representations
  3.3 Combining two single-armed trajectories
    3.3.1 Goal
    3.3.2 Tool - Mode finding
    3.3.3 Tool - Marginal distribution
    3.3.4 Tool - Conditional distribution
    3.3.5 Tool - Joint distribution
    3.3.6 Tool - External modifications
    3.3.7 Tool - Time shifting
    3.3.8 Ranking
  3.4 Collision Avoidance
  3.5 Summary
4 Implementation
  4.1 Obtaining one-armed trajectories
    4.1.1 Generating robot motions
    4.1.2 Capturing the effects of robot motions in taskspace
  4.2 Probabilistic taskspace trajectory representations
  4.3 Choosing trajectory combinations
  4.4 Collision checking
  4.5 Summary of contributions
5 Experiments and Demonstrations
  5.1 Experiment 1: Ability to choose the best trajectory combination
    5.1.1 Hypothesis
    5.1.2 Method
    5.1.3 Results and analysis
    5.1.4 Discussion
    5.1.5 Conclusion
  5.2 Experiment 2: Timing
    5.2.1 Hypothesis
    5.2.2 Method
    5.2.3 Results and analysis
    5.2.4 Discussion
    5.2.5 Conclusion
  5.3 Demonstration: Ping-pong serve
    5.3.1 Hypothesis
    5.3.2 Method
    5.3.3 Results
    5.3.4 Discussion
    5.3.5 Conclusion
  5.4 Experiments 1 & 2 and service demonstration: discussion and conclusion
6 Conclusion
  6.1 Summary of contributions
  6.2 Summary of results
  6.3 Future work
  6.4 Ping-pong serving challenge
Bibliography

List of Tables

5.1 The number of successful hits for each different throwing trajectory (Experiment 1).
5.2 Standard deviation of the ball and paddle positions at the temporal mode (Experiment 1).
5.3 The number of hits for different offsets from the algorithm-recommended time using trajectories T_1^{5s} and T_2^{5s} (Experiment 2).
5.4 The number of hits for different offsets from the algorithm-recommended time using trajectories T_1^{4s} and T_2^{4s} (Experiment 2).
5.5 The number of hits for different offsets from the algorithm-recommended time using trajectories T_1^{3s} and T_2^{3s} (Experiment 2).
5.6 The number of hits for different offsets from the algorithm-recommended time using trajectories T_1^{2s} and T_2^{2s} (Experiment 2).
5.7 The number of hits for different offsets from the algorithm-recommended time using trajectories T_1^{1s} and T_2^{1s} (Experiment 2).
5.8 The lower and upper bound on the satisfactory performance windows for different combinations of swing and throwing trajectories (Experiment 2).

List of Figures

2.1 A sample two-handed trajectory generated by the master-slave approach in [3].
3.1 The two-handed coordination framework.
3.2 A 2-component, 1-dimensional Gaussian Mixture Model (GMM) [4].
3.3 Marginal distribution of a 3-component, 2-dimensional GMM.
3.4 Conditional distribution of a 3-component, 2-dimensional GMM.
3.5 A joint probability distribution.
3.6 The spatial coordinates of the left and right hands for an example regrasping task.
3.7 Marginal probability density functions for each hand for the regrasp task (top and middle) and joint probability density (bottom).
3.8 Conditional (temporal) probability densities of the left and right hands for an example re-grasping task.
4.1 Laser-cut acrylic ball-throwing hand.
4.2 3D-printed ping-pong paddle holder.
4.3 Workflow for generating one-armed trajectories.
4.4 Workflow for capturing taskspace outcomes.
4.5 Workflow for creating Gaussian Mixture Models from taskspace outcomes.
4.6 Pairing trajectories and ranking them by their effect on the combined taskspace.
4.7 Collision-checking procedure.
5.1 Experimentally-determined probability of success vs algorithm-predicted probability of success.
5.2 Cartesian locations of the ball and paddle (Experiment 1) for j_6 = 0.17 radians.
5.3 Cartesian locations of the ball and paddle (Experiment 1) for j_6 = 0.22 radians.
5.4 Cartesian locations of the ball and paddle (Experiment 1) for j_6 = 0.27 radians.
5.5 Cartesian locations of the ball and paddle (Experiment 1) for j_6 = 0.32 radians.
5.6 Cartesian locations of the ball and paddle (Experiment 1) for j_6 = 0.37 radians.
5.7 Success vs time shift from the algorithm-recommended time delay between throwing the ball and swinging the paddle.
5.8 Setup for the ping-pong serve demonstration.

Glossary

Bumblebee — A stereovision camera from Point Grey Research.
Configuration — A configuration of a manipulator is a complete specification of the location of every point on the manipulator [5].
Configuration Space — The set of all configurations is known as the configuration space [5].
CSV — A table of data in which columns are separated by commas. A useful format for saving and loading datasets on a computer.
DMP — The Dynamic Movement Primitive framework is a means of storing a robot trajectory as a non-linear second-order differential equation.
DOF — An object has n degrees of freedom (DOF) if its configuration can be minimally specified by n parameters [5].
EM — Expectation-maximization, an algorithm used for fitting parameters in a Gaussian Mixture Model.
GMM — A Gaussian Mixture Model is a probability distribution composed of a weighted sum of Gaussians.
HMM — A statistical model. Informally, the model assumes that the (hidden) state of the system probabilistically influences the output, and that the probability of transitioning to another state depends only on the current state.
ITTF — The governing body for international table tennis associations.
ROS — An open-source robot operating system. ROS acts as a robot-agnostic communication layer, simplifying the task of having different pieces of software communicate with each other or with robot hardware.
WAM — A 7-degree-of-freedom robot arm.

Acknowledgements

I would like to thank my research supervisors, Drs. Elizabeth Croft and Machiel Van der Loos. They granted me the freedom to explore different fields and problems so that I could find an area that really interested me. They supported me in two formative experiences: doing research at Reykjavík University, Iceland, for a semester and performing research and development at a local company. Without their patience and support, this thesis would not have come to fruition.

I am also grateful for the wonderful people I have for labmates at the CARIS lab. In particular, I would like to acknowledge AJung Moon and Navid Lambert-Shirzad for lengthy discussions that helped me narrow my research topic and Philip Wang for help modelling the paddle-holding hand depicted in Chapter 4.

My dog, Bojo Dogeo, has never failed to cheer me up with a goofy prance, licks, or cuddles.
Her companionship while writing, scripting, and analysing data has been invaluable.

Laurel Anderson has been an incredible lab assistant, late-night-thesis-coding-session dog walker, sounding board, and friend.

Finally, I would like to thank two undergraduate interns: Katelyn Currie, for her collaboration on the design of the ball dispenser described in Chapter 1 and early prototypes of the ball-holding hand depicted in Chapter 4, and Mateus Andreazza, for his contributions to the ball-tracking code.

Dedication

To my parents, Joan, Steve, and Rosanne, and my grandparents, Rhoda and Jack, for their unending encouragement of my nerdy endeavours: personal, academic, and professional.

Chapter 1

Introduction

Robots have the potential to improve human lives in many ways. For example, robots could be used to improve productivity, perform dangerous or boring jobs, or provide assistive care to older persons or people with disabilities. However, many of these jobs require robots to function in human environments and to work with human tools. Human environments are largely designed for two-handed agents. For example, to open a door while carrying something requires the coordination of two arms. Moreover, many human devices require two arms. To open a childproof medicine container, button a shirt, or operate a seat belt requires two-handed coordination. Human tools that require two hands include the broom and dustpan, hammer and nail, and even a fork and knife. It is therefore important to develop the capability for robots to use two arms.

Aspects of two-handed coordination include: relative and absolute position, relative and absolute timing, relative and absolute velocity, and collision avoidance. Clapping is a good example to illustrate this. Imagine applauding after a performance has finished. A person needs the absolute positions of both hands to be in front of the body. Clapping to the side or behind defies social conventions and risks hitting other performance attendees.
In order to clap, the relative position of the palms of the hands must be zero at some point in time – that is, they must touch. If the absolute timing is premature, the performance would be interrupted. If it is delayed, it could result in an awkward pause. The hands must have a relative velocity towards each other so that they meet, and an absolute velocity such that the motion is symmetric. Unintended collisions between the arms and the environment must be avoided.

1.1 Introduction to robot motion planning

For a robot to accomplish a task, it must be able to plan and execute motions. Different tasks require fundamentally different motions and considerations. Take the examples of a robot pouring a drink [6] and a robot folding clothes [7]. The path a hand must take to pour a can of soda into a cup is entirely different from the path(s) required to fold a shirt.

While each task will have its own requirements, the ability to generalize a motion is important. Ideally, the same algorithm that allows a robot to fold a facecloth would also allow it to fold a bath towel. Similarly, a drink-pouring robot should be able to pour a drink into a cup in different locations on different tables or counters. For this reason, it is desirable to generate appropriate trajectories as needed rather than pre-planning every motion.

Robot motions are planned in jointspace, workspace, or taskspace. Jointspace refers to a (multidimensional) space consisting of angles for each of the robot's joints. For example, a robot arm with a single wrist, elbow, and shoulder joint would have a 3-dimensional jointspace. Any point in that space describes a potential robot configuration (e.g., shoulder joint rotated to π, elbow joint bent to π/2, wrist joint bent to π/4). Workspace refers to the area a robot is moving in. For a wheeled robot, this may refer to the floor it is travelling on.
For a robotic arm, this may refer to the 3D space within reach of the hand. Taskspace can be an abstract space representing configurations of variables relevant to the task at hand. For example, the taskspace for the drink-pouring task might consist of the relative distance between the pouring and receiving cups, or the amount of fluid in either.

Often, motion planning involves more than one of these spaces. For example, a basic approach to planning a motion is to identify the desired Cartesian (workspace) coordinates of the beginning and end of the motion that will achieve a (taskspace) goal. Then, an inverse-kinematics approach, e.g., taking the inverse of the Jacobian [5], is used to obtain the start and end coordinates in the robot's jointspace. A trajectory can then be created by linearly interpolating between the start and end joint coordinates. This set of time-stamped coordinates is then sent to a lower-level controller to determine the appropriate power to send to each motor.

1.2 Challenges in two-arm motion planning

Generally, planning motions for two arms is more difficult than planning motions for a single arm. Motions must be planned for twice the number of joints, and this increases the computational complexity. These motions must be coordinated spatially and temporally; this increases the importance of accounting for uncertainties in robot performance and the environment.

My overall contribution through this thesis is a proposed framework for two-handed coordination in robots that addresses these challenges. Subsections 1.2.1 and 1.2.2 below detail these challenges. Section 1.3 provides context for addressing
this challenge, and Section 1.4 provides details of the specific contributions of this work.

1.2.1 Computational difficulty

Planning two-arm motions tends to be more complex than planning one-arm motions at every layer of motion planning: the taskspace, workspace, and jointspace. Examples of difficulties in each of these areas are described below.

Tasks that require two arms typically have more sophisticated taskspaces than those that involve only one arm. In addition to having to plan for a second arm, the taskspaces of each arm may need to be coordinated, as described in the opening of this chapter. For example, in driving nails with a hammer, it is not sufficient to bring nails to their desired location, since the hammer strike must be spatially coordinated with this location.

The workspaces of two-arm robots can also be more complex. Robots with different geometries will have different workspaces. Even if the same robots are used, the workspaces will still differ because the robots must be placed in different locations. Additionally, if the workspaces of the arms overlap, unintended collisions must be avoided.

Consider the jointspace for a robot with j joints, each of which may be at any of N discrete values during m time steps. The jointspace then contains N^{mj} options.^2 Therefore, the jointspace grows exponentially with the number of joints that require simultaneous planning.

^2 If one makes the simplifying assumption that a robot can move from any configuration to any other in one timestep.

1.2.2 Uncertainties in robot execution and environment

The execution of robot motions does not always follow the planned motion. For example, if planning does not account for the physical limitations of the robot's motors and joints, the performed trajectory can differ from the planned trajectory temporally and/or spatially. Additionally, the control software may not be capable of executing the desired trajectory.
Static friction can also alter the trajectory between executions.

The environment may be unreliable. If operating outdoors, weather may affect trajectory performance. Inside a factory, variations in noise or light could affect sensor readings and therefore servoing performance.

Other collaborating robots may perform differently depending on their job queue or available processing power. Collaborating humans may perform differently depending on their current mental state or how much priority they give the task at hand.

Planning motions to affect objects in the taskspace is an even more difficult task; these plans require an understanding of the dynamics of external objects. In some cases, the tool may have dynamics complex enough that a minor variation in trajectory performance creates a large variation in task performance. In other cases, the tool may not be identical between executions; for example, if a previously used wrench was replaced with a different wrench.

Perfect situational knowledge, modelling, and control would be required to entirely overcome these substantial obstacles. However, all approaches should consider these challenges and attempt to mitigate them.

1.3 Why a ping-pong serve?

To assess the performance of two-armed motion-planning schemes, I propose the task of serving a ping-pong ball. In this task, a robot must use one hand to toss a ball in the air and use the other arm to strike it with a ping-pong paddle. For the serve to be legal by International Table Tennis Federation (ITTF) rules [8], the ball must bounce exactly once on the server's side of the table, clear the net, and bounce at least once on the opponent's side of the table.

This task is chosen for its difficulty. It is very sensitive to the timing, positioning, and velocity of each arm and to their coordination. If the paddle is swung too early or too late, the ball will be missed. If the paddle is swung to the wrong location, the ball will be missed.
If the paddle and ball do not have appropriate relative and absolute velocities, the ball will not follow a legal trajectory. Additionally, the physics and strategy of ping-pong remain an open problem. This is demonstrated by the abundance of research investigating, e.g., numerical trajectory prediction of a spinning, bouncing ball [9], human biomechanics during ping-pong [10, 11], and, explicitly, ping-pong serving strategy [12]. Ping-pong serving provides a rich area of study in developing two-handed strategy, motion planning, and actuation.

Several other works have touched on the robot serving problem. In [13], a ball-launching mechanism is described for the "tossing" portion of the task. Wu and Kong, at the Institute of Cyber-Systems and Control, simply drop the ball from one hand at a set height and swing the paddle using the other.^3 However, in both cases workarounds have been used in place of coordinating a throw with a hit.

^3 A peer-reviewed source is not available at the time of writing. However, a video of the robot's serve can be seen at [14].

It is worth noting that ping-pong has a long history as a benchmark in robotics. The task was proposed at least as early as 1984 by John Billingsley [15]. At the time of writing, a search for "robot ping pong" returns over 400 results on the academic article database Google Scholar. A book has been published on the topic [16]. Despite this attention, interesting problems remain, including ball-spin tracking [17], movement generation [18], and much more.

Extending this benchmark to a two-armed task is a natural progression. The starting point is to serve a ball legally. However, competitive ping-pong players will try to serve the ball with:

• Spin, so that its trajectory curves in the air or so that it changes direction when it bounces.
• Varying speeds.
• Varying placement.

These are natural extensions to the challenge.
An additional challenge is decision making, e.g., choosing the best serve given the opponent's past performance and positioning. In this thesis, basic, legal serves are demonstrated.

1.4 Contributions

1.4.1 A conceptual framework for planning two-armed motions

I develop a 4-component framework to build a collection of one-armed trajectories, model their outcomes in taskspace, and combine the trajectories to perform a two-armed task. The framework includes procedures for assessing the relative likelihood of success of different combinations, producing timing offsets for the two one-armed motions, and testing, in advance, for collisions between the arms. This framework is applicable to any task in which:

• Relevant single-arm trajectories can be generated.^4
• The relationship between the taskspace outcomes of each arm's movement and the desired two-armed taskspace goal can be quantified.

^4 The implementation in this thesis is open-loop. As such, the task must be such that these trajectories can be generated before the motion is executed.

1.4.2 An implementation of this framework to demonstrate its capabilities and feasibility

I present my open-source software suite that implements the framework on two WAM (Barrett Technology, LLC, Newton, MA, USA) arms. This software suite includes modules to:

• Capture the taskspace outcome (ping-pong ball and paddle location) using a stereo-vision camera.
• Model taskspace outcomes probabilistically.
• Combine two single-arm trajectories to complete a two-armed task. This includes providing a time offset for the start times of the two trajectories.
• Predict the relative success rates of different combinations.
• Check combinations of single-arm trajectories for collisions.
• Visualize multi-dimensional taskspace outcomes.
• Simultaneously control two WAM robots.
I also provide open-source hardware designs for:

• A laser-cut robot end-effector that can be used to toss a ball.
• A 3D-printed robot end-effector that can be used to grip a ping-pong paddle.
• A ball dispenser that allows a robot to autonomously retrieve balls.^5

^5 Unfortunately, the ball dispenser was not compatible with the hand used for the final analysis. However, it is a novel design, and a hand could be designed to be compatible with it. The design of the ball dispenser is available at the author's BitBucket software repository [1].

1.4.3 Validation of the framework by using it to complete a ping-pong serving task

To show the framework's application to a problem, I demonstrate its capabilities in a ping-pong serving task. I conduct experiments and analyses of the framework's ability to predict which combinations of trajectories are more likely to succeed, as well as its ability to pick the optimal time delay between the start times of the two single-armed trajectories.

1.5 Thesis layout

In Chapter 2, I discuss existing approaches to two-arm motion planning and highlight some of the key remaining challenges. Chapter 3 describes the concept of my proposed framework. This includes the generation of trajectories, the creation of taskspace probability distributions, higher-level decision making, and collision avoidance. The discussion in this chapter is designed to be task-agnostic. Chapter 4 discusses how the framework is applied to the ping-pong serving task. Task-specific hardware and software are also discussed in this chapter. Chapter 5 presents the results of the ping-pong serving task. This includes commentary on the efficiency of the scheme and on task performance. Chapter 6 discusses the conclusions that can be drawn from this thesis, as well as ideas for extensions to the framework.

Chapter 2

Background

Two-handed coordination is crucial to completing many tasks.
In designing control schemes for two-handed robots, one must overcome the challenges of computational difficulty, collision avoidance, and uncertainty in robot performance and environment (Subsection 1.2.2). Researchers have attacked this problem from different angles. This chapter is dedicated to reviewing approaches to two-armed motion planning. The strengths and scope of these approaches are discussed and compared with the proposed approach. The review in this chapter focuses on the aspects of motion planning that are directly relevant to the system developed within this thesis. See Section 1.3 for the origin of the robot ping-pong challenge and approaches to robot ping-pong serving. Necessary mathematical tools are discussed in Chapter 3. For a thorough review of more aspects of two-handed motion planning, including kinematics and control equations, see [19] and [20].

This chapter compares concepts used in previous work to decisions made about the proposed work. While the full description of the framework is in the subsequent chapter, for this discussion it suffices to know:

• Each arm has a collection of trajectories.
• Each trajectory is associated with a taskspace outcome.
• Decisions are made about which trajectories to combine based on probabilistic models of the taskspace outcomes.
• Optimal timing of the two one-armed trajectories is calculated using these probabilistic models.

Closed-form solutions have been devised for certain two-armed tasks. These solutions have a specific set of instructions that guarantee the completion of a task. For example, in [21], an algorithm for completing a two-arm pick-and-place task is described for two SCARA robots. They describe an order for the movement of each joint of each robot that ensures the objects can be picked off a conveyor belt and placed in a bin without the two arms colliding.
In particular, the planning is broken into multiple, lower-dimensional configuration spaces: one for each robot and one for each part on the table. The authors also make several assumptions, such as the robot being clear of the table and obstacles when at maximum height, that allow them to reduce the configuration space of each robot to two dimensions. Closed-form solutions are typically efficient, as they don't require iterative approaches. However, they are problem-specific and robot-specific.

Another common approach to coordinating two arms is a master/slave approach (e.g., [22], [23], and [24]). One arm is the leader, and the other arm follows to reduce loading of the object being manipulated. The authors of one of the earlier works, [3], used this approach for a box-carrying application. Two arms were given the instruction to grasp a box. The master arm was given a trajectory to a final destination. The second arm's path is updated in increments to reduce the loading on the first arm's end-effector. In particular, the position of the second arm, P′, is incremented on the (j+1)th iteration by

$$\Delta P'_{G,j+1} = \begin{bmatrix} F_{xj}/(F_{xj} - F_{x,j-1}) & 0 & 0 \\ 0 & F_{yj}/(F_{yj} - F_{y,j-1}) & 0 \\ 0 & 0 & F_{zj}/(F_{zj} - F_{z,j-1}) \end{bmatrix} \Delta P'_{G,j}$$

where $F_j = (F_{xj}\; F_{yj}\; F_{zj})$ is the force on the first arm's end-effector on iteration j. A trajectory generated by this scheme is depicted in Figure 2.1. This approach has been useful for many tasks; for example, similar methods have been used to allow a humanoid to maintain balance while using both arms to slide an object in a plane [25]. While these techniques are very useful for certain tasks, the force and/or position following algorithms must be designed on a per-task basis, and they tend to work better for tasks where the two arms' motions are similar.

Figure 2.1: A sample two-handed trajectory generated by the master-slave approach in [3].^7

^7 This is Figure 11 in the cited article.
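Written out per axis, this update is a simple rescaling of the previous increment. The sketch below is a hypothetical illustration of that arithmetic only – not the controller from [3] – with function and variable names of my own, and a guard added for the case where a force component has not changed between iterations.

```python
def next_increment(delta_p, f_curr, f_prev):
    """Rescale each Cartesian component of the slave arm's position
    increment by F_j / (F_j - F_{j-1}), following the diagonal update
    rule above. If a force component is unchanged between iterations,
    that axis's increment is kept as-is to avoid dividing by zero."""
    return [dp if fc == fp else dp * fc / (fc - fp)
            for dp, fc, fp in zip(delta_p, f_curr, f_prev)]

# One iteration: a previous increment of 1 mm per axis, with the measured
# end-effector force rising from (1, 1, 1) N to (2, 3, 5) N.
new_dp = next_increment([1.0, 1.0, 1.0], [2.0, 3.0, 5.0], [1.0, 1.0, 1.0])
```

Because the matrix is diagonal, each axis is adjusted independently; for a given force level, an axis whose force is changing rapidly receives a proportionally smaller correction than one whose force is nearly constant.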
This excludes many tasks where the arms have asymmetric roles.

Because this thesis aims to address coping with uncertainties in the robots and the environment, it is worth highlighting two particular papers using the master/slave approach. In [26] and [27], the authors present fuzzy control schemes to compensate for model uncertainties and external disturbances. These papers focus on force and position control of an end-effector for jointly manipulating an object. The fuzzy-control approaches respond to one of the key challenges of two-handed motion: multi-robot systems are inherently high-degree-of-freedom (DOF) systems and require compensation for uncertainty in the environment and in the other robot's performance. The cited works develop force and position controllers for the task of jointly manipulating a single object. In this framework, we cope with uncertainty by building probabilistic models of taskspace outcomes, with the monitored taskspace variables chosen on a per-task basis. This has the advantage of being applicable to a wider variety of tasks, including those with asymmetric roles for each arm.

For more generic two-arm tasks, machine learning is often used to simultaneously plan the motions of both arms. To generate trajectories using machine learning, the joint space is explored until a desirable outcome is achieved. For example, [28] proposes using a co-evolutionary algorithm to plan two-armed motions. Each arm starts with its own set of trajectories that co-evolve with the other arm's trajectories until a suitable, collision-free two-arm trajectory is found. In their implementation, each chromosome consists of alleles representing configurations at each discretized point in time:

$$[\{q_{11}^{(\Delta t, G)}, \ldots, q_{ij}^{(\Delta t, G)}\}, \ldots, \{q_{11}^{((n-2)\Delta t, G)}, \ldots, q_{ij}^{((n-2)\Delta t, G)}\}]$$

where n is the total number of time steps, q_{ij} represents the jth DOF of the ith robot, and G is the current generation.
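To make the size of this representation concrete, the sketch below builds such a chromosome as a nested list – an illustrative layout of my own, not the data structure from [28] – holding one configuration value per robot DOF at every discretized time step.

```python
def make_chromosome(n_steps, n_robots, n_dof, init=0.0):
    """A chromosome indexed as chromosome[time_step][robot][dof],
    holding one joint value per robot DOF at each discretized step."""
    return [[[init] * n_dof for _ in range(n_robots)]
            for _ in range(n_steps)]

def n_parameters(chromosome):
    """Total number of parameters the evolutionary search must tune."""
    return sum(len(joints) for step in chromosome for joints in step)

# Even a coarse plan for two 7-DOF arms over 50 time steps carries
# 50 * 2 * 7 = 700 parameters per chromosome.
chrom = make_chromosome(50, 2, 7)
```

Counting the parameters this way makes explicit why the search space grows so quickly with the number of joints and the temporal resolution of the plan.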
These chromosomes are assessed for fitness using an optimization function based on the number of collisions between the two robots, the total joint-space distance travelled, the total Cartesian distance travelled, and acceleration. This requires O(population size × number of generations × n × number of DOF) operations. While this is only linearly dependent on the DOF term, the required population size and number of generations are implicitly dependent on the DOF: the more complicated the configuration space, the more generations and the larger the population size required to reach a satisfactory trajectory. In fact, the implementation of the algorithm uses the simplified case of two robots working in a plane – using only two DOF each – and the algorithm still required approximately 1000 iterations to converge. This is a general difficulty with computing two-handed trajectories. As mentioned in Subsection 1.2.1, the total number of configurations of a robot, or of several robots, tends to grow exponentially with the number of joints. As a result, machine learning can be slow for planning two-arm trajectories.

In the co-evolution work, each robot's one-armed trajectories are considered separately. However, they still must evolve to be mutually useful – requiring many iterations. In the proposed framework, the trajectories are treated separately until a decision must be made about which combination to use. The proposed approach has the advantage that it is easier to generate the one-armed trajectories, and the disadvantage that it applies only to tasks in which the desired outcome of the two-armed task can be described in terms of the outcomes of the one-armed tasks. However, many useful tasks meet this requirement.

To expedite the search, a technique called teach by demonstration can be used. In teach by demonstration, humans demonstrate a possible solution to provide a starting point for the search. This can be done by physically manipulating the robots (kinesthetic teaching).
For example, in [29], subjects guide a humanoid robot's end-effectors in an effort to complete two tasks: flipping a box using chopsticks and hitting a ball with a pool cue. The authors use Policy Improvement with Path Integrals (PI2) [30] to adjust parameters of the robot's movement. The robot's trajectories are encoded as Dynamic Movement Primitives (DMPs) [31] to reduce the number of parameters needed to describe the motion. DMPs encode movement using a dynamic system with a series of perturbations in a non-linear function. For the pool task, they choose to encode relevant taskspace variables such as the roll, pitch, and yaw of the cue around the bridge. PI2 adds noise to the weights of different components of this non-linear function to explore the trajectory space. While encoding the movements as DMPs decreases the size of the search space, it is still substantially larger, and slower to search, than the space that would be required for machine learning of one-armed trajectories.

The encoding of movements in taskspace variables in [29] is useful for reducing the number of dimensions that need to be explored. It also focuses computational effort on what is, arguably, most important: what happens in the taskspace as a result of the robot's movement. Pastor et al. also take advantage of this parameterization to enforce certain constraints on the generated trajectories – such as the pool cue remaining in the bridge. In [29], they encode intermediate steps in the taskspace (i.e., the pool-cue movement). The proposed framework instead stores trajectories with their final taskspace outcomes (i.e., where the ball goes, rather than how the throwing hand moves). While enforcing constraints is not a priority in this thesis, reducing dimensionality and making decisions using taskspace outcomes are.

One challenge of kinesthetic teaching is that physically manipulating two arms at once can be cumbersome, especially if the timing needs to be precise.
Indeed, in [29], each subject achieved an average of three successes out of twenty attempts for the box-flipping task. A consortium of European universities has begun the X-act project [32] to expand the applications of two-handed robots. They propose teaching two-handed motions via body gestures recognized by a Microsoft (Redmond, WA, USA) Kinect and through 3D mice. These techniques are promising; however, even with a good seed for the machine learning algorithm, exploration of this space will still be significantly more computationally expensive and challenging than exploring the jointspace of a single-arm trajectory with an equally good seed, due to the size of the parameter space.

One approach to combining two one-arm trajectories to accomplish a two-arm task is known as prioritized planning, in which an ordering is assigned to multiple robots [33]. The robot with the highest priority generates a trajectory; then the second robot generates a trajectory that treats the first robot as a dynamic obstacle with a known trajectory. This process can be extended for additional robots. [34] explores prioritizing by total path distance, i.e., the robot with the longest distance from its start to its end position is given highest priority. A variation on prioritized planning is to change the speeds at which trajectories are executed rather than generating new trajectories [35]. This allows, for example, speeding up the trajectory of a robot in a shared workspace, so it will exit the space more quickly, and slowing the trajectory of a robot en route to the shared workspace. The authors of [35] present an optimized method of manipulating the speed that generates, from given trajectories, continuous velocity profiles that respect dynamic constraints.

The prioritized planning works manipulate timing to achieve collision-free trajectories. This can be an effective way to perform given multi-arm spatial trajectories in situations where timing is not critical.
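The speed-manipulation idea can be illustrated with a uniform re-timing of a given trajectory: the spatial path is untouched while every timestamp is scaled. This is only a schematic sketch of my own – the method in [35] instead optimizes a continuous velocity profile subject to dynamic constraints – and the representation of a trajectory as (timestamp, configuration) pairs is assumed for illustration.

```python
def retime(trajectory, speed_factor):
    """Uniformly re-time a trajectory, given as a list of
    (timestamp, configuration) pairs. speed_factor > 1 traverses the
    same spatial path faster; speed_factor < 1 slows it down."""
    return [(t / speed_factor, q) for t, q in trajectory]

# Speed up a robot passing through a shared workspace so it exits sooner,
# without recomputing the spatial path itself.
fast = retime([(0.0, "A"), (1.0, "B"), (2.0, "C")], 2.0)
```

Because only the timestamps change, any collision-free spatial path remains spatially valid; the scheduling problem reduces to choosing the scaling (and, more generally, a time-varying profile) for each robot.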
However, for tasks in which the two-armed motions must have a particular relative timing, this method cannot be used. In the proposed work, timing offsets between movements are calculated to optimize the likelihood of success.

Researchers have also attempted to generalize two-handed motions. Hidden Markov Models (HMMs) have been used to determine key temporal, joint-space, and workspace points in two-handed motions demonstrated by humans [36]. To do this, a series of demonstrations is performed, and key points are extracted based on certain heuristics (e.g., a change in the direction of the tool centre point). These key points are used to train HMMs for the movement. If a key point is representative of states in 4/5 of the demonstrations, it is considered a common key point for the arm. This is done for both hands to create two sets of common key points. If, in every demonstration, a key point for one arm precedes a key point for the other arm, it is assumed to represent a temporal dependency in the two-arm task. These common key points are interpolated to generate trajectories for a simulated robot. The authors generated feasible-looking trajectories for a box pick-and-place task as well as a water-pouring task. This work demonstrates a technique for identifying temporal relationships between two-armed motions demonstrated for a robot, which could be a valuable tool for researchers working on two-handed motion generation. However, this work has not yet been incorporated in motion planning beyond replaying demonstrated trajectories.

Ureche and Billard have studied automated methods to extract relationships between each arm's movement in two-armed tasks [37], [38]. In [37], they determine the key variable of interest at each point in time. Transitions between these variables indicate segmentation points.
For each segment, the active and passive arm in the two-armed task is determined, allowing the authors to switch modelling schemes for the two arms – with the passive arm being modelled with respect to the forces generated by the active arm. In [38], they describe the control of a melon-scooping task in which one arm holds a melon and the other a scoop. The task is performed by a human: she uses one of her arms, wearing a position- and force-sensing glove, to directly perform half of the task, and with the other manipulates a robot arm to complete the other half. Throughout the demonstrations, they record the position of and force on the glove as well as on the robot end-effector, and they optically track the position of the melon. They use Granger causality, a tool that determines whether one variable can predict another, to determine causality between robot pose, wrench, object (scoop) pose, and sensor signals between the two hands. This work also has yet to be incorporated in a motion planning scheme.

This chapter has discussed the scope of different two-handed motion planning schemes. Some of the key challenges that have not been addressed in a single framework include: compatibility with tasks that have asymmetrical movements between the two arms, computational complexity, and optimizing timing for task performance. Chapter 3 presents a novel framework for two-handed coordination that addresses these areas. Additionally, this chapter has introduced several valuable analytical tools for ascertaining important relationships between the motions of two arms. In the concluding chapter, I propose incorporating these techniques with the motion-planning framework presented in this thesis.

Chapter 3
Framework

In this chapter, I present the proposed framework for combining two single-armed trajectories to perform a two-armed task.
This framework is designed to be applicable to a broad range of two-arm tasks: any task for which single-arm trajectories can be generated, and for which the relationship between the taskspace outcomes of each arm's movement and the desired two-armed taskspace goal can be quantified. For such tasks, this framework overcomes some of the challenges discussed in the previous chapter.

The aim of this chapter is to demonstrate different ways in which the framework can be applied; as such, it is intentionally unspecific. For the sake of simplicity, I use low-dimensional, simulated data. However, Chapter 4 includes a full implementation and concrete steps to address the robot ping-pong serving problem, invoking many of the strategies in this chapter. Chapter 5 reports the results of real-robot experiments.

Each of Sections 3.1 through 3.4 addresses one component of the four-component framework for creating two-handed trajectories. Each section explains the role of the component in the framework and introduces relevant notation. Examples unrelated to ping-pong serving are used to demonstrate how this framework could be applied to other tasks. Where relevant, the advantages of the approach taken are highlighted.

Sections 3.1 and 3.2 explain how to build a trajectory library and model the effects of these trajectories on the taskspace. Building this library is a prerequisite to executing two-armed tasks. Section 3.3 describes problem-generic tools for combining trajectories based on their effect on the taskspace. Section 3.4 discusses avoiding collisions between the arms and the environment.
Figure 3.1 depicts how these components fit together and serves as a roadmap for the chapter.

[Figure: a flowchart spanning the stages "Obtain one-armed trajectories", "Capture the effects of each motion in taskspace", "Generate probabilistic models", "Combine two one-armed trajectories", and "Test combinations for collisions".]

Figure 3.1: The two-handed coordination framework. The headers in each box/circle loosely define each of the variables listed; for formal definitions see Sections 3.1, 3.2, 3.3, and 3.4.

3.1 Obtaining one-armed trajectories

As discussed above, the proposed framework combines two one-armed motions to create a two-armed motion. This section discusses the process of obtaining one-armed trajectories.

To start, the two-armed task must be divided into two one-armed tasks.
Each one-armed task will be referred to as a subtask. For example, in a two-armed sweeping task, one subtask might be manipulating a dustpan, the other manipulating a broom. The first subtask will be denoted s_1; the second, s_2.

Multiple one-armed trajectories are generated for each subtask. A trajectory is defined as a time-dependent path that can be followed by a robot, either in task- or joint-space. Often, these take the form of a list of time-stamped joint coordinates. Each trajectory can correspond to different ways to accomplish the same goal, or to different goals. For example, two trajectories could move the dustpan to the same location through different paths, whereas another trajectory might move it to a different location. We define T^i_{s_j} to represent the ith trajectory for subtask j. Typically, each arm performs only one subtask, so the subtask numbering also identifies the arm being used.

Independently generating trajectories for each subtask has several advantages over simultaneously creating trajectories for both subtasks:

• Many methods already exist to create and modify one-armed trajectories. Any method to generate a one-armed trajectory can be used within this framework (e.g., optimal control methods [39] or learning from demonstration [40]).
• The trajectories can be stored in any representation. Some of these representations allow for the generation of multiple additional trajectories. For example, using DMPs allows for generating trajectories that are qualitatively similar, but with different end points [41].
• Trajectories for one subtask can be combined with multiple trajectories for other subtasks.
This creates a quadratic growth in possible combinations when compared to training independent sets of two-armed trajectories.^8
• For any training method that requires physically manipulating the robots, this approach eliminates the challenge of accurately moving two robots simultaneously.

^8 That is, if there are n trajectories for the left arm and n trajectories for the right arm, there are n^2 combinations. Compared to creating n pairs of two-armed trajectories, this is a quadratic growth. In some cases, however, not every left-arm trajectory will be able to be combined with every right-arm trajectory, which results in sub-quadratic growth.

3.2 Probabilistic taskspace trajectory representations

The result of a robot's movement in taskspace can be difficult to calculate and inconsistent. In this framework, instead of predicting the taskspace outcomes of robot movements, they are determined empirically. These outcomes are modelled as probability distributions.

Many tools and objects have complex dynamics that are difficult to calculate. In some cases, a minor variation in initial conditions can result in a drastically different outcome. Even if the dynamics of the object and the initial conditions are perfectly known, calculations must also involve the dynamics and kinematics of the robots. And even assuming an appropriate trajectory can be generated, the robot may not be able to execute it successfully due to limitations in its stiffness, power, sensing accuracy, and control bandwidth.

The exact motion of the robot and the taskspace outcome may vary from execution to execution. The motion of the robot can vary with temperature and with static and dynamic friction. Additionally, environmental factors can affect the outcome: varying lighting could affect sensor servoing, and wind and precipitation could also affect the taskspace outcome.
For example, consider an environment where air currents vary due to doors and windows being opened or closed. An air current could interfere with the trajectory of a thrown object. For one trajectory, the taskspace effects could differ greatly depending on whether or not the currents are strong. However, for another trajectory that moves the arm in such a way that the projectile is shielded from the air current, the performance will be more consistent.

To compensate for these uncertainties, I use a probabilistic model of the taskspace outcome. This approach allows for tracking the overall effect of robot, tool, and environmental inconsistencies.

Modelling outcomes in taskspace has parallels to human motor learning. Studies suggest that focusing on the effect of a movement, rather than on how to execute the movement, results in improved task performance [42]. Considering the outcome in taskspace also allows for higher-level planning. Instead of considering the exact joint angles required to move the arm in a certain way, computational effort can be dedicated to determining the most effective trajectory for completing the task.

To model the effects in taskspace, each trajectory is executed multiple times. The method of capturing the taskspace effects could involve, e.g., accelerometers, position sensors, cameras, or magnetometers. The taskspace outcome for the mth execution of trajectory T^i_{s_j} is denoted ^m O^i_{s_j}. These outcomes can store the taskspace state after the motion has finished executing, or the state throughout the motion.
If only the final state is of interest, a vector can be used:

$${}^m O^i_{s_j} = \begin{bmatrix} {}^{s_j}q_1 \\ {}^{s_j}q_2 \\ \vdots \\ {}^{s_j}q_{d_j} \end{bmatrix}$$

where d_j is the dimensionality of the taskspace associated with subtask s_j and each q_i is a taskspace variable.

If one is interested in how the state changes during and after the movement, from the initial time t_0 to the final time t_f, this can be represented as a matrix:

$${}^m O^i_{s_j} = \begin{bmatrix} {}^{s_j}q_1(t_0) & {}^{s_j}q_1(t_1) & \cdots & {}^{s_j}q_1(t_f) \\ {}^{s_j}q_2(t_0) & {}^{s_j}q_2(t_1) & \cdots & {}^{s_j}q_2(t_f) \\ \vdots & \vdots & & \vdots \\ {}^{s_j}q_{d_j-1}(t_0) & {}^{s_j}q_{d_j-1}(t_1) & \cdots & {}^{s_j}q_{d_j-1}(t_f) \\ t_0 & t_1 & \cdots & t_f \end{bmatrix}. \qquad (3.1)$$

A probabilistic model is built for the outcome of each trajectory: the ^m O^i_{s_j}, for each m, are used to train the probabilistic model for the effect of that trajectory.^9 This model is denoted O^i_{s_j}. This is done for all i, such that each trajectory T^i_{s_j} has an associated probabilistic model, O^i_{s_j}. This model can be queried to determine the probability of trajectory T^i_{s_j} yielding taskspace state

$${}^{s_j}q = \begin{bmatrix} {}^{s_j}q_1 \\ {}^{s_j}q_2 \\ \vdots \\ {}^{s_j}q_{d_j} \end{bmatrix}.$$

This probability is denoted O^i_{s_j}(^{s_j}q).

^9 The model need not be fitted to the raw captured data. The captured data can be modified or used to calculate additional data before a probabilistic model is created. For example, if a tool's position is captured over time, one could pre-compute the velocity by taking the derivative. The velocity could then be modelled in addition to the position. Another example would be transforming the reference frame of captured Cartesian coordinates.

In this work, I use full-covariance GMMs to model the outcomes. A GMM is a weighted sum of arbitrary-dimensional Gaussians, and each taskspace variable, ^{s_j}q_i, can be represented by one of these dimensions.
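As a concrete sketch of this fitting-and-querying step, the snippet below trains a two-component, one-dimensional mixture on repeated outcome samples using expectation-maximization and then queries the resulting density. It is a toy stand-in for the full-covariance, multi-dimensional models used in this thesis (where a library implementation would normally be used); all names are my own.

```python
import math

def fit_gmm_1d(samples, n_iter=50):
    """Fit a 2-component, 1-D Gaussian mixture with expectation-
    maximization. Returns (weights, means, variances)."""
    xs = sorted(samples)
    n = len(xs)
    # Crude initialization: split the sorted samples in half.
    lo, hi = xs[: n // 2], xs[n // 2:]
    mu = [sum(lo) / len(lo), sum(hi) / len(hi)]
    var = [1.0, 1.0]
    w = [0.5, 0.5]

    def pdf(x, m, v):
        return math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample
        # (floored to guard against floating-point underflow).
        resp = []
        for x in xs:
            p = [max(w[k] * pdf(x, mu[k], var[k]), 1e-300) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means, and variances.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / n
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, xs)) / nk, 1e-6)
    return w, mu, var

def gmm_density(x, w, mu, var):
    """Query the fitted mixture's density at taskspace state x."""
    return sum(w[k] * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
               / math.sqrt(2 * math.pi * var[k]) for k in range(2))

# Example: model repeated, bimodal taskspace outcomes of one trajectory.
weights, means, variances = fit_gmm_1d([0.0, 0.1, -0.1, 5.0, 4.9, 5.1] * 10)
```

Queried at a candidate taskspace state, `gmm_density` plays the role of O^i_{s_j}(^{s_j}q) for a one-dimensional taskspace.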
An expectation-maximization (EM) algorithm [43] is used to fit the means and covariances of the Gaussians.

Because GMMs are sums of distributions (as opposed to a single, e.g., normal distribution), they can be used to represent multi-modal distributions.^10 The trajectory of a thrown object in an environment with a sporadic air current is an example of a multi-modal outcome: the projectile will either veer in the direction of the current, or it will maintain its normal trajectory. Probabilistic models that do not allow for bi-modal distributions may try to average the two outcomes, resulting in a trajectory between the normal and the offset trajectories – something that is not at all representative of either actual taskspace outcome.

^10 These multi-modal distributions have found other uses in robotics. For example, in [44], Chan uses them to model different pathways for a manipulator to take through a nonconvex environment.

3.3 Combining two single-armed trajectories

Sections 3.1 and 3.2 discuss how to build a library of one-armed trajectories (T^i_{s_j}) for different subtasks (s_j) with associated probabilistic models (O^i_{s_j}). This section discusses how to combine two one-arm trajectories to complete a two-armed task. In doing this, we must consider maximizing the probability of success, the quality of the outcome, and timing.

The exact approach to combining trajectories is inherently task-dependent. However, many problems can be organized in a way that is compatible with this framework. First, Subsection 3.3.1 explains the desired form for the problem. Subsections 3.3.2 through 3.3.8 introduce tools that can be used to phrase the problem in the desired form. Subsection 3.3.7 offers advice on coordinating the relative and absolute timing of the two arms' motions.
Finally, Subsection 3.3.8 discusses ranking combinations of single-arm trajectories.

3.3.1 Goal

The goal of this framework is to combine two single-arm trajectories into a two-arm trajectory in order to achieve some goal. Due to the uncertainties and inconsistencies in performing a two-armed task, this is done by modelling the effects of the trajectories probabilistically.

Ultimately, the goal is to rank combinations of single-arm trajectories in order of the desirability of the taskspace outcome and the likelihood that the combination will produce that outcome. Using the earlier notation, the aim is to choose trajectories T^i_{s_1} and T^{i'}_{s_2} based on optimizations of their respective taskspace models, O^i_{s_1} and O^{i'}_{s_2}, or of a combined space, O^{i,i'}_{s_1 s_2}.

Criteria for the desirability of a taskspace outcome can include:

• Areas/points reached in each arm's individual taskspace.
• Areas/points reached in a combined taskspace.
• The probability of reaching these areas.

3.3.2 Tool - Mode finding

A mode of a probability density function refers to a local or global maximum. If all the modes are found, one can compare their probabilities in order to determine the global maximum. See Figure 3.2.

In mixture models, finding the modes is generally a numerical problem, and different optimization techniques are used for different probability models. However, in this section I use the notation mode() to indicate an operation that finds all modes and their corresponding probabilities.

3.3.3 Tool - Marginal distribution

In statistics, a marginal distribution is the probability distribution of a subset of the variables of the original distribution. Consider a distribution of two variables, O^i_{s_j}(^{s_j}q_1, ^{s_j}q_2). The marginal distribution, O^i_{s_j}(^{s_j}q_1), is a representation of the distribution's dependence on ^{s_j}q_1. An example is illustrated in Figure 3.3.

When one wishes to ignore a variable in the taskspace, it can be marginalized out of the distribution.
For example, one might capture the velocity of a tool as well as its position but wish to use only the position information in the optimization process for a particular task. Marginal distributions can also be used in determining when to start a trajectory; this is explained in Subsection 3.3.7.

Figure 3.2: A 2-component, 1-dimensional GMM. The black crosses indicate the modes of the distribution.

Figure 3.3: A marginal distribution. The black dots are 1000 samples from a 3-component, 2-dimensional GMM. The green line indicates the marginal probability distribution. Note that the marginal distribution is one-dimensional, as it represents only the dependence of the distribution on ^{s_j}q_1.

Figure 3.4: A conditional distribution. The black dots are 1000 samples from a 3-component, 2-dimensional GMM. The red line indicates the conditional probability distribution. This represents the probability for different values of ^{s_j}q_1 given that ^{s_j}q_2 = −3.8.

3.3.4 Tool - Conditional distribution

In statistics, a conditional distribution is used to find the probability of an event when it is known that the outcome lies in some part of the sample space. For example, consider a probability distribution that is a function of two variables, O^i_{s_j}(^{s_j}q_1, ^{s_j}q_2). If one wants to determine the probability distribution over ^{s_j}q_1 for a given value, C, of ^{s_j}q_2, one may consider the conditional distribution O^i_{s_j}(^{s_j}q_1 | ^{s_j}q_2 = C). An example is illustrated in Figure 3.4.^12

^12 This figure is adapted from the documentation of the software [45].

Conditional distributions are useful for considering the distribution of other state variables when one or several are constrained. For example, consider a robot pouring a drink from a bottle into a stationary cup. The taskspace might consist of bottle velocity and bottle position.
Presumably, one wants the bottle velocity to be zero to initiate the pour. One could then consider the conditional distribution O^i_{s_j}(^{s_j}q_bp | ^{s_j}q_bv = 0), where q_bp is the variable representing the bottle position and q_bv is the variable representing the bottle velocity, to examine where the bottle is most likely to be when it comes to rest in a particular trajectory.

Conditional distributions can also be used in determining when to start a trajectory; this is explained in Subsection 3.3.7.

3.3.5 Tool - Joint distribution

A joint distribution represents the probability distribution of multiple events occurring. If these events are uncorrelated, the joint distribution is the product of each event's probability distribution. An example is illustrated in Figure 3.5. Note that the mode of the joint distribution corresponds with the most likely value of ^{s_j}q_1 to be picked simultaneously from both distributions.

Creating the joint distribution of two taskspaces allows for optimizing relationships between the two subtask outcomes. Consider a nail-hammering task: one arm swings a hammer (and its taskspace is the location of the face of the hammer) while the other arm holds a nail (and its taskspace is the location of the head of the nail). The joint probability distribution would represent overlaps between the location of the hammer and the location of the nail.

Figure 3.5: A joint probability distribution. The joint distribution is the product of each individual distribution.

3.3.6 Tool - External modifications

In some cases, the taskspaces are easy to manipulate in a predictable way. Consider the nail-hammering example, but with the hammer-holding arm on a mobile platform. One can find the most likely location of the hammer for trajectory i using mode(O^i_{s_1}) and the most likely location of the nail for trajectory i′ using mode(O^{i'}_{s_2}).
If the locations of the two maxima differ, the mobile platform could be moved until they agree.

3.3.7 Tool - Time shifting

To achieve a different starting time for the two-handed motions, one can choose to launch the robots earlier or later. One can also use relative time shifts between the two robots to optimize the probability of success.

For example, consider the task of passing an item from one hand to the other ("regrasping").^13 For the sake of clarity in the accompanying figures, we will confine the movement of each hand to the x axis.^14 In this discussion we assume the existence of a separate re-grasping routine that can be executed when the hands are in close proximity and travelling slowly.

Let s_1 represent the Cartesian position of the left hand and s_2 the Cartesian position of the right hand. Consider trajectories T^1_{s_1} and T^1_{s_2} with probabilistic models O^1_{s_1}(x, y, z, t) and O^1_{s_2}(x, y, z, t). Sample data has been generated and plotted in Figure 3.6. Note that, unaltered, the trajectories intersect at t ≈ 1.6 s. Observing the tangents of the position-time curves, we can obtain approximate velocities. The right hand has its lowest velocity from t ≈ 0.0 s to t ≈ 0.5 s and is moving faster at the point of intersection. The left hand is at its slowest from t ≈ 3.0 s to t ≈ 3.5 s (after the intersection). To increase the reliability of the re-grasp, it makes sense to execute the transfer when the hands are at their lowest velocities.^15

To find the spatial location where the trajectories are most likely to overlap, one can marginalize time out of each distribution and multiply the resulting distributions. These are plotted in Figure 3.7. One can see that the two hands are most likely to overlap at x ≈ 0.78 dm (i.e., mode(O^1_{s_1}(x) · O^1_{s_2}(x)) ≈ 0.78 dm).
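For intuition, both steps of this procedure – taking the mode of the product of the spatial marginals, and then conditioning each model on that location to obtain a most-likely time – have closed forms when each model is approximated by a single Gaussian rather than a full GMM. The sketch below uses that simplification, with illustrative numbers of my own rather than the fitted models behind the figures.

```python
def product_mode(m1, v1, m2, v2):
    """Mode of the product of two 1-D Gaussian densities:
    the precision-weighted average of their means."""
    return (m1 / v1 + m2 / v2) / (1.0 / v1 + 1.0 / v2)

def conditional_mean_t(mu_x, mu_t, var_x, cov_xt, x_star):
    """Mean (and mode) of t | x = x_star for a 2-D Gaussian over (x, t)."""
    return mu_t + (cov_xt / var_x) * (x_star - mu_x)

# Illustrative spatial marginals: left hand x ~ N(0.80, 0.02) dm,
# right hand x ~ N(0.76, 0.02) dm.
x_star = product_mode(0.80, 0.02, 0.76, 0.02)   # most likely meeting point

# Illustrative (x, t) parameters for each hand's model:
# (mean_x, mean_t, var_x, cov_xt).
t_left = conditional_mean_t(0.80, 3.3, 0.02, 0.001, x_star)
t_right = conditional_mean_t(0.76, 0.2, 0.02, -0.001, x_star)
delay = t_left - t_right   # delay the right hand's start by this much
```

With these stand-in numbers the meeting point comes out at x ≈ 0.78 dm and the delay at roughly 3.1 s. For genuine GMMs the product and the conditionals are again mixtures, so mode() must be evaluated numerically, but the structure of the calculation is identical.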
This isconsistent with our previous observations – this is the location where each arm istravelling the slowest.We now strive to determine the optimal time delay for the two trajectories in13The authors of [46] consider a gradient-descent approach to the inverse kinematics of both armsto solve this problem.14This technique is used in Chapter 4 in order to time the movements of the ping-pong serve. Inthat case, all three spatial dimensions and time are considered.15The relative velocity between the end-effectors is the relevant quantity, but for simplicity, weconsider velocity of each end-effector relative to a fixed frame. Considering the relationship of twoarms’ state-spaces is dealt with in Chapter 4.353.3. Combining two single-armed trajectoriesFigure 3.6: The spatial coordinates of the left and right hands for an example re-grasping task.363.3. Combining two single-armed trajectoriesFigure 3.7: Marginal probability density functions for each hand for the regrasptask (top and middle) and joint probability density (bottom).373.4. Collision Avoidanceorder for them to intersect at x ≈ 0.78dm. To do this, we consider the conditionaldistributions O1s1(t|x = 0.78dm) and O1s2(t|x = 0.78dm). These distributions areplotted in Figure 3.8. We find that T 1s1 is most likely to be at this coordinate whent ≈ 3.32s and T 2s1 is most likely to be at this coordinate when t ≈ 0.18s. This isagain consistent with our initial observations. So the right hand should be delayedfrom the left hand by ∆t ≈ 3.32s− 0.18s = 3.14s to optimize the probability ofthe two hands meeting.The agreement of these results with the intuition from Figure 3.7 demonstratesthat this is a powerful technique. This approach provides an algorithmic processfor determining delays.3.3.8 RankingThese tools can all be used to generate lists of combinations of single-armed tra-jectories. 
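As a minimal sketch of such list generation (the models, names, and numbers below are invented for illustration; the real framework uses 4-D GMMs over x, y, z, and t), each throw model can be paired with each swing model and scored by the peak of the product of their spatial marginals:

```python
import numpy as np

def gauss(v, m, s2):
    """1-D normal density with mean m and variance s2."""
    return np.exp(-0.5 * (v - m) ** 2 / s2) / np.sqrt(2.0 * np.pi * s2)

def mixture(v, comps):
    """comps: list of (weight, mean, variance) for a 1-D Gaussian mixture."""
    return sum(w * gauss(v, m, s2) for w, m, s2 in comps)

def score_pair(throw_x, swing_x, grid):
    """Overlap score and meeting point for one throw/swing pair:
    the height and location of the mode of the product of x-marginals."""
    joint = mixture(grid, throw_x) * mixture(grid, swing_x)
    k = int(np.argmax(joint))
    return float(joint[k]), float(grid[k])

# Hypothetical x-marginals (time already marginalized out) for two throws
# and two swings; all parameters are made up for the sketch.
throws = {"T1": [(1.0, 0.40, 0.004)], "T2": [(1.0, 0.55, 0.004)]}
swings = {"S1": [(1.0, 0.42, 0.002)], "S2": [(1.0, 0.80, 0.002)]}

grid = np.linspace(0.0, 1.0, 5001)
ranked = sorted(((score_pair(tx, sx, grid)[0], t, s)
                 for t, tx in throws.items() for s, sx in swings.items()),
                reverse=True)
best_score, best_throw, best_swing = ranked[0]
print(best_throw, best_swing)  # the pair whose taskspaces overlap most
```

Other ranking criteria from the text (proximity to a desired point, total time, distance travelled) would simply replace or augment the score in the sort key.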
Once this list is generated, the pairs should be ranked. For example, the procedure in Subsection 3.3.7 could be iterated for every combination of trajectories, forming a list of regrasping points and associated probabilities of success. These could be ranked based on proximity to a desired regrasping point, total amount of time required, or total distance travelled by each arm.

3.4 Collision Avoidance

Because coordination is determined only by taskspace compatibility, the robot arms could collide with each other or with their environment. In this framework, we check the top-ranked combinations for collisions before executing them.

Figure 3.8: Conditional (temporal) probability densities of the left and right hands for an example regrasping task.

Checking can be accomplished in many ways. Proposed schemes include neural networks [47, 48], GPU-based numerical algorithms [49], OBB trees [50], and physics simulators (see Section 4.4), among many others.

If the top-ranked combination is found to produce a collision, the next highest-ranked pair can be tested. This can be repeated until a non-colliding pair is found.

3.5 Summary

This chapter described the framework as a whole and several tools that can be used with it. This discussion was intentionally general and only considered simplified cases. However, the following chapter will demonstrate a concrete, real-robot application of the framework. Many of the tools discussed here are implemented and applied to the challenging case of a ping-pong serve.

Chapter 4

Implementation

The previous chapter described the concept of the framework and touched on various example applications. This chapter gives a complete implementation of the framework to achieve the challenging goal of coordinating two robot arms to serve a ping-pong ball. Sections 4.1 through 4.4 explain the specific implementation of the concepts described in the corresponding sections of Chapter 3.
The methods described in this chapter are used to execute the experiments described in Chapter 5. The final section provides a summary of these methods.

The ping-pong serving task is divided into two subtasks: s1, throwing the ping-pong ball, and s2, swinging the paddle to hit the thrown ball. A laser-cut acrylic ball-holder is used for the throwing task; see Figure 4.1. A 3D-printed paddle holder with a modified ping-pong paddle is used for the second subtask. (The rubber is stripped from one face of the paddle. This is done to increase the sound of contact between the ball and paddle, to improve accuracy for the measurements reported in Chapter 5.) The paddle is depicted in Figure 4.2.

All software developed by the author and described here is available in a Bitbucket repository [1]. For each piece of software referenced, the ROS package (if applicable) and the source filename are noted in parentheses.

Figure 4.1: Laser-cut acrylic ball-throwing hand. The joints were reinforced with J-B Weld (J-B Weld Company, Sulphur Springs, TX, USA) and the ball-holding cup was shimmed to a loose fit using cardboard and construction paper.

Figure 4.2: 3D-printed ping-pong paddle holder. The joints were reinforced using hot-melt glue. The 4 "fingers" that encompass the paddle handle are tightened using a strip of Velcro.

4.1 Obtaining one-armed trajectories

The control framework performs the two-armed ping-pong service task by choosing a set of two optimal single-armed trajectories from a library. This section presents a procedure for generating the one-armed trajectories in this library.

4.1.1 Generating robot motions

This section describes how to obtain joint trajectories for the robot arms.
The workflow is depicted in Figure 4.3.

For both s1 and s2, one-armed trajectories are generated by hand-tuning start and end joint-coordinates using my WAM control software (Robot Operating System (ROS) [51] package: wam_control, source files: ros_one_wam.cpp and ros_one_wam_right.cpp) and a GUI front end (ROS package: serve_ping_pong_balls, source file: gui_generate_traj.py). The GUI front end includes dials for the start and end joint-coordinates and a dial for the total trajectory time. After setting the coordinates, the user presses a button in the GUI to create a trajectory and send it to the WAM control software via ROS. The trajectory is a quintic interpolation [52] between the start and end joint-coordinates over the total trajectory time (ROS package: g_c, source file: quintic_trajectories.py). To construct the trajectory, the coefficients a0, a1, a2, a3, a4, and a5 of the quintic polynomial for each joint, q(t) = a0 + a1*t + a2*t^2 + a3*t^3 + a4*t^4 + a5*t^5, where t is time, are set to constrain the initial and final velocity and acceleration to zero. First, the throwing trajectories are created. Using the GUI, the user tunes the quintic trajectory parameters until satisfied that the ball launches in an arc that intersects possible paths for the paddle. Subsequently, paddle trajectories are generated that intersect the ball trajectories.

This tuning process is similar to coaching practices in tennis, where the athlete will practice, and the coach will tune, through observation and correction, the individual throwing and striking performances of the athlete with repeated drills before bringing the two together to practice the combined two-arm service motions.

Figure 4.3: Workflow for generating one-armed trajectories. The diamond represents the user's best guess of whether the throw/swing is in range of the swing/throw from the other arm. The decision represented by the diamond does not need to be perfect, as the algorithm can cope with extraneous trajectories. The number of trajectories, I and I′ for subtasks s1 and s2, respectively, varies for the different experiments performed in Chapter 5.

The WAM control software is written in C++ and launches a ROS node. The node listens for a message that contains a joint trajectory in the form of time-stamped joint-coordinates. Once it receives a trajectory, it encodes it as a time-dependent spline (this encoding is required by, and done using, the WAM native servoing library, Libbarrett [53]) and executes the trajectory while using internal joint sensors to record the achieved joint positions. The achieved joint positions are not necessarily equal to the commanded positions; the achieved positions are recorded for the sake of collision checking, as described in Section 4.4.

4.1.2 Capturing the effects of robot motions in taskspace

In order to make decisions about which trajectories to combine to achieve the two-armed goal, the framework must have information about the effect of each one-armed trajectory on the taskspace. This section discusses the approach used to capture the effects of robot trajectories on the taskspace. The process is depicted in Figure 4.4.

A Bumblebee (Point Grey Research, Vancouver, BC, Canada) stereo-vision camera is used to capture the effects in taskspace. A ROS node (ROS package: bumblebee_original, source file: grab_and_stream.cpp; modified from the original software [54] to allow capturing from a stereo camera and to publish the images via ROS) is used to capture images using the FireWire-camera protocol, libdc1394 [55], and to stream them as a ROS topic. These images are read and rectified using a second ROS node (ROS package: stereo_image_proc, source file: stereo_image_proc.cpp; this node is included in my repository for convenience but is a verbatim copy of [56]). A ROS node to process the rectified images is launched (ROS package: camera_control, source file: time.py).

Figure 4.4: Workflow for capturing taskspace outcomes. This procedure is repeated for every i for both s1 and s2.

The image-processing node awaits a request specifying a start time, a recording length, and a filename. The node checks the global ROS time; once the time has reached the message time, video is recorded in RAM for the specified duration. The node then applies HSV filtering to the rectified video frames in order to track the orange colour of the ball. The disparity of the centre location of this colour blob between the left and right images is used to calculate the 3D position of the ball. The video, with visual indicators of the detected ball location, is then written to disk under the specified file name. The calculated Cartesian ball coordinates are also written to disk as a comma-separated-values (CSV) file. Having the node await a start time allows it to complete all set-up work in advance. This improves consistency in start time and allows for global synchronization with the other nodes involved in the serving task.

An additional node for robot control is launched (ROS package: wam_control, source file: ros_two_wam.cpp). This robot-control node differs from the one discussed in Subsection 4.1.1 in two key ways. First, this node can control two robots simultaneously. This is the same software used to control the robots during the task execution discussed in Chapter 5. Using identical software helps limit the variance between the captured taskspace outcome and the taskspace performance during task execution. Secondly, this node uses the same scheme for time-synchronization as the image-processing node.
It awaits a message containing two trajectories (a dummy, stationary trajectory is generated for the other arm) and a start time. The node completes all set-up work and then, once the global ROS clock reaches the start time, executes the trajectory.

To capture the motion of the paddle swing for s2, a ping-pong ball is attached to the center of the paddle using Velcro.

After separately running M and M′ executions of, respectively, the I and I′ different robot trajectories for s1 and s2, we have a collection of CSV files of taskspace (time-stamped Cartesian) trajectories representing {^1O^1_{s1}, ..., ^M O^1_{s1}}, ..., {^1O^I_{s1}, ..., ^M O^I_{s1}} and {^1O^1_{s2}, ..., ^{M′} O^1_{s2}}, ..., {^1O^{I′}_{s2}, ..., ^{M′} O^{I′}_{s2}}.

4.2 Probabilistic taskspace trajectory representations

In order to compensate for variance in the taskspace effects of a particular trajectory, the outcomes must be modelled probabilistically. This section explains how to create probabilistic models representing the outcomes of each robot trajectory from the collection of individual outcomes. The process is depicted in Figure 4.5.

A GMM, O^i_{s1}, is created from the outcomes of each arm trajectory associated with a ball throw, T^i_{s1}. Similarly, a GMM, O^{i′}_{s2}, is created from the outcomes of each arm trajectory associated with a swing, T^{i′}_{s2}. To do so, all the CSV files representing taskspace outcomes are loaded into a Python script. For the throw trajectories, this script first truncates each file to the section of values representing the downward travel of the ball. The script then fits the GMM using the EM algorithm implemented in Scikit-learn [57].
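A minimal sketch of this fitting step is below. The synthetic roll-out data is invented for illustration, and the sketch uses the modern `GaussianMixture` API of scikit-learn (the thesis predates it and may have used the older `sklearn.mixture.GMM` interface); the 5-component, four-dimensional (x, y, z, t) model matches the description in the text.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic stand-in for the recorded outcomes: M = 30 roll-outs of one
# throw, each a short (x, y, z, t) track with a little measurement noise.
def simulated_rollout(n_frames=20, dt=1 / 30):
    t = np.arange(n_frames) * dt
    x = 0.5 + 0.2 * t + rng.normal(0, 0.01, n_frames)
    y = 0.1 * t + rng.normal(0, 0.01, n_frames)
    z = 1.0 - 4.9 * t**2 + rng.normal(0, 0.01, n_frames)  # falling ball
    return np.column_stack([x, y, z, t])

samples = np.vstack([simulated_rollout() for _ in range(30)])

# Fit a 5-component, 4-D mixture with EM, as described in the text.
gmm = GaussianMixture(n_components=5, covariance_type="full",
                      random_state=0).fit(samples)
print(gmm.means_.shape)  # one 4-D mean per component
```

In the real pipeline the samples come from the camera's CSV files rather than a simulator, but the fit call is the same.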
The GMM consists of 5 four-dimensional Gaussians (one dimension for each Cartesian direction and one for time). The number of components was chosen by experimentation, with the aim of keeping it low while still capturing the trajectory shape: when two GMMs are multiplied, as is done in finding a joint probability, the product GMM has as many Gaussians as the product of its constituents' Gaussian counts, which increases the computational difficulty of mode finding substantially.

(Implementation notes: the CSV files are loaded by ROS package: serve_ping_pong_balls, source file: gui_controller.py; the truncation is performed by ROS package: camera_control, source file: reference_frame_transform/sustained_downward_vz.py. The downward section is determined by establishing the velocity of the ball by taking the derivative of the ball position found using the camera, projecting the velocity onto a unit vector known to correspond to the negative z-direction in the world frame, determining its sign, and taking the longest temporal stretch with positive signs.)

Figure 4.5: Create Gaussian Mixture Models from taskspace outcomes. This procedure is repeated to create a probabilistic model for every trajectory.

4.3 Choosing trajectory combinations

Having quantified each available single-arm trajectory's effect on taskspace, the goal is now to use this information to combine single-arm trajectories with the appropriate timing to achieve a taskspace goal – in this case, performing a legal ping-pong serve. The procedure is demonstrated in Figure 4.6.

Figure 4.6: Pairing trajectories and ranking them by their effect on the combined taskspace.

The immediate goal is to identify the throwing and paddle-swinging trajectories that effect an overlap between the ping-pong ball and the paddle.

To do so, we marginalize out time from each distribution (see Subsection 3.3.3; ROS package: gmm_center, source file: matlab_gmm_tools.py, which calls the Matlab script [58]), create the joint probability of each pair of time-free models O^i_{s1} and O^{i′}_{s2} for each i and i′ (see Subsection 3.3.5; ROS package: gmm_center, source file: gmm_python_tools.py), and find their modes (see Subsection 3.3.2; ROS package: gmm_center, source file: interface_mode_findcpp.cpp – I ported this code from the Matlab script [58] to C++ to improve performance; the C++ runs 10x faster than the Matlab script on the computer used for the experiment described in Chapter 5). After this step, we have acquired a list of spatial coordinates for the modes of each swing and throw combination, mode(O^i_{s1} * O^{i′}_{s2}) for each i and i′.

For each spatial mode, we find the optimal offset, Δt, in the starting times of the two trajectories, as described in Subsection 3.3.7. To calculate the probability of obtaining contact between the ball and paddle, one can use a Monte Carlo approach. (The numbers produced here do not correspond to actual probabilities; however, they are an indicator of relative probability of success as compared to other trajectory combinations. See Chapter 5 for experimental results.) First, time can be discretized into intervals of dt. For each interval, the probabilistic models for the throw and swing (with the appropriate time offset) can be sampled L times. For each pair of samples, the Cartesian distance can be calculated. If this distance is smaller than the width of the ping-pong paddle, that sample pair can be considered a hit. Let H represent the number of samples that are hits. Then the probability of a collision in time interval t_i is given by p^{t_i}_{collision} = H/L, and the probability of no collision in that interval is p^{t_i}_{no collision} = 1 − p^{t_i}_{collision}. Finally, to calculate the probability of a collision in any time window, we compute p_{collision} = 1 − p^{t_0}_{no collision} * p^{t_1}_{no collision} * ... . This leaves us with a probability of the ball and paddle intersecting at, approximately, that spatial mode when the robot trajectories are executed with the calculated time offset.

(Note: for small time windows and large numbers of samples, this method is very time consuming. However, one can quickly calculate a non-normalized approximation to these values using the equation p^{unnormalized}_{collision} = ∫∫∫∫ O^{i′}_{s2}(x, y, z, t) * O^i_{s1}(x, y, z, t + Δt) dx dy dz dt, taken over all x, y, z, and t. This equation can be efficiently computed using the technique for integration of Gaussians presented in [59]. This approach has been tested on the experimental data presented in the following chapter as well as on simulated data; it yields different probabilities, but the same probability ranking as the Monte Carlo approach. Both approaches are implemented in ROS package: gmm_center, source file: integration_techniques.py.)

4.4 Collision checking

Having pairs of trajectories and a time offset with which to perform the two-armed task, we must ensure that the arms will not collide with each other or with obstacles in the environment. This is done using the robot simulation tool GazeboSim [60]. For a flowchart of this procedure, see Figure 4.7.

The GazeboSim world file created for this purpose contains the two WAM arms, a ping-pong table, and the desks the robots are mounted on. This simulation is controlled via a ROS node. This node mirrors the interface of the node used to control the robot arms discussed in Subsection 4.1.2.

The trajectory pairs (and corresponding time delays) produced as described in the last section are ranked according to probability. In order of highest to lowest probability, the trajectory pairs are run in the simulation environment. Any trajectory combination that results in an unintended collision is dismissed, and the next best choice is investigated.
The first pair of throw and swing trajectories found not to collide then represents the combination with the highest probability of producing a paddle-ball hit, and may be executed on the actual robots.

(Implementation notes: GazeboSim world file – ROS package: caris_ping_pong_setup_gazebo; WAM arms – ROS packages: left_wam_description and right_wam_description; ping-pong table – ROS package: ping_pong_table; desks – ROS packages: robot_island_wood_bridge and robot_island_table; simulation-control node – ROS package: g_c, source file: gazebo_control.py. The simulations use the achieved joint trajectories rather than the commanded joint trajectories.)

Figure 4.7: Collision-checking procedure. Each pair of trajectories is tested with its optimal relative time delay until a collision-free pair is found.

4.5 Summary of contributions

This chapter presents an implementation of the framework explained in Chapter 3. This includes a novel algorithm for combining two single-arm trajectories, novel hardware, and novel software. In this section, I summarize the contributions discussed in this chapter.

In the introduction to the chapter, I introduced novel hardware designs for robot attachments: a laser-cut, acrylic ball thrower that attaches to a WAM robot, and a 3D-printed paddle holder that attaches to a WAM robot.

In Section 4.1, I introduced a novel GUI for tuning the parameters of a quintic trajectory for the WAM robot. I also discussed novel software to control a WAM arm using ROS. The section also describes novel software to capture images from a Bumblebee camera and publish them via a ROS topic, and software to perform HSV filtering and track the ball location. The section also discusses a scheme for synchronizing the camera and robot start times.

Section 4.2 describes a front end for controlling existing GMM libraries and a method for capturing only the downward portion of the ball trajectory.

Section 4.3 explains my software implementation of my novel algorithm for combining two-arm trajectories.
This algorithm includes methods for estimating the best pair of single-arm trajectories to perform a two-armed task, as well as the optimal time delay between the two single-arm trajectory starts.

Finally, Section 4.4 explains my novel plugins to check two WAM trajectories for collisions in simulation. (The downward-travel truncation of Section 4.2 is important because a ping-pong serve is only legal if the paddle strikes the ball while it is travelling downwards.)

Chapter 5

Experiments and Demonstrations

In the previous chapters, several challenges in two-arm motion planning were discussed. In this chapter, I make specific hypotheses about the framework's capability of addressing these challenges and experimentally test them.

5.1 Experiment 1: Ability to choose the best trajectory combination

For each pair of s1 (throwing) and s2 (paddle-swinging) trajectories, the algorithm provides a probability of ball-paddle contact. As discussed in Section 4.3, this value is found using a Monte Carlo technique. (In particular, these values were used: dt = 0.01 s, L = 1000000; the integration was conducted over −55 m < x, y, z < 55 m and within 0.1 s of the temporal mode. The spatial limits were chosen to encompass all recorded position values. The temporal limits were chosen to encompass the time with the highest probability of overlap while keeping computation times manageable.) This provides a measure of how likely the ball and paddle are to intersect for a given throw, swing, and time delay. In this experiment, we assess whether these probabilities can be used to rank the swing/throw combinations in terms of likelihood of a ball/paddle hit. Subsection 5.1.4 provides a discussion of why only the ordering of these probabilities is meaningful.

5.1.1 Hypothesis

Claim 1.
The algorithm outputs a probability for each swing/throw combination to successfully strike a ball; the ordering of these probabilities corresponds to the actual ordering of swing/throw combination success.

5.1.2 Method

A throw and a swing trajectory were created such that the ball's trajectory intersected, approximately, the paddle's center point.

The ball's trajectory was changed by adjusting both the start and end coordinates of the throwing robot's joint 6 together. By changing this joint value from 0.17 rad to 0.37 rad in increments of 0.05 rad, the intersection of the ball and paddle ranged from one side of the paddle to the other. These trajectories were chosen because of the paddle's oval shape: changing the mean horizontal point of contact brings the ball closer to the edge of the paddle, simultaneously bringing it closer to one side, the top, and the bottom. Therefore, this array of contact points corresponds to an array of allowed variance while still obtaining a hit.

For each value of joint 6, a probabilistic model of the throw was trained using M = 30 executions. Additionally, for each run, the swing model was retrained using M′ = 30 runs. (The same swing trajectory was used for each throw; the retraining was done to compensate for any possible deviation in the calibration of the robot.) From these models, a temporal mode was found. A total of n = 30 executions of the combination were performed with the algorithm-recommended time delay.

The number of hits was counted by an experimenter positioned close to the contact point, using both audio and visual cues. (The rubber was stripped off the contact face of the paddle to increase the sound of a hit.)

5.1.3 Results and analysis

The results are documented in Table 5.1. Additionally, for each pair of throwing and swinging trajectories, the standard deviations of the positions of the paddle and ball at the temporal mode were calculated. (Specifically, the temporal modes of the throw and swing trajectories refer to the time when the ball and the paddle, respectively, are most likely to be at the spatial mode where the two trajectories are most likely to intersect. Refer to Subsection 3.3.7 for details.)

Table 5.1: The number of successful hits for each different throwing trajectory (Experiment 1).

    Joint 6 (rad):                          0.17        0.22        0.27        0.32        0.37
    Contacts between ball and paddle
      (out of 30 attempts) (c):             0           18          29          29          24
    Algorithm-predicted probability (p):    4.00x10^-6  9.90x10^-5  8.55x10^-3  2.24x10^-1  2.77x10^-1
    Time delay (Δt) (s):                    -0.561      -0.511      -0.547      -0.499      -0.570

Table 5.2: Standard deviation of the ball and paddle positions at the temporal mode (Experiment 1).

    Joint 6 (rad):              0.17                   0.22                   0.27                   0.32                   0.37
    Std. dev. of paddle
      location (x,y,z) (σp) (m): (0.053, 0.029, 0.093)  (0.077, 0.058, 0.127)  (0.068, 0.051, 0.112)  (0.060, 0.042, 0.100)  (0.028, 0.060, 0.037)
    Std. dev. of ball
      location (x,y,z) (σb) (m): (0.096, 0.176, 0.080)  (0.076, 0.177, 0.117)  (0.067, 0.177, 0.079)  (0.880, 0.745, 1.27)   (0.114, 0.387, 0.121)
    Magnitude of σb (m):        0.216                  0.226                  0.205                  1.712                  0.421
    Magnitude of σp (m):        0.076                  0.111                  0.160                  0.140                  0.123

Let the number of contacts be denoted c, and the number of attempts be denoted n (= 30). Let the true (experimental) probability be denoted r = c/n. For our analysis, we wish to find the uncertainty in the experimentally determined probabilities. This problem is equivalent to finding the true probability and variance of a binomial distribution.
The standard deviation is given by σ = √(r(1−r)/n). Multiplying the standard deviation by a z-score of 2 gives the 95% confidence window. Let the uncertainty in the experimental probability be denoted Δr. Then

    Δr = 2√(r(1−r)/n).

(Note: this formula fails for r = 0. In that case, a more conservative estimate of the error is used, and r is set to r = 1/30.) These results are plotted in Figure 5.1.

5.1.4 Discussion

The results show that the algorithm successfully predicted the order of swing-throw combinations from most likely to least likely to achieve ball-paddle contact for four out of five combinations (with the exception being within experimental uncertainty of being correct).

With the exception of the anomalous point, the two swing-throw combinations with the highest predicted probabilities (2.24x10^-1 and 2.77x10^-1) had the best performance with, identically, c = 29 contacts out of 30 tries. It is possible that there is a threshold probability beyond which performance no longer improves.

The probability was generated using a very simple metric – overlapping points in taskspace. It is possible that a more complex metric – perhaps including paddle orientation and relative and absolute velocities – could eliminate the one inaccurate prediction.

Figure 5.1: Experimentally-determined probability of success vs. algorithm-predicted probability of success. Each datapoint corresponds to M = 30 and M′ = 30 training trials for the throw and swing, respectively, and n = 30 attempted serves. The annotations on the plot indicate the angle of the robot's joint 6. The error bars indicate a 95.45% confidence interval. See the text for the derivation of the uncertainty.

Some of the algorithm-predicted probabilities are several orders of magnitude smaller than the experimentally-determined probabilities. To determine the cause of this, the standard deviations of the ball and paddle locations were calculated at the temporal modes – the times when the ball and paddle were expected to make contact. These values are given in Table 5.2. One can note that the standard deviations were very large. The smallest standard deviation in ball position is 0.205 m – a distance approximately 10 times the radius of the ping-pong ball. The greatest standard deviation is over 85 times the radius. This indicates that the data captured by the camera was noisy, so the GMM had very widely spatio-temporally distributed data, yielding only a small probability of sampled values overlapping. Indeed, this can be seen from the ball and paddle position data plotted in Figures 5.2 through 5.6.

Figure 5.2: Cartesian locations of the ball and paddle (Experiment 1) for j6 = 0.17 radians, captured during all M = 30 executions. The blue line indicates the time when the paddle and ball are most likely to intersect.

Figure 5.3: Cartesian locations of the ball and paddle (Experiment 1) for j6 = 0.22 radians. Data represent all M = 30 executions. The blue line indicates the time when the paddle and ball are most likely to intersect.

Figure 5.4: Cartesian locations of the ball and paddle (Experiment 1) for j6 = 0.27 radians. Data represent all M = 30 executions. The blue line indicates the time when the paddle and ball are most likely to intersect.

Figure 5.5: Cartesian locations of the ball and paddle (Experiment 1) for j6 = 0.32 radians. Data represent all M = 30 executions. The blue line indicates the time when the paddle and ball are most likely to intersect.

Figure 5.6: Cartesian locations of the ball and paddle (Experiment 1) for j6 = 0.37 radians. Data represent all M = 30 executions.
The blue line indicates the time when the paddle and ball are most likely to intersect.

5.1.5 Conclusion

These results support the claim that the algorithm-generated probabilities are useful for predicting relative performance between throw/swing trajectory combinations. The algorithm was successful in the rank prediction for 4 out of 5 pairs, despite extensive noise in the taskspace data captured by the camera.

5.2 Experiment 2: Timing

For each pair of s1 and s2 trajectories, the algorithm calculates a relative timing offset using the corresponding probabilistic models. The swing execution is delayed from the throw by this amount of time. This is calculated as discussed in Subsection 3.3.7. In this experiment, we assess the performance of the algorithm-recommended offsets.

5.2.1 Hypothesis

Claim 2. The algorithm can determine time offsets between the start of the left-arm trajectory and the right-arm trajectory that allow it to perform two-handed tasks.

5.2.2 Method

The aim of this experiment was to determine how close the algorithm-recommended time delay between the throw and the swing trajectories is to the optimal delay. In particular, the aim was to determine the time window (within ±0.0075 s) that would yield satisfactory performance (defined as achieving c ≥ 25 ball-paddle contacts out of n = 30 attempts) and compare this window to the algorithm-recommended time.

The following procedure is used to determine the upper bound of the window. (Two notes: first, the T^5_{s1} and T^5_{s2} combination – results in Table 5.3 – has more data points than the procedure dictates; the investigation of this combination was done while still characterizing the system, and extra data points were taken to ensure the time-window precision and the definition of satisfactory performance were appropriate for the ping-pong serving task. Second, the procedure is similar to binary search on an infinite-length array in the field of data structures and algorithms. Binary search is notoriously difficult to implement correctly; in a study of 25 computer science textbooks, only 5 implementations were found to be correct [61]. The procedure here is meant to outline an efficient technique to find the time bounds of satisfactory performance, but it is not guaranteed to be the most efficient or to cover all corner cases. The important part is to ensure the bounds are found within 0.0075 s, which has been done for this experiment.)

From the time recommended by the algorithm, T_o, a search (each trial consisted of adjusting the time delay and attempting 30 serves) was conducted in increments of 0.03 s until 25 or more ball/paddle contacts were achieved (c ≥ 25). The search was continued with the same increment until performance dropped to c < 25. Then another search was conducted between the last point before performance fell (c < 25) and the last point with c ≥ 25, using an increment of 0.015 s. This process yields two points within 0.015 s of each other straddling the point where performance became less than satisfactory (c < 25). The mean of the two numbers indicates the upper edge of the timing window, with an uncertainty of ±0.0075 s. The procedure was repeated in reverse to determine the lower time bound of the satisfactory-performance window.

The data are reported in Subsection 5.2.3 in the order they were collected. The symbol Δt is used in the tables and figures to denote the offset from the algorithm-recommended time. As noted above, the first swing/throw pair's data was collected using a different procedure.

5.2.3 Results and analysis

The experiment was carried out for 5 different swing and throw combinations. These results are shown in Tables 5.3 through 5.7 and plotted in Figure 5.7.
Table 5.8 summarizes the satisfactory-performance time window for each pair of throwing and swinging trajectories (see Footnote 49).

In three of the five pairs (T1_sj, T4_sj, and T5_sj) the suggested time is within the satisfactory-performance window. The best-performing pair, T5_sj, was optimal to within less than the accuracy of the experiment: t5_m = -0.0008 s is less than the smallest increment used to determine the window size (0.015 s). The worst-performing pair, T2_sj, varied from the optimal time by t2_m = 0.0750 s.

[49] If a measurement was made that yielded c = 25, the time for that measurement is used instead of an average.

Table 5.3: The number of hits for different offsets from the algorithm-recommended time using trajectories T5_s1 and T5_s2. The data points are reported in the order they were collected. See Footnote 47 for more information on this dataset.

Delay from recommended time (∆t) (s) | Number of contacts (out of 30) (c)
 0.000 | 30
-0.001 | 30
-0.003 | 28
-0.005 | 30
-0.007 | 30
-0.015 | 29
-0.031 | 27
-0.063 |  7
-0.047 | 14
-0.039 | 21
 0.030 | 27
 0.060 |  8
 0.045 | 21
 0.037 | 22

Table 5.4: The number of hits for different offsets from the algorithm-recommended time using trajectories T4_s1 and T4_s2. The data points are reported in the order they were collected.

Delay from recommended time (∆t) (s) | Number of contacts (out of 30) (c)
 0.000 | 30
-0.030 | 22
-0.015 | 29
 0.030 | 30
 0.060 | 28
 0.090 | 11
 0.075 | 23

Table 5.5: The number of hits for different offsets from the algorithm-recommended time using trajectories T3_s1 and T3_s2. The data points are reported in the order they were collected.

Delay from recommended time (∆t) (s) | Number of contacts (out of 30) (c)
 0.000 | 10
-0.030 | 27
-0.060 | 30
-0.090 | 29
-0.120 | 12
-0.015 | 25
-0.105 | 27

Table 5.6: The number of hits for different offsets from the algorithm-recommended time using trajectories T2_s1 and T2_s2.
The data points are reported in the order they were collected.

Delay from recommended time (∆t) (s) | Number of contacts (out of 30) (c)
 0.000 |  0
 0.030 | 15
 0.060 | 30
 0.090 | 30
 0.120 | 17
 0.105 | 28
 0.045 | 23

Table 5.7: The number of hits for different offsets from the algorithm-recommended time using trajectories T1_s1 and T1_s2. The data points are reported in the order they were collected.

Delay from recommended time (∆t) (s) | Number of contacts (out of 30) (c)
 0.000 | 27
 0.030 | 30
 0.060 | 30
 0.090 |  4
 0.075 | 24
-0.030 |  0
-0.015 |  6

Figure 5.7: Success vs. time shift from the algorithm-recommended time delay between throwing the ball and swinging the paddle. Vertical lines are placed at 0.0 s to indicate the algorithm-recommended time; if the time were exactly optimal, the peak in performance would occur at this x-value. Horizontal lines are placed at 0.83, indicating c = 25 successful hits out of n = 30 attempts.

Table 5.8: The lower and upper bounds of the satisfactory-performance windows for different combinations of swing and throw trajectories. These values are given as offsets from the algorithm-recommended time.

Trajectory pair  | Lower bound (t_l^i) (s) | Upper bound (t_u^i) (s) | Middle of window (t_m^i) (s) (average of t_l^i and t_u^i)
T1_s1 and T1_s2  | -0.0075 |  0.0675 |  0.0300
T2_s1 and T2_s2  |  0.0375 |  0.1125 |  0.0750
T3_s1 and T3_s2  | -0.1175 | -0.0150 | -0.0663
T4_s1 and T4_s2  | -0.0225 |  0.0675 |  0.0225
T5_s1 and T5_s2  | -0.0350 |  0.0335 | -0.0008
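The bounds in Table 5.8 follow from the trial data by taking midpoints between the outermost satisfactory points (c ≥ 25) and their failing neighbours. A minimal sketch, shown here on the Table 5.3 data (the helper name `window_bounds` is mine; in the experiment the points were of course collected sequentially, not all at once):

```python
def window_bounds(trials, threshold=25):
    """trials: list of (dt, contacts) pairs from serve attempts.
    Returns (lower, upper): midpoints between the outermost satisfactory
    points and the adjacent unsatisfactory points."""
    pts = sorted(trials)
    good = [i for i, (_, c) in enumerate(pts) if c >= threshold]
    lo, hi = good[0], good[-1]
    lower = (pts[lo - 1][0] + pts[lo][0]) / 2.0
    upper = (pts[hi][0] + pts[hi + 1][0]) / 2.0
    return lower, upper

# Table 5.3 data (T5_s1 / T5_s2):
t5 = [(0.000, 30), (-0.001, 30), (-0.003, 28), (-0.005, 30), (-0.007, 30),
      (-0.015, 29), (-0.031, 27), (-0.063, 7), (-0.047, 14), (-0.039, 21),
      (0.030, 27), (0.060, 8), (0.045, 21), (0.037, 22)]
lower, upper = window_bounds(t5)   # ~(-0.0350, 0.0335), as in Table 5.8

# Mean absolute window middle over the five pairs (Table 5.8), in seconds
# and in 30 Hz camera frames:
middles = [0.0300, 0.0750, -0.0663, 0.0225, -0.0008]
mean_abs = sum(abs(m) for m in middles) / len(middles)   # ~0.0389 s
frames = mean_abs * 30.0                                 # ~1.17 frames
```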
This suggests that harnessing multipledata sets and creating a GMM allowed for performance on the order of the sensorcapabilities.5.2.5 ConclusionThe algorithm performed well at recommending timings. In three throw/swingcombinations, the algorithm’s time was more precise than sensor data from a singletrial would have allowed. This indicates that using a probabilistic model improvesperformance.705.3. Demonstration: Ping-pong serve5.3 Demonstration: Ping-pong serveExecuting a ping-pong serve is a difficult task requiring precise timing. Throughoutthis work, this task has been used as a benchmark for the algorithm. In the previoustwo experiments, the task was used to demonstrate the ability of the algorithmto choose the best available trajectory combinations and to choose a correct timeoffset. Those two experiments measured the success at achieving a ball and paddlecontact. This leaves the question: Can this framework be used to accomplish taskswith more sophisticated objectives? To illustrate that it can, the algorithm was usedto perform a legal ping-pong serve.5.3.1 HypothesisClaim 3. This method of generating trajectories and the algorithms ability to com-bine two one-armed trajectories can be used to perform a legal ping-pong serve.5.3.2 MethodOne trajectory for each subtask was trained such that the throw caused the ball tointersect the paddle path. A probabilistic model was trained for each.A Stiga (Eskilstuna, Sweden) ping-pong table with regulation Joola (Godram-stein, Germany) net was set up in front of the robot arms (Figure 5.8).The algorithm-recommended offset was used to perform a serve. Upon seeingthe trajectory of the serve, a correction of 0.2 radians was added to the start andend coordinates of the swinging robot’s joint 7. This adjustment made the servefollow a legal trajectory. Thirty serves were attempted.715.3. 
Figure 5.8: Setup for the ping-pong serve demonstration.

5.3.3 Results

Of the 30 serves attempted, all yielded contact between the ball and the paddle. In 25 of the serve attempts, the serve was legal: the throw was approximately vertical and rose at least 16 cm; the ball was struck while falling, bounced once on the robot's side of the table, cleared the net, and landed on the opposite side of the table. This is an 83% success rate.

5.3.4 Discussion

With a minor adjustment to one of the initial trajectories, the algorithm was capable of coordinating a legal ping-pong serve. In this case, the experimenter made the adjustment. This tweak was specific to the ping-pong task, though the algorithm was designed to have broader application. If one were to focus solely on the ping-pong serving task, such single-armed trajectory tweaks could be incorporated into the design, and the parameters could be tuned automatically by considering, for example, the orientation of the face of the paddle, the relative velocity between the paddle and the ball, etc. The framework has no dependency on how one-armed trajectories are generated or modified; therefore, more sophisticated, automated methods than manually tuning quintic start- and end-points could be used for different tasks.

5.3.5 Conclusion

The algorithm correctly identified the timing offset to perform a legal ping-pong serve 25 times out of 30 attempts.
This demonstrated the ability of the algorithm to be used for tasks with more complicated taskspace objectives.

5.4 Experiments 1 & 2 and service demonstration: discussion and conclusion

In this chapter, we have demonstrated that the algorithm is capable of predicting the relative performance of different combinations of trajectories, accurately determining the timing offset between two single-armed trajectories, and using these abilities to perform a two-armed task.

Additionally, it was shown that the algorithm could perform well with limited data. In one case, the recommended timing was more precise than the frame rate of the camera; on average, the precision of the recommended timing was on the order of the frame rate of the camera.

The algorithm achieved these successes in recommending timing offsets and predicting the relative success of throw/swing combinations despite significant noise in the taskspace data captured by the camera. This demonstrates the usefulness of modelling taskspace outcomes probabilistically and of using a collection of outcomes to make predictions about future outcomes.

While the benchmark used in this thesis is performing a ping-pong serve, the algorithm is agnostic to the task being performed. The optimization was based solely on the relationship between two taskspace states, with no knowledge of the task being performed or of the physical process that generated the data: time-dependent lists of taskspace coordinates were generated by a stereovision camera and given to the timing and probability-predicting routines without any knowledge of how the data sets were produced. These results suggest that the algorithm could be used for a variety of two-armed tasks.

Chapter 6: Conclusion

6.1 Summary of contributions

This thesis proposed a four-component framework for coordinating two-handed motions for robots.
Chapter 2 presented a comparison of the concepts behind the proposed framework and existing schemes. I advocated for particular design choices that allow for reduced computational cost, the ability to optimize the timing of the coordinated movement, and compensation for uncertainties. Throughout this work, this framework has been developed and implemented. Ping-pong serving tasks were used as experiments to show that the proposed approaches are viable in practice. The experimental results were generally positive and are invoked in this section to support claims made about the contributions of this work. The following section presents a summary of the experiments and the results.

This framework combined two one-armed trajectories to perform a two-armed task. This is more computationally efficient than planning two-armed trajectories, even when good seeds are provided to machine-learning algorithms using methods such as teach-by-demonstration. A key requirement of this approach is the ability to decide which one-armed trajectories should be combined to achieve the two-armed task. The framework's ability to predict the relative success of different combinations of ball-throwing and paddle-swinging trajectories for the serving task was demonstrated.

The framework leveraged the relative timing of the two one-armed motions to achieve optimal two-handed task performance. This contrasts with the approach taken in prioritized planning, which manipulates relative timing to avoid collisions. The implementation achieved strong success in manipulating the timing of the two motions. It was ensured, via novel plugins written for robotic simulation software, that no trajectories that would collide were used together.
The framework's ability to optimize timing was demonstrated by comparing performance using the algorithm-recommended times against other times in their neighbourhoods.

Uncertainty in robot execution, the environment, and the tools being used poses a significant challenge in coordinating two robot arms. For this reason, each single-armed robot trajectory was paired with a probabilistic representation of its taskspace outcome. Considering the final effect of each arm on the taskspace removes the need to model particular sources of uncertainty. This contrasts with, for example, fuzzy-logic force/position controllers, which are developed for a particular task and compensate for particular errors. The framework's ability to compensate for uncertainty was demonstrated through an analysis showing that the successes in the ping-pong serving task were achieved despite large variances in the data used for decision making.

The implementation includes an open-source software suite for capturing taskspace outcomes using a stereovision camera, modelling taskspace outcomes probabilistically, predicting the relative success of different combinations of one-armed trajectories, optimizing the relative start times of one-armed trajectories, visualizing multidimensional taskspace outcomes, and simultaneously controlling two Barrett WAM robots. While some of the lower-level software is task- and robot-specific, the probabilistic modelling, success prediction, and timing generation are designed to be task-agnostic.

Overall, the contributions of this thesis are a novel conceptual framework that circumvents the difficulties of planning two-armed trajectories and compensates for uncertainties in robot performance, tool dynamics, and environment, together with an open-source software suite allowing others to use the framework.
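The pairing of a trajectory with a probabilistic taskspace model can be sketched minimally by fitting one Gaussian per camera frame to repeated physical roll-outs. This single-Gaussian simplification and the function name are mine; the thesis fits full Gaussian Mixture Models using scikit-learn [57].

```python
import statistics

def fit_rollout_model(rollouts):
    """rollouts: repeated physical executions of ONE single-arm trajectory,
    each a list of taskspace samples (e.g. ball height per camera frame).
    Returns one (mean, std) per frame -- a single-Gaussian stand-in for
    the per-trajectory mixture models used in the thesis."""
    return [(statistics.mean(frame), statistics.stdev(frame))
            for frame in zip(*rollouts)]

# Three hypothetical roll-outs of a throw, sampled at three frames:
rollouts = [[0.0, 1.0, 2.0],
            [0.2, 1.2, 2.2],
            [-0.2, 0.8, 1.8]]
model = fit_rollout_model(rollouts)   # ~[(0.0, 0.2), (1.0, 0.2), (2.0, 0.2)]
```

Two such models, one per arm, are what the timing and success-prediction routines consume; no model of the physics that produced the samples is needed.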
The software suite and the framework were experimentally tested.

6.2 Summary of results

The framework's performance was tested using a ping-pong serving task. In particular, tests were carried out to assess two important components of combining single-armed trajectories: predicting the relative performance of different one-armed trajectory combinations, and generating the relative timing offsets between the start times of the one-armed trajectories.

To test the framework's ability to combine one-armed trajectories, the algorithm-recommended timing offsets were compared to the optimal time offsets. It was found that the difference between the optimal and algorithm-recommended times was approximately equal to the time between frames of the 30 Hz camera. In other words, the optimization was accurate to the limit of the ball-tracking sensor.

The algorithm's ability to predict the relative success of different combinations of one-armed trajectories was also measured. It correctly predicted the relative success of four out of five combinations (with the fifth combination within uncertainty of being correct).

These results were achieved despite significant noise in the data recorded by the camera: the recorded positions of the ball and paddle varied from trial to trial by significant amounts (standard deviations greater than 1 m and 0.16 m, respectively). The algorithm's performance in spite of such noisy data shows the framework's ability to leverage probabilistic models to achieve good performance despite noise in the collected data.

It should be noted that the tasks were carried out without computing the physics of the interaction.
This offers support to the idea that the algorithm would be compatible with other two-armed tasks.

Finally, using this controller-architecture approach, the first-ever ITTF-legal ping-pong serve by a robot was demonstrated (see Footnote 50). For the serving demonstration, the ball was thrown in the air by one robot, contacted the paddle held by the other robot during its descent, bounced once on the server's side, cleared the net, and bounced on the opposing side of the regulation-size table. The ability to perform such a spatio-temporally sensitive task is indicative of the framework's ability to address other challenging two-handed coordination tasks.

[50] To the author's knowledge, this is the first legal serve. Robots have served ping-pong balls using workarounds, as described in Section 1.3.

6.3 Future work

One of the limitations of the framework is that it requires a human to specify the desired relationship between the taskspace states of the two hands. Having to find and supply this information limits the applications of the framework to tasks where the taskspace relationship has a known, analytic form. There are many tasks that do meet this requirement – for example, hammering a nail, using a fork and knife, and buttoning a shirt. However, this requirement could make it difficult for an end-user to teach the robot to perform different types of tasks. It would be desirable to automate this process.

As discussed in Chapter 2, Ureche and Billard [37, 38] as well as Asfour et al. [36] have studied automated methods to extract the relationship between each arm's movement in two-armed tasks. In future work, such methods for determining the relationship between single-arm taskspaces could be incorporated into the framework. This could allow the framework to be applied to tasks in which the relationship is not easy to determine. Additionally, it could allow non-technical end-users to teach new abilities to a two-armed robot system.
This would allow the framework to be applied to many new tasks and new robots.

6.4 Ping-pong serving challenge

In my closing remarks, I would like to encourage other roboticists to engage in the robot ping-pong serve challenge, either through improvements on the work presented here or through entirely novel frameworks.

Much progress has been made by researchers answering John Billingsley's 1984 call [15] to create hardware and algorithms that allow a robot to return a ping-pong ball. I believe that through the serving challenge we can achieve great results and further the goal of having robots help humans in both functional and fun endeavours.

Bibliography

[1] B. A. Blumer, "Two-handed coordination in robots repository," 2016. [Online]. Available: https://bitbucket.org/Benjamin_Blumer/two_handed_coordination_in_robots

[2] B. A. Blumer, "Two robots working together to serve a ping-pong ball." [Online]. Available: https://www.youtube.com/watch?v=0kEe-8C2xec

[3] E. Nakano, S. Ozaki, T. Ishida, and I. Kato, "Cooperational control of the anthropomorphous manipulator MELARM," in Proc. 4th Int. Symp. Industrial Robots, 1974, pp. 251–260.

[4] D. Reynolds, "Gaussian mixture models," Encyclopedia of Biometrics, pp. 659–663, 2009.

[5] M. W. Spong, S. Hutchinson, and M. Vidyasagar, Robot Modeling and Control. Wiley, 2006.

[6] P. Pastor, H. Hoffmann, T. Asfour, and S. Schaal, "Learning and generalization of motor skills by learning from demonstration," in IEEE International Conference on Robotics and Automation, 2009, pp. 763–768.

[7] J. Maitin-Shepard, M. Cusumano-Towner, J. Lei, and P. Abbeel, "Cloth grasp point detection based on multiple-view geometric cues with application to robotic towel folding," in IEEE International Conference on Robotics and Automation, 2010, pp. 2308–2315.

[8] "International Table Tennis Federation Handbook." [Online]. Available: http://www.ittf.com

[9] Y. Huang, D. Xu, M. Tan, and H.
Su, "Trajectory prediction of spinning ball for ping-pong player robot," in IEEE/RSJ International Conference on Intelligent Robots and Systems, 2011, pp. 3434–3439.

[10] Y. Iino and T. Kojima, "Kinematics of table tennis topspin forehands: effects of performance level and ball spin," Journal of Sports Sciences, vol. 27, no. 12, pp. 1311–1321, 2009. [Online]. Available: http://dx.doi.org/10.1080/02640410903264458

[11] Y. Iino, T. Mori, and T. Kojima, "Contributions of upper limb rotations to racket velocity in table tennis backhands against topspin and backspin," Journal of Sports Sciences, vol. 26, no. 3, pp. 287–293, 2008. [Online]. Available: http://dx.doi.org/10.1080/02640410701501705

[12] J.-l. Li, X. Zhao, and C.-h. Zhang, "Influence of new rules on the development of table tennis techniques [J]," Journal of Beijing University of Physical Education, vol. 10, p. 043, 2005. [Online]. Available: http://en.cnki.com.cn/Article_en/CJFDTOTAL-BJTD200510043.htm

[13] G. Zhang, C. Wang, B. Li, and H. Zheng, "Motion planning of a dual manipulator system for table tennis," in Intelligent Autonomous Systems 12, ser. Advances in Intelligent Systems and Computing, S. Lee, H. Cho, K.-J. Yoon, and J. Lee, Eds. Springer Berlin Heidelberg, 2013, no. 194, pp. 335–344. [Online]. Available: http://link.springer.com/chapter/10.1007/978-3-642-33932-5_32

[14] YouTube user: Fo3oX, "Robot plays table tennis (vs robot, vs human)." [Online]. Available: https://www.youtube.com/watch?v=t_qN3dgYGqE

[15] D. F. Salisbury, "Challenging robot makers to build a ping-pong champion," Christian Science Monitor, 1984. [Online]. Available: http://www.csmonitor.com/1984/0706/070635.html

[16] R. L. Andersson, A Robot Ping-Pong Player: Experiments in Real-Time Intelligent Control. Cambridge, Mass.: The MIT Press, 2003.

[17] J. Glover and L. P. Kaelbling, "Tracking 3-D rotations with the quaternion Bingham filter," 2013. [Online]. Available: http://dspace.mit.edu/handle/1721.1/78248

[18] J. Kober, K. Mülling, O.
Krömer, C. H. Lampert, B. Schölkopf, and J. Peters, "Movement templates for learning of hitting and batting," in IEEE International Conference on Robotics and Automation, 2010, pp. 853–858.

[19] F. Caccavale and M. Uchiyama, "Cooperative manipulation," in Springer Handbook of Robotics, B. Siciliano and O. Khatib, Eds. Springer International Publishing, 2016, pp. 989–1006, DOI: 10.1007/978-3-319-32552-1_39. [Online]. Available: http://link.springer.com/chapter/10.1007/978-3-319-32552-1_39

[20] C. Smith, Y. Karayiannidis, L. Nalpantidis, X. Gratal, P. Qi, D. V. Dimarogonas, and D. Kragic, "Dual arm manipulation—A survey," Robotics and Autonomous Systems, vol. 60, no. 10, pp. 1340–1353, Oct. 2012. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S092188901200108X

[21] T.-Y. Li and J.-C. Latombe, "On-line manipulation planning for two robot arms in a dynamic environment," The International Journal of Robotics Research, vol. 16, no. 2, pp. 144–167, 1997. [Online]. Available: http://ijr.sagepub.com/cgi/content/abstract/16/2/144

[22] K. R. Horspool, "Cartesian-space adaptive control for dual-arm force control using industrial robots," Ph.D. dissertation, The University of New Mexico, United States – New Mexico, 2003.

[23] G. Chen, W. Chang, and P. Zhang, "Object-oriented dynamics and hybrid position/force control for dual-arm symmetric coordination," in The International Conference on Control and Automation, 2002, pp. 135–135.

[24] A. S. alYahmadi and T. C. Hsia, "Internal force-based impedance control of dual-arm manipulation of flexible objects," in IEEE International Conference on Robotics and Automation, vol. 4, 2000, pp. 3296–3301.

[25] S. Nozawa, Y. Kakiuchi, K. Okada, and M. Inaba, "Controlling the planar motion of a heavy object by pushing with a humanoid robot using dual-arm force control," in IEEE International Conference on Robotics and Automation, 2012, pp. 1428–1435.

[26] K.-Y. Lian, C.-S. Chiu, and P.
Liu, "Semi-decentralized adaptive fuzzy control for cooperative multirobot systems with H∞ motion/internal force tracking performance," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 32, no. 3, pp. 269–280, 2002.

[27] W. Gueaieb, F. Karray, and S. Al-Sharhan, "A robust adaptive fuzzy position/force control scheme for cooperative manipulators," IEEE Transactions on Control Systems Technology, vol. 11, no. 4, pp. 516–528, 2003.

[28] P. Curkovic, B. Jerbic, and T. Stipancic, "Co-evolutionary algorithm for motion planning of two industrial robots with overlapping workspaces," International Journal of Advanced Robotic Systems, p. 1, 2013. [Online]. Available: http://www.intechopen.com/journals/international_journal_of_advanced_robotic_systems/co-evolutionary-algorithm-for-motion-planning-of-two-industrial-robots-with-overlapping-workspaces

[29] P. Pastor, M. Kalakrishnan, S. Chitta, E. Theodorou, and S. Schaal, "Skill learning and task outcome prediction for manipulation," in IEEE International Conference on Robotics and Automation, 2011, pp. 3828–3834.

[30] E. Theodorou, J. Buchli, and S. Schaal, "Reinforcement learning of motor skills in high dimensions: A path integral approach," in IEEE International Conference on Robotics and Automation, 2010, pp. 2397–2403.

[31] A. Ijspeert, J. Nakanishi, and S. Schaal, "Movement imitation with nonlinear dynamical systems in humanoid robots," in IEEE International Conference on Robotics and Automation, vol. 2, 2002, pp. 1398–1403.

[32] Community Research and Development Information Service, "Cognitive human robot cooperation and safety – Design." [Online]. Available: http://cordis.europa.eu/project/rcn/105917_en.html

[33] M. Erdmann and T. Lozano-Perez, "On multiple moving objects," in IEEE International Conference on Robotics and Automation, vol. 3, 1986, pp. 1419–1424.

[34] J. van den Berg and M.
Overmars, "Prioritized motion planning for multiple robots," in IEEE/RSJ International Conference on Intelligent Robots and Systems, 2005, pp. 430–435.

[35] J. Peng and S. Akella, "Coordinating multiple robots with kinodynamic constraints along specified paths," The International Journal of Robotics Research, vol. 24, no. 4, pp. 295–310, 2005. [Online]. Available: http://ijr.sagepub.com/content/24/4/295

[36] T. Asfour, F. Gyarfas, P. Azad, and R. Dillmann, "Imitation learning of dual-arm manipulation tasks in humanoid robots," in IEEE-RAS International Conference on Humanoid Robots, 2006, pp. 40–47.

[37] A.-L. Pais and A. Billard, "Encoding bi-manual coordination patterns from human demonstrations," in ACM/IEEE International Conference on Human-Robot Interaction, 2014, pp. 264–265. [Online]. Available: http://doi.acm.org/10.1145/2559636.2559844

[38] A.-L. Pais Ureche and A. Billard, "Learning bimanual coordinated tasks from human demonstrations," in ACM/IEEE International Conference on Human-Robot Interaction Extended Abstracts, 2015, pp. 141–142. [Online]. Available: http://doi.acm.org/10.1145/2701973.2702007

[39] E. Theodorou, J. Buchli, and S. Schaal, "A generalized path integral control approach to reinforcement learning," J. Mach. Learn. Res., vol. 11, pp. 3137–3181, 2010. [Online]. Available: http://dl.acm.org/citation.cfm?id=1756006.1953033

[40] B. D. Argall, S. Chernova, M. Veloso, and B. Browning, "A survey of robot learning from demonstration," Robotics and Autonomous Systems, vol. 57, no. 5, pp. 469–483, 2009. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0921889008001772

[41] H. Hoffmann, P. Pastor, D.-H. Park, and S. Schaal, "Biologically-inspired dynamical systems for movement generation: Automatic real-time goal adaptation and obstacle avoidance," in IEEE International Conference on Robotics and Automation, 2009, pp. 2587–2592.

[42] G.
Wulf, "Attentional focus and motor learning: a review of 15 years," International Review of Sport and Exercise Psychology, vol. 6, no. 1, pp. 77–104, 2013. [Online]. Available: http://dx.doi.org/10.1080/1750984X.2012.723728

[43] A. P. Dempster, N. M. Laird, and D. B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm," Journal of the Royal Statistical Society, Series B (Methodological), vol. 39, no. 1, pp. 1–38, 1977. [Online]. Available: http://www.jstor.org/stable/2984875

[44] A. Chan, E. Croft, and J. Little, "Modeling nonconvex workspace constraints from diverse demonstration sets for constrained manipulator visual servoing," in IEEE International Conference on Robotics and Automation, 2013, pp. 3062–3068.

[45] J. P. Petersen, Mining of Ship Operation Data for Energy Conservation, ser. IMM-PHD-2011. Technical University of Denmark (DTU), 2011.

[46] N. Vahrenkamp, D. Berenson, T. Asfour, J. Kuffner, and R. Dillmann, "Humanoid motion planning for dual-arm manipulation and re-grasping tasks," in IEEE/RSJ International Conference on Intelligent Robots and Systems, 2009, pp. 2464–2470.

[47] X. Yang and M. Meng, "A neural network approach to real-time motion planning and control of robot manipulators," in IEEE International Conference on Systems, Man, and Cybernetics, vol. 4, 1999, pp. 674–679.

[48] P. Bendahan and P. Gorce, "A neural network architecture to learn the arm reach motion planning in a static cluttered environment," in IEEE International Conference on Systems, Man and Cybernetics, vol. 1, 2004, pp. 762–767.

[49] J. Pan and D. Manocha, "GPU-based parallel collision detection for fast motion planning," The International Journal of Robotics Research, vol. 31, no. 2, pp. 187–200, 2012. [Online]. Available: http://gamma.cs.unc.edu/gplanner/ijrr.pdf

[50] S. Gottschalk, M. C. Lin, and D.
Manocha, "OBBTree: A hierarchical structure for rapid interference detection," in Conference on Computer Graphics and Interactive Techniques. ACM, 1996, pp. 171–180. [Online]. Available: http://doi.acm.org/10.1145/237170.237244

[51] M. Quigley, K. Conley, B. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, and A. Y. Ng, "ROS: an open-source Robot Operating System," ICRA Workshop on Open Source Software, vol. 3, no. 3.2, p. 5, 2009.

[52] N. Hogan, "Adaptive control of mechanical impedance by coactivation of antagonist muscles," IEEE Transactions on Automatic Control, vol. 29, no. 8, pp. 681–690, Aug. 1984.

[53] Barrett Technology, "Libbarrett." [Online]. Available: http://support.barrett.com/wiki/Libbarrett

[54] D. Murray, "Point Grey Research's Triclops library – simple grab." [Online]. Available: http://www.ptgrey.com

[55] D. Douxchamps and G. Peters, "IIDC Camera Control Library." [Online]. Available: https://sourceforge.net/projects/libdc1394/

[56] P. Mihelich, K. Konolige, and J. Leibs, "stereo_image_proc – ROS Wiki." [Online]. Available: http://wiki.ros.org/stereo_image_proc

[57] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and É. Duchesnay, "Scikit-learn: Machine learning in Python," J. Mach. Learn. Res., vol. 12, pp. 2825–2830, 2011. [Online]. Available: http://dl.acm.org/citation.cfm?id=1953048.2078195

[58] M. Carreira-Perpiñán, "Mode-finding for mixtures of Gaussian distributions," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, pp. 1318–1323, 2000.

[59] A. Genz, "Numerical computation of multivariate normal probabilities," Journal of Computational and Graphical Statistics, vol. 1, no. 2, pp. 141–149, 1992. [Online]. Available: http://www.tandfonline.com/doi/abs/10.1080/10618600.1992.10477010

[60] N. Koenig and A.
Howard, "Design and use paradigms for Gazebo, an open-source multi-robot simulator," in IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 3, 2004, pp. 2149–2154.

[61] R. E. Pattis, "Textbook errors in binary searching," SIGCSE Bull., vol. 20, no. 1, pp. 190–194, 1988. [Online]. Available: http://doi.acm.org/10.1145/52965.53012

