What Should a Robot Do? Design and Implementation of Human-like Hesitation Gestures as a Response Mechanism for Human-Robot Resource Conflicts

by AJung Moon
B.A.Sc., The University of Waterloo, 2009

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Applied Science in THE FACULTY OF GRADUATE STUDIES (Mechanical Engineering)

The University of British Columbia (Vancouver)
April 2012
© AJung Moon, 2012

Abstract

Resource conflict arises when people share spaces and objects with each other. People easily resolve such conflicts using verbal/nonverbal communication. With the advent of robots entering homes and offices, this thesis builds a framework to develop a natural means of managing shared resources in human-robot collaboration contexts. In this thesis, hesitation gestures are developed as a communicative mechanism for robots to respond to human-robot resource conflicts.

In the first of the three studies presented in this thesis (Study I), a pilot experiment and six online surveys provided empirical demonstrations that humans perceive hesitations from robot trajectories mimicking human hesitation motions. Using the set of human motions recorded from Study I, a characteristic acceleration profile of hesitation gestures was extracted and distilled into a trajectory design specification representing hesitation, namely the Acceleration-based Hesitation Profile (AHP). In Study II, the efficacy of AHP was tested and validated. In Study III, the impact of AHP-based robot motions was investigated in a Human-Robot Shared-Task (HRST) experiment.

The results from these studies indicate that AHP-based robot responses are perceived by human observers to convey hesitation, both in observational and in situ contexts. The results also demonstrate that AHP-based responses, when compared with the abrupt collision avoidance responses typical of industrial robots, do not significantly improve or hinder human perception of the robot and human-robot team performance.

The main contribution of this work is an empirically validated trajectory design that can be used to convey a robot's state of hesitation in real-time to human observers, while achieving the same collision avoidance function as a traditional collision avoidance trajectory.

Preface

This thesis is submitted in partial fulfillment of the requirements for the degree of Master of Applied Science in Mechanical Engineering at the University of British Columbia.

An outline of the three experiments presented in this thesis has been published as a position paper at the Workshop on Interactive Communication for Autonomous Intelligent Robots (ICAIR), 2010 International Conference on Robotics and Automation:

Moon, A., Panton, B., Van der Loos, H. F. M., & Croft, E. A. (2010). Using Hesitation Gestures for Safe and Ethical Human-Robot Interaction. Workshop on Interactive Communication for Autonomous Intelligent Robots at the 2010 International Conference on Robotics and Automation (pp. 11-13). Anchorage, United States.

The author presented this work at the workshop. A co-author for this publication, Mr. Boyd Panton, was a co-op student at the Collaborative Advanced Robotics and Intelligent Systems Laboratory. Panton was involved in the design of the human-subject interaction task described in Chapter 3. In preparation for Study III, presented in Chapter 6, he investigated different options for setting up the experimental workspace for the study.
He proposed using a stereoscopic camera for sensing human motions during the main experiment. However, a different approach was used in the study. He produced a technical report from his work:

Panton, B. (2010). The Development of a Human Robot Interaction Project (pp. 1-42). Vancouver.

Study I, presented in Chapter 3, and the trajectory design specification, the Acceleration-based Hesitation Profile (AHP), presented in Chapter 4, are published in a conference proceedings:

Moon, A., Parker, C. A. C., Croft, E. A., & Van der Loos, H. F. M. (2011). Did You See It Hesitate? - Empirically Grounded Design of Hesitation Trajectories for Collaborative Robots. 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 1994-1999). San Francisco, CA. (©2011 IEEE)

This work was presented by the author at the 2011 IROS conference. This jointly authored paper involved the work of Dr. Chris A. C. Parker. He has supervised the experiment design of Study I and the process of developing the AHP from a collected set of human motion trajectories (Chapter 4). The controller used to servo the CRS A460 robot in Studies I and II of this thesis is a modified version of a controller developed by Parker.

The two studies presented in Chapters 5 and 6 have been submitted as a journal manuscript, which is under review at present:

Moon, A., Parker, C. A. C., Croft, E. A., & Van der Loos, H. F. M. (2012). Design and Impact of Hesitation Gestures during Human-Robot Resource Conflicts. Journal of Human Robot Interaction. (Submitted January, 2012).

All human-subject experiments described in this thesis were approved by the University of British Columbia Behavioural Research Ethics Board (H10-00503).

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
Glossary
Acknowledgments
1 Introduction
  1.1 Thesis Outline
2 Background and Motivating Literature
  2.1 Nonverbal Communication in Human-Human Interaction
  2.2 Hesitation
  2.3 Human-Robot Shared Task
  2.4 Trajectory Implications in Nonverbal Human-Robot Interaction
3 Study I: Mimicking Communicative Content Using End-effector Trajectories
  3.1 Experimental Methodology
    3.1.1 Human Subject Pilot
    3.1.2 Robotic Embodiment of Human Motion
    3.1.3 Session Video Capture
    3.1.4 Survey Design
    3.1.5 Data Analysis
  3.2 Online Survey Results
    3.2.1 Identification of Segments Containing Hesitation Gestures
    3.2.2 Perception Consistency between Human Gestures and Robot Gestures
  3.3 Discussion
    3.3.1 Limitations
  3.4 Summary
4 Designing Communicative Robot Hesitations: Acceleration-based Hesitation Profile
  4.1 Method
    4.1.1 Pre-processing – Filtering, Segmentation, and Principal Component Analysis
    4.1.2 Qualitative Observations and Typology of Hesitation and Non-Hesitation Motions
    4.1.3 Quantitative Observations and Characterization Approach
  4.2 Acceleration-based Hesitation Gestures
    4.2.1 AHP-based Trajectory Generation
    4.2.2 Real-time Implementation
  4.3 Discussion
    4.3.1 Limitations
  4.4 Summary
5 Study II: Evaluating Extracted Communicative Content from Hesitations
  5.1 Experimental Methodology
    5.1.1 Trajectory Generation
    5.1.2 Video Capture
    5.1.3 Survey Design
    5.1.4 Data Analysis
  5.2 Results
    5.2.1 H2.1: AHP-based Robot Motions are Perceived as Hesitant
    5.2.2 H2.2: AHP-based Robot Motions are More Human-like than Robotic Avoidance Motions
    5.2.3 H2.3: Non-Expert Observations of AHP-based Motions are Robust to Changes in Acceleration Parameters
  5.3 Discussion
    5.3.1 Limitations
  5.4 Summary
6 Study III - Evaluating the Impact of Communicative Content
  6.1 Method
    6.1.1 Experimental Task and Procedure
    6.1.2 Measuring Human Perception and Task Performance
    6.1.3 System Design and Implementation
    6.1.4 Data Analysis
  6.2 Results
    6.2.1 H3.1: Can Humans Recognize AHP-based Motions as Hesitations in Situ?
    6.2.2 H3.2: Do Humans Perceive Hesitations More Positively?
    6.2.3 H3.3: Does Hesitation Elicit Improved Performance?
  6.3 Discussion
    6.3.1 Limitations
  6.4 Summary
7 Conclusion
  7.1 Can an Articulated Industrial Robot Arm Communicate Hesitation?
  7.2 Can an Empirically Grounded Acceleration Profile of Human Hesitations be Used to Generate Robot Hesitations?
  7.3 What is the Impact of a Robot's Hesitation Response to Resource Conflicts in a Human-Robot Shared-Task?
  7.4 Recommendations and Future Work
Bibliography
A CRS A460 Robot Specifications
B Human Motion Trajectory Characteristics
  B.1 Segmentation of Recorded Human Motions
    B.1.1 Butterworth Filtering Algorithm
    B.1.2 Acceleration-based Segmentation Algorithm
  B.2 Overview of Position Profiles
  B.3 Descriptive Statistics of Principal Component Analysis Errors
  B.4 AHP Parameter Values from Human Motions
C Advertisements, Consents, and Surveys
  C.1 Study I Advertisements, Online Surveys, and Consents
  C.2 Study II Advertisement, Online Surveys, and Consent
  C.3 Study III Advertisements, Questionnaires, and Consent
D Acceleration-based Hesitation Profile Trajectory Characterisation and Implementation Algorithms
  D.1 Offline AHP-based Trajectory Generation
  D.2 AHP-based Trajectory Implementation for Real-time Human-Robot Shared Task
    D.2.1 Management of the Robot's Task
    D.2.2 Management of Real-time Gesture Trajectories
    D.2.3 Calculation of a1 and t1 Parameters for AHP-based Trajectories
    D.2.4 Generation of AHP Spline Coefficients
    D.2.5 Human State Tracking and Decision Making
E Human Perception of AHP-based Mechanism and its Impact on Performance
  E.1 Video Observation of Jerkiness and Success from Robot Motions
    E.1.1 Perceived Success of Robot Motions
    E.1.2 Perceived Jerkiness of Robot Motions
  E.2 In Situ Perception of AHP-based Motions
    E.2.1 Usefulness
    E.2.2 Emotional Satisfaction
  E.3 Non-parametric Comparison of Performance Impact of the AHP-based Mechanism
    E.3.1 Counts of Mistakes
    E.3.2 Counts of Collisions

List of Tables

Table 3.1 Number of online respondents per survey
Table 3.2 Repeated-measures one-way ANOVA results for all six surveys in Study I
Table 4.1 The mean values and ANOVA results of the halting ratio (C1) and yielding ratio (C2)
Table 4.2 ANOVA results on B1 and B2 ratios
Table 5.1 Study II two-way repeated-measures ANOVA results on hesitation and anthropomorphism scores
Table 6.1 Conditions for identifying the four states of task-related human motion
Table 6.2 Internal reliabilities of the eight self-reported measures
Table 6.3 Two-way repeated-measures ANOVA results for Study III human perception measures
Table 6.4 Study III mean and standard error of human perception and task performance measures by Condition
Table 6.5 Study III mean and standard error of human perception and task performance measures by Encounter
Table 6.6 Study III two-way repeated-measures ANOVA results for task performance measures
Table 6.7 Distribution of the number of collisions occurred
Table 6.8 Number of mistakes observed in each condition
Table A.1 Soft limits in position, q, velocity, q̇, and acceleration, q̈, set for the CRS A460 robot arm
Table B.1 Range of motion of the three pilot subjects who participated in Study I
Table B.2 Sum of squared errors from PCA simplification of Chapter 3 subject motion data
Table B.3 Descriptive statistics on a1 values of all three subject data from Chapter 3 presented by motion type
Table B.4 Descriptive statistics on the temporal values of acceleration peaks
Table E.1 Cross tabulation outlining the differences in the counts of mistakes by Condition as a factor
Table E.2 Chi-Square tests of counts of mistake differences by Condition
Table E.3 Cross tabulation outlining the differences in the counts of collisions by Condition as a factor
Table E.4 Chi-Square tests of counts of collisions differences by Condition

List of Figures

Figure 3.1 Study I experiment set-up for the human-human interactive pilot
Figure 3.2 Illustration of a three-joint kinematic model approximating the human arm
Figure 3.3 6-DOF robot arm used for Studies I and II in the elbow-up configuration
Figure 3.4 Control diagram showing interpolation and replication of human motion
Figure 3.5 Screen captures of human-human vs. human-robot interaction videos for Study I
Figure 3.6 An example screen capture of the online surveys employed in Study I
Figure 3.7 Session 1 hesitation perception scores summary for Study I
Figure 3.8 Session 2 hesitation perception scores summary for Study I
Figure 3.9 Session 3 hesitation perception scores summary in Study I
Figure 4.1 Illustration of the trajectory characterization process
Figure 4.2 Segmentation of trajectories using the acceleration-based method
Figure 4.3 A successful reach-retract human motion shown in a side view and a top view with its principal plane
Figure 4.4 Graphical overview of typology of hesitation and non-hesitation motions
Figure 4.5 Examples of Butterworth-filtered Xo-axis wrist motions
Figure 4.6 Jerk trajectory in Xo-axis
Figure 4.7 Acceleration profiles of example R-type motions and an S-type motion in the primary (Xo) axis
Figure 5.1 Reference trajectories generated for Study II
Figure 5.2 Screenshot from one of the twelve survey pages shown to Study II online participants
Figure 5.3 Overview of Study II hesitation and anthropomorphism scores
Figure 6.1 Overview of the Study III experiment process
Figure 6.2 Overview of experimental setup for Study III
Figure 6.3 Overview of the robot's behaviours in the three conditions
Figure 6.4 Time series plots of trials with the Blind, Hesitation, and Robotic Avoidance Conditions
Figure 6.5 Overview of the software architecture that interfaces the high- and low-level control algorithms
Figure 6.6 Study III experimental setup of the 7-DOF robot
Figure 6.7 The software system architecture of Study III
Figure 6.8 Overview of the seven significant human perception measures
Figure A.1 Schematics of the 6-DOF robot arm used in Studies I and II
Figure A.2 Screen capture of the control scheme used for Studies I and II
Figure B.1 Examples of Butterworth-filtered Xo-axis wrist motions
Figure B.2 Examples of Butterworth-filtered Yo-axis wrist motions
Figure B.3 Examples of Butterworth-filtered Zo-axis wrist motions
Figure C.1 Consent form used for the human-human interaction pilot experiment (page 1)
Figure C.2 Consent form used for the human-human interaction pilot experiment (page 2)
Figure C.3 Contents of the online advertisement used to recruit subjects for Study I, human-human online surveys
Figure C.4 Screen capture of the consent form used for the human-human online surveys
Figure C.5 Screen capture of online survey for human-human condition, Session 1
Figure C.6 Screen capture of online survey for human-human condition, Session 2
Figure C.7 Screen capture of online survey for human-human condition, Session 3
Figure C.8 Contents of the online advertisement used to recruit subjects for Study I, human-robot condition online survey
Figure C.9 Screen capture of the consent form used for the human-robot interaction online surveys
Figure C.10 Screen capture of online survey for human-robot condition, Session 1
Figure C.11 Screen capture of online survey for human-robot condition, Session 2
Figure C.12 Screen capture of online survey for human-robot condition, Session 3
Figure C.13 Contents of the online advertisement used to recruit subjects for Study II
Figure C.14 Screen capture of the consent form used for Study II
Figure C.15 Sample page from Study II online survey
Figure C.16 Contents of the online advertisement used to recruit subjects for Study III
Figure C.17 Advertisement posted at the University of British Columbia campus to recruit subjects for Study III
Figure C.18 Consent form used for Study III
Figure C.19 Pre-questionnaire used to collect demographic information from the Study III subjects
Figure C.20 Main questionnaire used to collect the subject's perception of the robot in Study III
Figure D.1 Overview of the AHP-based trajectory generation process
Figure D.2 The software system architecture of Study III
Figure E.1 Overview of the success score collected from a five-point Likert scale question in Study II
Figure E.2 Overview of the jerkiness score collected from a five-point Likert scale question in Study II
Figure E.3 Overview of perceived intelligence scores collected from five-point Likert scale questions in Study III
Figure E.4 Overview of usefulness scores collected from five-point Likert scale questions in Study III
Figure E.5 Overview of emotional satisfaction scores collected from five-point Likert scale questions in Study III

Glossary

AHP    Acceleration-based Hesitation Profile, a characteristic trajectory profile commonly observed in a particular type of hesitation gesture as elaborated in Chapter 4
ANOVA  Analysis of Variance, a set of statistical techniques to identify sources of variability between groups
PCA    Principal Component Analysis
ROS    Robot Operating System
HH     Human-Human condition
HR     Human-Robot condition
HRI    Human-Robot Interaction
HCI    Human-Computer Interaction
HRST   Human-Robot Shared-Task

Acknowledgments

I would like to thank my supervisors, Drs. Elizabeth A. Croft and Machiel Van der Loos. They have patiently provided me with guidance and support not only for the development of this thesis work, but also for helping me to navigate through academia as a novice researcher. More importantly, they provided me with the freedom to explore the field of Human-Robot Interaction (HRI), while continuing to support my interests in Roboethics.

I would also like to thank Dr. Chris A. C. Parker for his mentorship that I sought on a nearly daily basis. He has inspired this thesis project on developing hesitation gestures for Human-Robot Shared-Task (HRST), and his technical assistance and insight for the project have been invaluable.

My thanks also go to Drs. Karon MacLean (Department of Computer Science, UBC) and Craig Chapman (Department of Psychology, UBC) for their help in designing the online surveys for Studies I and II, respectively; Dr. John Petkau (Department of Statistics, UBC), Mr. Lei Hua (Department of Statistics, UBC), Dr. Michael R. Borich (Brain Behavior Laboratory, UBC), and Ms.
Susana Zoghbi for their statistical consultation of data analysis of Studies I and II; and Dr. Peter Danielson (Centre for Applied Ethics, UBC) and his team for providing me with opportunities to learn qualitative and mixed-methods approaches that enriched the experiment design and data analysis of Study III.

The help and support from the members of the CARIS lab have been invaluable. Ergun Calisgan volunteered his time to explore integration of a vision system for Study III. Although the vision system was not employed in the study due to technical issues, his help on investigating the system was very helpful in choosing an alternative approach. Numerous individuals proofread this thesis, including Tom Huryn, Matthew Pan, Eric Pospisil, Navid Shirzad, Aidin Mirsaeidi, and Dr. Brian Gleeson (Department of Computer Science, UBC). These individuals have also provided valuable feedback throughout my thesis work.

I would also like to acknowledge the work of two co-op students, Boyd Panton and Shalaleh Rismani. Boyd Panton helped develop the experimental task of Study I that became the foundation for designing experimental tasks in subsequent studies. Shalaleh Rismani participated in the process of producing videos for Study II, and helped recruit subjects for Studies II and III.

Many thanks go to the numerous individuals – especially, Mr. Jason Yip (University of Maryland) – who helped recruit subjects for the three studies. I would also like to thank all of the online survey participants and experiment subjects who volunteered their valuable time for this research.

I would like to acknowledge the financial support of the Natural Sciences and Engineering Research Council of Canada, and the Institute for Computing, Information and Cognitive Systems.

Finally, I would like to thank my parents and my sister for their love, prayers, and support.

Chapter 1
Introduction

Collaborating agents often share spaces, parts, tools and equipment and, in the normal course of their work, encounter conflicts when accessing such shared resources. Humans often resolve such conflicts using both verbal and nonverbal communication to express their intentions and negotiate a solution. However, when such conflict arises between a human and a robot, then what should the robot do?

Answers to this question depend on the context. More deeply, the answers to what an agent should do in the context of human interaction are grounded by a set of morals, i.e., "standards of behavior or beliefs concerning what is and is not acceptable" [50]. With the advent of robots entering society as assistants and teammates, it is important to frame the answer to this question before robots are widely deployed in homes and workplaces.

While current industrial robots are designed for boring, repetitive and dangerous tasks, their capacity to make context-based decisions and moral judgments remains highly limited compared to that of humans. Robots that share spaces and resources with humans today (e.g., autonomous guided vehicles) typically use collision avoidance mechanisms to deal with such conflicts. Many mobile robot platforms are designed to stop or find an alternate path of travel when dynamic obstacles, such as humans, interfere with their course. These robot behaviours are designed with human safety as the highest priority and have been an effective means for avoiding conflicts and collisions. By design, such systems default to avoidance as the single predetermined solution to human-robot resource conflicts.
But what if, similar to human-human interaction, a robot could attempt to negotiate a solution with its human user? Such a system could leverage the high-level decision making skills of humans without undermining the technological benefits that a robot can provide. For such human-robot negotiation to take place, human-robot teammates must fluently and bidirectionally communicate with each other; a robot needs to communicate its intentions and behaviour states to its users and readily understand human expressions of intentions and internal states. This thesis develops a framework to enable such interactive resolution of human-robot resource conflicts. In particular, this work focuses on how to program a robot to display uncertainty to human observers using hesitation gestures during a conflict in a Human-Robot Shared-Task (HRST) context. The result of allowing a robot to communicate uncertainty opens up the possibility of the robot practising alternate moral behaviours acceptable and understandable to humans in the face of a shared resource conflict.

This work is motivated by the nonverbal communication humans use to communicate uncertainty and dynamically resolve conflicts with one another. Hesitation gestures are frequently observed in human-human resource conflict situations. When multiple people reach for the same object at the same time, one or more of the engaged parties often exhibit jerky, stopping hand motions mid-reach. They often resolve the resource conflict via a verbal/nonverbal dialogue involving these hesitation gestures. During resource conflicts, hesitation gestures not only serve the function of avoiding collisions, but also serve as a communication mechanism that helps express the intentions of the person who exhibits the gesture.

Hesitation is one of many nonverbal cues that humans use to communicate their internal states [1]. Numerous studies in psychology have found that these communicative behaviours also influence the perception and behaviours of their observers. For example, Becchio and colleagues studied the impact of social and psychological elements on the kinematics of human reach-to-grasp motions [5, 6, 62]. Results from their studies demonstrate that the kinematics of these motions, while achieving the same function, vary according to the purpose of the motion, the intentions of the person exhibiting the motion, and the intentions expressed in the motions of another person.

A number of nonverbal gestures have also been studied in Human-Robot Interaction (HRI) contexts. A large body of work focuses on robot recognition of human nonverbal cues and human recognition of nonverbal cues expressed by a robot. Similarly to the way in which different human motions that serve the same function can communicate different internal states and intentions, a study by Kulić and Croft demonstrated that different functional robot trajectories can elicit different human responses to the robot [40]. This finding and many others support the notion that the manner in which a robot collaborates with people in a shared resources environment affects the user's perception of the robot. A study by Burgoon et al. [13] suggests that, in positive teamwork, each team member has a positive perception of the other and the collaborative task yields a positive output. Therefore, ensuring positive user perception of a robotic partner/teammate is particularly important for improving human-robot collaboration.
The contributions of this thesis, comprised of three studies, extend the body of work in nonverbal HRI. Prior work has not investigated whether the communicative content of human hesitation gestures can be represented in the motions of an articulated robot. To fill this knowledge gap, this thesis establishes empirical support for the hypothesis that humans observe a robot's replication of human hesitation motions as hesitations (Study I). This work also provides an empirically grounded design specification for generating anthromimetic hesitation gesture trajectories on a robot (Study II). This trajectory specification is devised such that, when implemented as a real-time conflict response mechanism, it can be used to generate robot motions that are recognized as hesitations by human observers in situ. The outcome of these two studies enabled the creation of human observable communicative hesitation on a robot arm. This new behaviour permitted the implementation of Study III such that the impact of hesitation as a conflict response mechanism could be investigated. In particular, Study III was conducted to ascertain whether the devised conflict response mechanism, when compared with a traditional collision avoidance mechanism, has a positive impact on human-robot collaboration.

1.1 Thesis Outline

This section describes the organization and contents of the chapters in this thesis.

Chapter 2 discusses related works from the field of psychology, Human-Computer Interaction (HCI), and HRI. The chapter mainly focuses on studies that discuss nonverbal human-robot communication and human-robot collaboration. There has been limited research focused on hesitations as kinesic hand gestures. Hence, in order to design and implement hesitation gestures on a robot, it is necessary to understand which human motions are perceived as hesitations.

[Footnote: According to Birdwhistell, "kinesics is concerned with abstracting from the continuous muscular shifts which are characteristic of living physiological systems those groupings of movements which are of significance to the communication process and thus to the interactional systems of particular groups" (in [42], p. 67).]

Chapter 3 presents the first of three human-subject studies, Study I, designed to empirically identify and record human motions that are perceived as hesitations by human observers. This study uses recorded human motions to test whether a simplified version of human hesitation gestures implemented on a robotic manipulator is also seen as a hesitation gesture. This study hypothesizes that when a robot mimics only the wrist trajectories of human hesitation motions, the robot can be perceived as being hesitant. However, this study does not explore how hesitation motions are different from other types of motions.

Based on the positive findings from Study I, Chapter 4 presents qualitative and quantitative observations of the different types of human motions recorded and identified in Study I. This chapter describes the process of extracting key differences between hesitation and non-hesitation trajectories. The extracted trajectory features are formulated into a trajectory design specification, called the Acceleration-based Hesitation Profile (AHP), for generating human-recognizable robot hesitation motions.

Chapter 5 presents the second human-subject experiment, Study II, which empirically tests the efficacy of the suggested hesitation trajectory design. Human perception of videos of different AHP-based robot trajectories is empirically compared against videos of other types of robot motions via an online survey. This study tests the hypothesis that AHP-based robot motions are perceived as human-like hesitations by human observers.
The study confirms this hypothesis within the anthropometric range of AHP parameter values used to generate the motions.

Based on the empirical foundations of Studies I and II, the aforementioned AHP trajectory specification is implemented in a HRST experiment, Study III, as a real-time resource conflict response mechanism on a 7-DOF robot. This study, presented in Chapter 6, explores the impact that robot-exhibited hesitation gestures have on the performance of a human-robot team and human perception of the robot teammate. The following questions are investigated: Can humans recognize AHP-based robot motions as hesitations in situ? Do humans perceive a robot more positively when it hesitates in comparison to when it does not? Does hesitation elicit improved performance of the collaborative task?

Functionally, hesitation gestures used in a resource conflict situation achieve the same output as other robot motions that avoid collisions with human users. However, the anthromimetic hesitation gestures designed, implemented, and tested in this thesis caused the users to have a more "human-like" perception of the robot's behaviour while the robot achieved the same functional task.

Chapter 7 discusses the implications of this research in the field of HRI, with a focus on improving the human-robot interaction experience in the HRST domain, and presents the overall conclusions of this thesis.

Chapter 2
Background and Motivating Literature

This chapter reviews previous studies in psychology and Human-Robot Interaction (HRI) to motivate and inform the development of human-like hesitation gestures for a robot in Human-Robot Shared-Task (HRST) contexts. A summary of key findings in the psychology literature discussing human nonverbal behaviours leads this chapter (Section 2.1). Findings reported in the relevant literature emphasize the power of nonverbal communication in human-human interaction. Subsequently, Section 2.2 provides an interdisciplinary overview of previous work discussing hesitations in general and then outlines the need to further understand hesitation gestures in human-human interaction contexts. Section 2.3 introduces the concept of collaboration as discussed in the literature and provides an overview of human-robot communication studies in collaboration contexts. Finally, Section 2.4 reviews literature on how different features of robot motions impact human perception of, and interaction with, a robot. This review provides support from the literature that even an industrial articulated robotic manipulator (a robot arm) can convey anthropomorphic behaviour state to a human observer.

2.1 Nonverbal Communication in Human-Human Interaction

Research in psychology suggests that people reveal their intentions and internal states to human observers even through simple motions such as walking or reaching for an object [5, 6, 44, 54]. This is complemented by the natural human ability to infer information from other people's motions [1, 22, 67].
Results from numerous studies indicate that the human ability to display and understand nonverbal cues is an effective (and even necessary) means of influencing social interactions in an interpersonal setting [1, 51]. On the other hand, persons with deficits in displaying or understanding nonverbal social cues, as often exhibited by children with autism spectrum disorder, experience significant difficulties successfully interacting with others [27].

Psychologists have further explored the extent to which humans recognize intent or infer internal states specifically from human generated motions. In one study, Johansson recorded various human motions under the point-light condition, effectively representing the motions as ten simultaneously moving dots. Results from his experiment demonstrate that humans are able to accurately identify human motions even from such a simplified representation [33]. Subsequently, much research demonstrates that humans ascribe animacy and intention not only to motions of biological beings, but also to moving objects, even when such objects are simple geometric shapes [17, 28, 68]. A study by Ju and Takayama demonstrates that even the automatic opening motions of doors are interpreted by humans as exhibiting a gesture [34]. These findings have inspired research into attribution of animacy by humans in the fields of Human-Computer Interaction (HCI) and HRI [23, 66]. In HCI, in particular, Reeves and Nass demonstrated the highly cited finding that humans treat machines as real, social beings [57].

[Footnote: Johansson attached lights and reflective tape on the joints of an actor's body while the actor demonstrated natural walking, running, and other motions in the dark. Recordings of this motion showed only the joint positions of the actor as point-lights.]

[Footnote: Reeves and Nass's work consisted of a series of human-machine interaction experiments that were modified versions of human-human interaction experiments in psychology. They devised a theory from their findings, called the media equation, which states that "People's responses to media are fundamentally social and natural." [57]]

Leveraging the human ability to ascribe animacy and intentions to moving bodies, this thesis explores how hesitation gestures – one of many nonverbal gestures humans use – can be synthesized into robot motions that communicate a state of uncertainty recognizable by humans. The following section defines and provides a summary of this particular human behaviour.

2.2 Hesitation

Studies in psychology indicate that cognitive conflicts or internal states of uncertainty in humans and animals are often expressed nonverbally. In humans, such nonverbal expressions include shrugs, frowns, palm-up gestures, self-touch gestures and hesitations [18, 24]. Hesitations, in particular, are a type of communicative cue that humans recognize not only from the behaviour of another person, but also that of animals and insects [64, 70].

Literature suggests several causes of hesitation behaviours: cognitive conflicts [64], difficulty in cognitive processing [63] and reluctance to act [59]. These sources of hesitation manifest themselves as a variety of nonverbal cues. Of these cues, discussions on human hesitations have mainly been focused on pauses in speech [32, 43, 47] and periods of indecisiveness during high-level decision making processes [18, 49]. Doob defines hesitation as a temporal measure: "...
the time elapsing between the external and internal stimulation of an organism and his/her or its internal/external response." [18]

Consistent with Doob's definition, most studies that investigate hesitations in humans characterize the behaviour in terms of delays. For example, Klapp et al. conducted a study to investigate hesitations that humans exhibit while concurrently performing discrete and continuous tasks [38]. They measured hesitations in human hand motions as 1/3 seconds or more of pause in the subject's hand while multitasking. This study demonstrates that hesitations in the hand appear as sudden tensing, rather than relaxing, of the muscles and that the number of times a subject hesitates decreases with practice.

[Footnote: Klapp et al. [38] empirically determined this value by intentionally interrupting the human subjects, engaged in a continuous task, with an auditory tone.]

Measuring hesitations as delays is also found in the HRI domain. Bartneck et al. measured human hesitation as the time taken for a subject to turn off a robot when instructed to do so [3]. Bartneck and colleagues' study used this measure to investigate whether human attribution of animacy on a robot is correlated with the subject's cognitive dilemma of turning off the robot. In another study, Kazuaki et al. programmed hesitations on a robot as the duration of time it takes for a robot (in this case, the AIBO, Sony, Japan) to initiate actions after a human demonstrates to the robot how to shake hands with a person [35]. The results of their study indicate that the management of delays in the robot's response helps improve people's experience of teaching a robot. Building on the results of [35], this thesis tests whether robot hesitations manifested as kinesic gestures, rather than delays, will lead to improvements in human-robot collaboration.

Although a delayed response to a stimulus may occur due to an agent's hesitation, hesitation is not equivalent to a delay. For example, communication latency (a type of delay) is not due to the aforementioned sources of hesitation, such as uncertainty or cognitive conflicts, although communication latency also qualifies as hesitation according to Doob's definition. This thesis addresses the challenge of designing human-like hesitation motions for a robot. Hence, the model of hesitation as a time delay is likely to be insufficient for generating robot motions that convey a state of uncertainty.

Only a few studies have measured and investigated the kinematic manifestation of hesitations. In entomology, hesitation behaviours in hoverflies have been defined and measured as the number of forward and backward motions the insect exhibits in the vicinity of a flower before it lands [70]. This definition and characterization of hesitation was arbitrarily selected as a convenient measure for the study in [70], and does not sufficiently describe the nuance of hesitations as gestures humans perceive when observing reaching motions. In addition, this study involved the motions of one fly, rather than two or more flies working as social actors. In primatology, a study investigating cognitive conflict behaviours in apes defined and measured hesitation behaviours of apes as pointing to two different choices simultaneously or altering of their choices [64]. Such behaviour, however, occurs as part of activities that involve deictic gestures, and is not necessarily transferable to communication in resource conflict contexts.

In summary, while hesitation has been measured in terms of involuntary and voluntary time delays in humans, and in terms of motions in some biological studies, it has not been well defined in a multi-agent resource conflict situation. Therefore, a more sophisticated understanding of human hesitation as kinesic gestures is necessary before implementing human-recognizable hesitation gestures on a robot in a HRST context.

2.3 Human-Robot Shared Task

In the psychology, HCI and HRI literature, the words joint activity [15], collaboration [26], teamwork [15], and shared cooperative activity [10] are often used interchangeably. These words refer to activities that involve two or more agents having joint intentions and who work together toward a common goal [15]. This thesis considers joint activities that involve collaborative agents (humans and robots) sharing the same physical environment and resources to complete a task. This thesis uses the term Human-Robot Shared-Task (HRST) to refer to this subset of collaborative activities.

Human-human collaboration typically involves people with different intentions and capabilities. Without a means to effectively communicate with each other, the collaborating partners would neither be able to establish a common ground nor interweave subplans to achieve the shared goal [15]. In Bratman's model of successful collaboration, mutual responsiveness, commitment to the joint activity and commitment to mutual support are necessary. None of these can be established without communication between the collaborating agents [10]. Likewise, in order for human-machine collaboration to be successful, communication mechanisms that allow the collaborating agents to interweave plans and actions and to establish mutual understanding are required [26].

Studies demonstrate that joint intentions of collaboration can be established via nonverbal communication. In an experiment by Reed and colleagues, two people worked as a haptically linked dyad to rotate a disk to a target location collaboratively [55]. They found that, even without verbal communication, people quickly negotiate each other's role within the team using only haptic cues. This study also demonstrated that, in comparison to completing the task alone, there is a significant increase in performance when people worked together as a team. However, when the study was repeated with human-robot dyads, human subjects did not take on a specific role within the collaborative task nor did the dyad yield an improved task performance [56]. The authors suggest that these negative results may be due to the lack of subtle haptic negotiations in the human-robot dyad condition. These studies not only demonstrate the power of nonverbal communication in human-human collaboration, but also point out the importance of designing and exploring communication and negotiation mechanisms to improve HRST systems.

Human-robot collaboration studies also suggest that user perception and acceptance of a robotic partner increase when the robot behaves or appears more anthropomorphic. In a Wizard of Oz experiment involving a collaborative part retrieval task, Hinds and colleagues found that people exhibit more reliance and attribute more credit to their robotic partner when it appears more human-like [29].

[Footnote: The Wizard of Oz method is a popular way of conducting an experiment in HRI in which a human confederate controls the robot behind the scenes, unbeknownst to the participant [36].]
Goetz et al. investigated the impact that a humanoid's social cues have on human acceptance of the robot as a partner [25]. In their Wizard-of-Oz experiment, the robot's demeanour (playful vs. serious) and the nature of the cooperative task (playful vs. serious) were varied. The results suggest that the subject's compliance with the robot increases when the robot displays a demeanour that matches the nature of the task.

While the HRI in [29] and [25] was verbal, a number of studies have demonstrated the utility of using nonverbal gestures in conjunction with verbal communications in human-robot collaboration tasks [12, 30, 31, 61]. Holroyd and colleagues implemented a set of policies that help select a set of nonverbal gestures that should accompany the robot's speech in order to effectively communicate with its human partner [30]. They demonstrated the effectiveness of their verbal/nonverbal management system in the collaborative solving of a tangram puzzle. The positive results from the study indicate that more natural management of robot gestures improves user perception of the robot and helps establish a sense of mutual understanding with the robot. Huang and Thomaz also employed nonverbal cues to supplement verbal communication with a robot [31]. They found that such verbal communication, together with supplemental nonverbal gestures, is an effective means to acknowledge establishment of joint attention between human and robot. This approach improved human understanding of the intended robot behaviour and human-robot task performance [31]. Breazeal and her colleagues conducted an experiment with an expressive 65-DOF robot, Leo, that used shoulder shrug gestures and facial expressions to convey its state of uncertainty to a human collaborator in a joint activity [12]. The results from this study provide strong evidence that combined use of nonverbal gestures and speech to display a robot's behaviour state can be more effective in improving task performance than using speech alone as the only communication modality.

While the findings in [30], [31] and [12] emphasize the power of nonverbal communication in human-robot collaboration, the nature of the collaborative tasks involved implied turn-taking rules between the human and robot that may not be present in many potential human-robot collaboration scenarios. The subject's role in [12] was to supervise and instruct the robot to perform a manipulation task while the robot waited for the subject's instruction before performing the task. These roles were reversed in [30]. If fixed rules exist on turn-taking or right of way and both humans and robots follow these rules perfectly, resource conflicts, such as reaching for the same object at the same time, would not occur. However, when such rules are not in place, or if at least one of the collaborating agents is not aware of, or does not comply with, the predefined rules, transparent communication of each agent's intentions and behaviour states (dominant, submissive, collaborative, pesky) becomes even more essential for navigating the interaction. This thesis contributes to the HRI body of work by exploring the effects of nonverbal communication in collaborative scenarios without such predefined/implied hierarchy and turn-taking rules. Hence, the nature of the HRST designed for this thesis is distinguished from these previous studies in that it features a lack of predefined turn-taking rules.
In addition, in the case of humans and robots collaborating in noisy industrial environments, human-robot communication involving only nonverbal gestures can be especially important. However, unlike many of the high-DOF robots used in human-robot collaboration studies, most robots in industrial settings are not equipped to display facial gestures representing the robot's state. Often, it is also impractical for a robot to have a face [9]; moreover, in industry, it is necessary that the worker pay attention to the task at hand, i.e., the workpiece and potentially the robot's hand or gripper, rather than the robot's body or face (if present). Nonetheless, recent literature suggests that humans, when interacting with robots, naturally expect robots to follow social conventions even if they are non-facial and non-anthropomorphic [20]. This emphasizes the need to design natural HRI for appearance-constrained robots [9].

The following section describes some of the studies in HRI that demonstrate how different qualities or parameters of robot motion trajectories elicit different human responses or convey different behaviour states to human observers. These contributions, in addition to the psychology literature described in Section 2.1, suggest that motions of even non-facial, non-anthropomorphic robots can be designed to be communicative and expressive.

2.4 Trajectory Implications in Nonverbal Human-Robot Interaction

Many studies in nonverbal HRI have focused on generating human-like robot expression of internal states using full-bodied or head-torso humanoid robots. Typically, recorded human motions are mapped onto joint trajectories of humanoid robots [45, 46, 52]. While studies suggest that this approach is valid in generating human-like robot motions [11, 52], this is not a feasible approach for low-DOF non-humanoid robots that have a significantly different kinematic configuration from the human body.

However, research in HRI suggests that replicating joint trajectories is not essential to eliciting different human responses to, or perception of, a robot. Flash and Hogan [21] famously proposed a model of human reaching motions as a minimum-jerk trajectory of the hand. Kim and colleagues demonstrated, using a humanoid robot, that people ascribe different personalities to the robot when the trajectory parameters of the robot's gestures, including velocity and frequency, are varied [37]. Complementing the study of robot personality expressed in motion parameters, and also using a humanoid torso robot, Riek and colleagues studied subjects' attitudes towards, and responsiveness to, three different nonverbal robot gestures with varying smoothness and orientations [58]. The results of the study showed quicker human response to abrupt gestures and front-oriented gestures than smooth or side-oriented gestures. Recent findings by Saerbeck and Bartneck used two different robotic platforms and echo the importance of robot motion quality in eliciting different human responses [60]. Using a facial robot (iCat, Philips Research, Eindhoven, the Netherlands) and a 2-DOF mobile robot (Roomba, iRobot, Massachusetts, USA), this study demonstrated that acceleration and curvature of robot motions have a significant impact in conveying different affect to human observers, whereas the type of robot used does not.
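For reference, the minimum-jerk model of reaching proposed by Flash and Hogan [21], mentioned above, has a simple closed form for a point-to-point movement. The short sketch below is illustrative only; the function and parameter names are not taken from the thesis or from [21], and it generates a single Cartesian coordinate of such a hand path.

import numpy as np

def minimum_jerk(x0, xf, duration, n_samples=101):
    # Fifth-order minimum-jerk position profile between x0 and xf over
    # `duration` seconds; velocity and acceleration are zero at both ends.
    t = np.linspace(0.0, duration, n_samples)
    tau = t / duration
    return t, x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)

# Example: a 40 cm reach completed in one second.
t, x = minimum_jerk(0.0, 0.40, 1.0)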
In their study, human subjects watched an articulated 6-DOF robot (CRS A460, Burlington, ON, Canada) perform a series of pick-and-place and reach-retract motions. Keeping the end positions of the motions the same, two different trajectory planning strategies were used to control the robot [40]. Results of this study indicate that a human observer’s affective response (as recorded by physiological sensors) to the robot changes significantly based on the type of trajectory used to control the robot’s motion even when the trajectories functionally obtain the same result. The results of [37, 40, 58, 60] are consistent with findings from the psychol- ogy literature that show that perceived affect is significantly correlated with the kinematics of motion rather than with the shape/form of the moving object [53]. However, these studies focused on the expression of affect in various robot mo- tions, rather than intention or state. In contrast, the recent study by Ende and colleagues focused on conveying communicative messages, rather than just affect, via nonverbal communication. Recordings of a humanoid’s (Justin) and a 7-DOF manipulator’s (SAM) human- like gestures were used in an online survey [19]. Results of this study show high levels of human identification for a robot’s use of deictic gestures, such as pointing, and terminating gestures, conveying ‘Stop’ or ‘No’, for both types of robots. This result demonstrates that articulated robotic manipulator motions can effectively convey communicative messages as well. In the context of HRST, this thesis explores a research question not yet an- swered by the substantial body of work in this domain: given that people will ascribe animacy and recognize affect and behaviour states from non-facial robots, can we leverage this phenomenon to communicate hesitant states of an articulated 14 industrial robot arm? 15 Chapter 3 Study I: Mimicking Communicative Content Using End-effector Trajectories This chapter1 considers whether the communicative content within human hesita- tion gestures can be represented in the motions of an articulated robot arm. Three human subjects participated in a pilot experiment, in which they exhibited hesita- tion and non-hesitation motions in response to the presence and absence of human- human(HH) conflict of resources. The experimenter then programmed a robot arm to replicate these motions in a human-robot (HR) conflict of resources context. In an online survey, 121 participants provided their observations of the videos of hu- man gestures collected from the human subject trials and the videos of the gestures replicated by a robot arm. The hypothesis for this study was that humans will recognize hesitation ges- tures equally well in robots as in humans. Confirmation of this hypothesis will demonstrate that anthromimetic hesitation in robot gestures can be used as a viable communication mechanism in human-robot interactive domains. This study led to 1©2011 IEEE. The majority of this chapter has been modified/reproduced, with permission, from Moon, A., Parker, C. A. C., Croft, E. A., & Van der Loos, H. F. M. (2011). Did You See It Hesitate? - Empirically Grounded Design of Hesitation Trajectories for Collaborative Robots. 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 1994-1999). San Francisco, CA. 16 significant results that support our hypothesis. The remainder of this section is organized as follows. The details of the exper- imental methodology are provided in Section 3.1. 
Section 3.2 presents the results of the surveys, followed by a discussion of their implications to the field of HRI and relationship to the remaining chapters of this thesis in Section 3.3 and Section 3.4. 3.1 Experimental Methodology This section describes how hesitation gestures are generated in a human-human interactive domain (Section 3.1.1), and how these gestures are reproduced on a 6-DOF robotic arm (Section 3.1.2). Survey respondents watched muted video recordings of both the human and robot motions, and attempted to identify the human and robot motions that they perceived to contain hesitation gestures. Our survey methodology is described in Section 3.1.4. 3.1.1 Human Subject Pilot In this pilot experiment, the experimenter and participant engaged in a simple task in which conflicts over a shared resource between the participant and the experi- menter naturally occurred. Figure 3.1 shows the experimental set-up. The experimenter and each participant wore noise canceling headphones, and for each session, sat on opposite sides of a table with a small rectangular target (a sponge) at the table centre. In each session, each time the participant and the experimenter heard a beep through their headphones, they reached for and touched the target and then returned their hands to the resting locations as fast as they could. Each person heard independently randomized sequences of beeps such that, by chance, both people would sometimes reach for the target at approximately the same time. One female and two male right-handed undergraduate engineering students participated in this pilot experiment. Each participant engaged in one session of the experiment. Each human-human (HH) session was video recorded and labeled HH-1, HH-2, and HH-3. The experimenter captured the participant’s arm movements using two inertial sensors (Xsens MTx, Enschede, Netherlands) at 50 Hz. Inertial sensors have been 17 Target ParticipantExperimenter Sensor 1 Sensor 2 Headphones Headphones Video Camera Resting Locations Figure 3.1: Study I experiment set-up for the human-human interactive pilot. The participant sits opposite the experimenter and wears two inertial sensors on his/her dominant arm. The participant’s resting location and the location of the target mark the two endpoints of the participant’s reach-and-retract motions. (©2011 IEEE) widely used and exploited to study human upper limb movements [65, 71, 72]. As illustrated in Figure 3.1 and Figure 3.2, the experimenter strapped these sensors on the participant’s dominant arm: one between the shoulder and the elbow, and the other between the elbow and the wrist. Prior to each session, the experimenter ini- tialized the sensors to a reference inertial frame. To calculate the wrist trajectories via forward kinematics, the experimenter measured the lengths of the participant’s upper arm and forearm (lse and lew). The shoulder marked the location of a global frame, and was approximated as a purely spherical joint with zero displacement. Calculation of 3D Cartesian co- ordinates of the participant’s wrist positions with respect to the shoulder involved gyroscope measurements from the two sensors and the arm lengths of the partic- ipant. Converting the gyroscope rate of turn measurements to rotational matrices yielded HR o 1 and HR o 2 2. These are the orientation of sensor frames F1 and F2 with 2The prescript ‘H’ denote that the variable/value pertains to the human subject(s) and are de- scribed in terms of the human’s coordinate frame. 
Similarly, the prescript 'R' denotes that the variable/value pertains to the robot's coordinate frame. For vectors, the superscript denotes the origin of the vector, whereas the subscript denotes the endpoint of the vector with respect to the origin.

Figure 3.2: Illustration of a three-joint kinematic model approximating the human arm. The origin of the global frame is located on the right shoulder. The positive Xo-axis points towards the front of the person, and the Yo-axis points towards the left shoulder. Variables l_se and l_ew represent the upper arm and forearm lengths. (©2011 IEEE)

respect to the global frame, Fo. The vector sum of the shoulder-elbow displacement and the elbow-wrist displacement provides the wrist position with respect to the shoulder, _H p^o_w:

{}_{H}\vec{p}^{\,o}_{e} = {}_{H}R^{o}_{e}\,[0 \;\; l_{se} \;\; 0]^{T} \quad (3.1)

{}_{H}\vec{p}^{\,e}_{w} = {}_{H}R^{o}_{w}\,[0 \;\; l_{ew} \;\; 0]^{T} \quad (3.2)

{}_{H}\vec{p}^{\,o}_{w} = {}_{H}\vec{p}^{\,o}_{e} + {}_{H}\vec{p}^{\,e}_{w} = {}_{H}x^{o}_{w}\,\vec{i} + {}_{H}y^{o}_{w}\,\vec{j} + {}_{H}z^{o}_{w}\,\vec{k} \quad (3.3)

3.1.2 Robotic Embodiment of Human Motion

An articulated robot arm (CRS A460, Burlington, ON, Canada) with an open controller (Quanser Q8™/Simulink™) embodied the human gestures in generating human-robot (HR) equivalents of the HH pilot sessions (see Figure 3.3 for a robot configuration diagram).

Figure 3.3: 6-DOF CRS A460 robot arm in the elbow-up configuration. In Study I, this robot replicated the wrist trajectories of the human subject's motion from the human-human interactive experiment. This robot was also used in Study II. Attached at the end of the robot is an unactuated hand with zero degrees of freedom. Variables d, θ, and φ define the polar coordinate system of the robot. Technical specifications of the robot are outlined in Appendix A. (©2011 IEEE)

Robot Trajectory Generation

Since human and robot arms do not embody identical kinematics, the robot reproduced the human wrist trajectory with its wrist in an elbow-up configuration. The maximum reach of the robot used in these experiments is smaller (23.5 cm) than that of the participants (39.0 cm). Hence, the computed wrist trajectories from the participants' inertial sensor data were linearly scaled by 60% (β = 0.6) to fit the robot's range of motion. Appendix A presents the specifications and wrist motion range calculations used in this study. The following equation yields R x^o_w(t), R y^o_w(t), and R z^o_w(t), the human wrist position at time t modified to fit within the robot's range of motion:

{}_{R}\vec{p}^{\,o}_{w}(t) = \beta\,\big({}_{H}\vec{p}^{\,o}_{w}(t) - \min[{}_{H}\vec{p}^{\,o}_{w}(t)]\big) + \min[{}_{R}\vec{p}^{\,o}_{w}] \quad (3.4)

Here, _H p^o_w(t) is the calculated human wrist position in the Fo-frame at time t. The variable min[_R p^o_w] represents a minimum reach position of the robot from Fo to its wrist (see Figure 3.3 for the frame definition).

A sigmoid function interpolator applied to the resultant discrete 3D Cartesian trajectories provided a smooth, high-frequency (1 kHz) reference trajectory for the robot. Applying a quintic spline smoothing to the position outputs of the forward kinematics, and taking derivatives of the splines, yielded the maximum velocity and acceleration of the trajectories. This method has been advanced by Woltring as the most acceptable derivative estimation method for biomimetics applications [69]. The sigmoid interpolator employed these values to generate the reference trajectory.
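To illustrate the wrist-position and scaling computations above, a minimal NumPy sketch is given below. The function and variable names are placeholders rather than those of the implementation used in this work, the sensor orientations are assumed to be available as per-sample 3×3 rotation matrices with respect to Fo, and the per-axis minimum used as the offset in Equation (3.4) is an assumption.

```python
import numpy as np

def wrist_position(R_upper, R_fore, l_se, l_ew):
    """Wrist position relative to the shoulder (Eqs. 3.1-3.3).

    R_upper, R_fore : (N, 3, 3) arrays of sensor orientations w.r.t. the
    global frame Fo; l_se, l_ew : upper-arm and forearm lengths (cm).
    """
    p_oe = R_upper @ np.array([0.0, l_se, 0.0])   # shoulder -> elbow displacement
    p_ew = R_fore @ np.array([0.0, l_ew, 0.0])    # elbow -> wrist displacement
    return p_oe + p_ew                            # (N, 3) wrist positions w.r.t. shoulder

def scale_to_robot(p_human, p_robot_min, beta=0.6):
    """Linearly map human wrist positions into the robot workspace (Eq. 3.4)."""
    return beta * (p_human - p_human.min(axis=0)) + p_robot_min
```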
As shown in Figure 3.4, feeding the interpolated 3D Cartesian coordinates to an inverse kinematics routine finally generated the joint-space trajectories. As an in- termediate step, the 3D Cartesian coordinates were converted into polar coordinates (equations (3.5) to (3.7)) for more intuitive visualisation of the robot’s position in its work space (see Figure 3.3 for the polar coordinate frame definition): d = 2 q x2r + y2r + z2r (3.5) q = xr 2 p x2r + y2r (3.6) f = cos1( 2 p x2r + y2r d ) (3.7) The following inverse kinematics calculations ensure that the robot traces the Carte- sian trajectory with its wrist, while its elbow remains up and the wrist maintains a horizontal orientation with respect to the ground: Here, variable ase refers to the robot’s link length between joints 2 and 3, and aew refers to the link length between joints 3 and 5. 21 3D Cartesian Data Continuous Sigmoid Interpolator Inverse Kinematics PID Control 6-DOF Robot q, q qref vref Xref 10Hz . Xref 1kHz Figure 3.4: Control diagram showing sigmoid interpolation of human wrist motions, and the generation of the robot’s wrist motion via a conven- tional PID controller. (©2011 IEEE) A joint-space PID algorithm controlled the robot kinematics. To improve the fidelity of the wrist trajectory motion, given the limitations in the robot’s maxi- mum velocity and acceleration, the commanded trajectories were slowed by five times for video recording. These hardware limits are outlined in Table A.1 in Ap- pendix A. When recording with the robot, an actor demonstrated the corresponding human trajectories at a rate also five times slower than normal to match the robot’s speed. Subsequently, video recordings of the combined human and robot motions in the HR trials were sped up by five times to eliminate speed discrepancies be- tween the HH and HR sessions. An unactuated hand (sponge-filled glove) was affixed to the robot’s wrist. This prop made the context of the task clear to the observers of the HR videos, and ensured the safety of the actor. 3.1.3 Session Video Capture The survey contained three HH and three HR videos – one HH and one HR videos for the three pilot sessions. Both HH and HR videos show only the dominant hand and arm of the participating agents (human or robot) in the workspace. After crop- ping extraneous recordings at the beginning and end of the sessions (and muting all recorded sounds), the generated videos ran for about 2 minutes each. Due to an interruption that occurred during video recording, HH-1 only contained half of the recorded Session 1. The surveys used the complete recorded videos for both Sessions 2 and 3. Each full length video contained about sixty reach-and-retract motions by each participant. The survey structure grouped an average of four consecutive reach-and-retract 22 motions by a participant as a segment of the video. Session 1 was divided into eight segments (A to H), Session 2 into 14 segments (A to N), and Session 3 into 15 segments (A to O). The segment labels appeared in the bottom right-hand corner of the video as shown in Figure 3.5). Participant RobotExperimenter Experimenter Human-Human(HH) Interaction Video Human-Robot(HR) Interaction Video Figure 3.5: Screen captures of human-human (HH) vs. human-robot (HR) interaction videos. In the HR interaction video, the robot replicated the motions of the participant in the HH interaction video. (©2011 IEEE) 3.1.4 Survey Design Collecting data to test the hypothesis involved launching six different online sur- veys, one survey per video. 
All six surveys consisted of a short lead-in paragraph instructing the respondents to watch the video with special attention to the agent (human or robot) in focus, followed by a question (“Did the person on the left hesitate?” for HH, and “Did the robot on the right hesitate?” for HR videos) and finally one of the six videos. In all surveys, the respondents had the option of choosing ‘No’, ‘Probably Not’, ‘Probably Yes’, and ‘Yes’ to all segments of the video shown. Figure 3.6 shows a screen capture from one of the online surveys. Appendix C shows screen captures of the remaining surveys and their respective consent forms. Recruitment of survey respondents involved a variety of social media tools (Twitter, Facebook, the first author’s website and blog) and distribution of adver- 23 tisements to university students. Survey respondents received no compensation. In total, 121 people participated in the six online surveys. Table 3.1 shows the breakdown of the number of survey respondents. Figure 3.6: An example screen capture of the online surveys employed in Study I. This particular screen capture is from the online survey of HH- 1. 24 Table 3.1: Number of online respondents per survey Session 1 Session 2 Session 3 nHH1 21 nHH2 20 nHH3 17 nHR1 21 nHR2 24 nHR3 18 3.1.5 Data Analysis Statistical analysis of the survey results involved conducting a repeated-measures analysis of variance (ANOVA) and independent t-tests on the numerically coded lev- els of hesitation scores: 0-‘No’, 1-‘Probably Not’, 2-‘Probably Yes’, and 3-‘Yes’. Consequently, a higher mean indicated a greater probability of a video segment containing hesitation gesture(s) that is/are visually apparent to observers. The significance level for all inferential statistics were set to a = 0.05. Obtain- ing a statistical significance from ANOVA of a survey result indicates that at least one of the video segments is perceived as containing hesitation significantly more or less than the other segments of the same video. Since identifying video segments that are perceived to contain hesitation ges- tures in both versions (HH and HR) of a session is of importance in testing the hypothesis, the analysis also involved pairwise comparisons with Bonferroni cor- rection between the mean scores of segments within each survey. This allowed for empirical identification of segments of a video that obtain high mean hesitation scores (above 2-‘Probably Yes’) and exhibit significantly different mean scores from low mean segments (below 1-‘Probably No’). Investigation of the quality of robot-embodied motion involved conducting in- dependent t-tests between HH and HR versions of all video segments. A non- significant result in the t-test of a segment would indicate that the HH and HR versions of the segment are perceived similarly. 3.2 Online Survey Results The results of the Analysis of Variance (ANOVA) on all six surveys show statistical significance (see Table 3.2). Therefore, for each of the surveys, the respondents were able to observe significant presence or absence of hesitation gestures in at 25 least one of the segments. According to the results of Mauchly’s test, the scores on surveys HH-1, HR-2, HH-3, and HR-3 violate the sphericity assumption. Use of ei- ther Greenhouse-Geisser or Huynh-Feldt approaches accounts for these violations. The ANOVA results presented in Table 3.2 summarize the corrected results. Based on this analysis, Section 3.2.1 presents the video segments that are iden- tified as containing hesitations. 
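(For readers who wish to reproduce this style of analysis, a minimal sketch of the scoring and repeated-measures ANOVA step described in Section 3.1.5 is given below. The data are synthetic, and the pandas/statsmodels/scipy implementation is an illustration only, not the analysis tool used in this thesis.)

```python
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

# Synthetic stand-in for one survey: 21 respondents rating 8 segments (A-H)
# on the coded scale used in Study I (0=No, 1=Probably Not, 2=Probably Yes, 3=Yes).
rng = np.random.default_rng(0)
segments = list("ABCDEFGH")
rows = [{"respondent": r, "segment": s, "score": int(rng.integers(0, 4))}
        for r in range(21) for s in segments]
ratings = pd.DataFrame(rows)

# Repeated-measures one-way ANOVA: does perceived hesitation differ by segment?
res = AnovaRM(ratings, depvar="score", subject="respondent",
              within=["segment"]).fit()
print(res.anova_table)

# Independent t-test comparing one segment's scores across two surveys (HH vs. HR).
hh_scores = ratings.loc[ratings.segment == "F", "score"]  # stand-in for HH, segment F
hr_scores = rng.integers(0, 4, size=21)                   # stand-in for HR, segment F
print(stats.ttest_ind(hh_scores, hr_scores))
```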
Section 3.2.2 discusses the level of perception consistency in the HH and HR videos. Table 3.2: Repeated-measures one-way ANOVA results for all six surveys Survey F p HH-1 F(5.81, 116.28) = 21.52 <.05 HR-1 F(7, 133) = 13.56 <.05 HH-2 F(13, 208) = 9.20 <.05 HR-2 F(5.61, 123.42) = 5.48 <.05 HH-3 F(12.95, 220.18) = 6.48 <.05 HR-3 F(5.46, 87.44) = 7.43 <.05 3.2.1 Identification of Segments Containing Hesitation Gestures As shown in Figure 3.7, the segments in HH-1 with mean hesitation scores above 2 (‘Probably Yes’) are segments F and G. Segments with mean values below 1 (‘Probably No’) are D and H. Pairwise comparison with Bonferroni correction in- dicates that the mean scores of segments F and G are significantly different from that of D and H; this demonstrates that segments F and G contain human hesitation gestures that are recognized by human observers. In HR-1, segments F and G are also the only segments with mean scores above 2. The scores of both F and G show significant differences in means from the lowest-mean segments, below a score of 1 (segments D, E, and H). In HH-2, (see Figure 3.8) segments F, J, K, and L show mean values above 2. These values are significantly different from that of segments B, E, G, and M, all of which scored below a mean of 1. In HR-2, however, only segments F and K received mean scores of above 2. They are significantly different from segments scoring below 1, which were B, D, and M. In HH-3, segments I, J, and N show mean scores above 2. However, only I 26 Video Segment HGFEDCBA M ea n  H es it at io n  P er ce p ti o n  S co re 3.00 2.00 1.00 0.00 Error bars: 95% CI n(HH-1) = 21, n(HR-1) = 21 HR HH Figure 3.7: Session 1 hesitation perception scores summary showing the mean scores and 95% CI for all segments. Analyses show that segments F and G contains hesitation gestures in both human and robot motions with statistical significance. (©2011 IEEE) and N show significant differences from the segments having mean scores below 1 (segments A, B, C, and O). As is apparent from Figure 3.9, HR-3 shows relatively low mean scores in general compared to that of HH-3. Only segment N scores above 2, and all other segments show no significant differences from each other. In HR-3, more than half of the segments score below 1. All but segment N score below 1.5. This indicates the possibility that qualitative differences may exist between HH-3 and HR-3 that are not present in the recordings of Sessions 1 and 2. We discuss this point in Section 3.3. 27 Video Segment NMLKJIHGFEDCBA M ea n  H es it at io n  P er ce p ti o n  S co re 3.00 2.00 1.00 0.00 Error bars: 95% CI n(HH-2) = 20, n(HR-2) = 24 HR HH Figure 3.8: Session 2 hesitation perception scores summary showing the mean scores and 95% CI for all segments. Analyses show that segments F and K contain hesitation gestures in both human and robot embodied motion, whereas segments J and L contain the gestures in human motion only. (©2011 IEEE) 3.2.2 Perception Consistency between Human Gestures and Robot Gestures Investigating the consistencies in perception between scores of HH and HR in all three sessions involved conducting independent t-tests on each pair (HH and HR) of mean values for all segments. The results show highly consistent levels of hesitation in all segments of Ses- sion 1; none of the segments show significant differences in scores between HH-1 and HR-1. 
This provides strong evidence that the robot embodiment of hesitation gestures in this session is equally able to communicate the subtle state of uncer- 28 Video Segment ONMLKJIHGFEDCBA M ea n  H es it at io n  P er ce p ti o n  S co re 3.00 2.00 1.00 0.00 Error bars: 95% CI n(HH-3) = 17, n(HR-3)=18 HR HH Figure 3.9: Session 3 hesitation perception scores summary showing the mean scores and 95% CI for all segments. Analyses show that seg- ment N contain hesitation gestures in both human and robot embodied motion, whereas segment I contain the gestures in human motion only. This particular session shows low level of score consistency between HH and HR compare to Sessions 1 and 2. tainty to human observers as the human produced hesitation gestures. Less consistency in hesitation scores exists between HH-2 and HR-2. Of the four segments that significantly contain hesitation gestures in HH-2, two (segments J and L) show significantly lower mean scores in HR-2. These are the only two segments that show significant differences between HH-2 and HR-2. The mean scores of Session 3 show the least amount of consistency. As Fig- ure 3.9 illustrates, the mean scores of HR-3 are lower than that of HH-3 in general. Results of independent t-tests between the means of HH-3 and HR-3 reflect this 29 observation. One third of recorded Session 3 segments show significant difference from HH to HR, indicating high inconsistencies between the scores of HH-3 and HR-3. However, HH-3 and HR-3 mean hesitation scores of segment N, both of which are above 2, are not significantly different from each other. 3.3 Discussion The results of the analyses provide strong evidence that hesitation gestures em- bodied in a robot arm can convey the same nonverbal communicative messages as human gestures. The survey participants’ scoring of video segments for hesitation is robust against the presence of extraneous motions, such as natural jitters in the wrist and collisions of the agents’ hands. Multiple instances of collision are present in video segments of Sessions 1 and 3. The abovementioned analyses show that these segments are not identified as significantly containing hesitation gestures. In comparison, motions recorded for Session 2 have an observable level of natural jitter of the participant’s wrist (HH-2) between reaching motions; as a result, robot embodiment of this extraneous motion was apparently not perceived as a hesitation gesture by the survey respondents. If information such as finger movements, wrist angle, and stiffness of the arm or the hand are important features in one’s recognition of hesitation gestures, one could expect to see significantly lower mean scores for all HR segments relative to HH segments. However, this is not the case: the recordings of Session 1 do not show any significant differences in means, and robotic embodiment even score higher in some segments (A, D, and H) than human motions, although not signif- icantly. This is also the case for segments of Session 2, except for two segments that significantly contain hesitation in HH-2 but not in HR-2. However, the survey data show lower mean values in all segments of HR-3 compared to HH-3 with the exception of segment N. Segment N show no sig- nificant differences in the two mean values and contains hesitation gestures with significance according to the analysis. The fact that only Session 3 shows such lack of consistencies in the mean scores brings forth the need for further investigation. 
Future work might allow us to determine qualitative and quantitative differences of 30 motions in Session 3 from those in Sessions 1 and 2, and the key features of mo- tion trajectories that facilitate robust communication of anthromimetic hesitation gestures. 3.3.1 Limitations There are noteworthy discrepancies between robot-embodied motions and the orig- inal recorded human motions. The robotic arm has only 6-DOF, compared to a human arm’s 7-DOF, and this study employed only four of the six robot joints to follow human wrist trajectories. This inevitably generated a simplified and less dexterous embodiment of human motion. The robot’s kinematic configura- tion (elbow-up configuration) is also significantly different from the kinematics of a human arm, resulting in significantly different joint angles to achieve the same wrist trajectories. A few observable differences also exist between the recording of the HH and HR videos. Although the dimensions of the target object are scaled by the same size factor as the reach distances of the robot, the size of the experimenter’s hand could not be scaled. Therefore, the relative sizes of the hands with respect to the target objects are different in HH and HR. The video camera angle was also slightly different, creating observable visual differences in the distances between the two hands, especially when the hands are in the same vertical plane and ap- pear to be touching each other even when they are not in reality. The location of the experimenter’s hand in the video is also different. In the human-human inter- actions, the experimenter’s hand is located on the right side of the screen, where as her hand appears on the left in human-robot interaction. Since recognition of hesitation gestures should not be affected by the location in which the motions ap- pear, the experiment was recorded without changing the location of the non-mobile robotic platform available. This difference is illustrated in Figure 3.5. Hesitation gestures were robustly recognized in both human and robot motions despite these discrepancies. 31 3.4 Summary This chapter described the investigation of whether hesitation gestures exhibited by a robot can be recognized by human observers as being similar to the gestures exhibited by human arms. The results of this study demonstrate that anthromimetic hesitation gestures by an articulated robot can be robustly recognized, even when the humans’ wrist trajectories are the only replicated components of the gestures. This is a strong indication that such simplified replications of human wrist trajec- tories are sufficient to generate robust, visually apparent anthromimetic hesitation gestures. A few segments of motions are recognized as hesitant in a human arm but are not successfully recognized in its robotic embodiment. The next stage of the investigation is to ascertain the fundamental characteristics in the highly correlated segments. This step, presented in Chapter 4, allows the generation of dynamic tra- jectories a robot can use to exhibit the hesitation gestures in a variety of scenarios. 32 Chapter 4 Designing Communicative Robot Hesitations: Acceleration-based Hesitation Profile In Chapter 3, the results of Study I indicated that observers of a 6-DOF manipula- tor mimicking wrist trajectories of human hesitation gestures perceived the robot to be hesitating. This empirical result suggests that, despite kinematic and dy- namic differences, a robotic manipulator can display the communicative features of hesitation gestures. 
In order to implement human recognizable hesitation gestures on a robot in real-time Human-Robot Shared-Task (HRST) contexts, key communicative features of human hesitation motions must be extracted and converted into a generaliz- able trajectory design specification. Therefore, in this chapter1, key features from recorded human motions are extracted and, based on these features, an end-effector hesitation trajectory specification is proposed. Section 4.1 describes the process of extracting characteristic features from hu- man hesitation trajectories, and outlines the key trajectory differences observed be- 1©2011 IEEE. Parts of this chapter has been modified/reproduced, with permission, from Moon, A., Parker, C. A. C., Croft, E. A., & Van der Loos, H. F. M. (2011). Did You See It Hesitate? - Empirically Grounded Design of Hesitation Trajectories for Collaborative Robots. 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 1994-1999). San Francisco, CA. 33 tween hesitation gestures and successful reach-retract motions. Section 4.2 presents the proposed trajectory design specification, referred to as the Acceleration-based Hesitation Profile (AHP). Two implementation methods for the AHP are presented in this section. Section 4.3 presents the strengths and limitations of the AHP, and Section 4.4 provides a summary of this chapter. In the following chapter, Chap- ter 5, the efficacy of the AHP is tested in an online-based study, Study II. 4.1 Method The method for extracting characteristic features from human wrist trajectories in- volves three key steps: a) pre-processing of the trajectory data, b) understanding the differences between hesitations and other motions via qualitative and quan- titative observations and, c) based on this understanding, capturing the observed differences as a trajectory specification in a form that facilitates implementation in a robot controller. Figure 4.1 illustrates this process. Section 4.1.1 outlines the pre-processing techniques employed to filter, seg- ment, and simplify the collected trajectories. Section 4.1.2 presents qualitatively observed differences between hesitations and other types of motions and outlines a typology of hesitation developed from the observation. Section 4.1.3 describes quantitative differences between the motion types. It also provides the rationale for characterizing the gesture trajectories in acceleration space. 4.1.1 Pre-processing – Filtering, Segmentation, and Principal Component Analysis In Study I, the inertial sensor data collected from the pilot experiment were con- verted into 3D Cartesian position time-series data. Along with the position data, the sensors also provided time-stamped linear acceleration trajectories of the hu- man wrist motions in Cartesian space. In order to compare hesitation motions to successful reach-retract motions, these data were filtered post-hoc with a 4th order Butterworth filter with a 6Hz cut-off frequency and zero phase delay – using, re- spectively, the MATLABTM functions: butter and filtfilt. This approach is conventionally used in human arm motion studies. For example, Berman et al. 
employed the same filtering technique with a cut-off frequency of 5.5Hz [7], Flash 34 Pre-process data Observe quantitative differences Characterize trajectory features Test the characterized features (Study II, Chapter 5) Filter data Segment data Simplify data with principal component analysis (PCA) Video recordings of human motion Quantitative differences in position, velocity, acceleration, and jerk Observe qualitative differences 3D position and acceleration recording of human wrist trajectories Collect human motion data (Study I, Chapter 3) Typology of hesitation and non-hesitation motions Figure 4.1: Illustration of the trajectory characterization process. and Hogan used 5.2Hz [21], and Bernhardt et al. used 6Hz [8]. The MATLABTM script for the filter algorithm is provided in Appendix D, Section B.1.1. Segmentation The filtered human-trajectory time-series data was divided into individual motion segments. The start and end of a segment coincided with the start-of-reach and end- of-retract motions respectively. The segmentation algorithm used Xo-axis magni- tudes of acceleration, the characteristic Xo-axis acceleration extrema in each mo- tion, and a set of threshold values. As defined in Figure 3.2, the Xo-axis points towards the front of the person. Figure 4.2 illustrates the results of the segmenta- 35 1 2 3 4 5 6 −200 −100 0 100 200 X o − ax is   −200 −100 0 100 200 Y o − ax is −200 −100 0 100 200 Time (s) Z o − ax is  −  G ra v it y  S u b tr ac te d Position (cm) Velocity(cm/s) Acceleration (10cm/s , for scale)2 1 2 3 4 5 6 1 2 3 4 5 6 Acceleration-Based Segmentation Results Data from Subject1 First Max. Local Min. Additional Min. Figure 4.2: Segmentation of trajectories using the acceleration-based method described in Section 4.1.1. Three motion segments are depicted. The red dashed lines indicate the beginning, middle, and the end of each motion segment identified from the segmentation algorithm. tion algorithm. The algorithm begins by finding the first instance of maximum magnitude of acceleration in the Xo-axis above a threshold value (set at 1 m/s2 via iterative test- ing). It then backtracks in time from this point to find the closest local Xo-axis acceleration minimum occurring prior to the maximum. This minimum coincides with the starting point of a reaching motion and, therefore, marks the start of a motion segment. From the minimum, the algorithm moves forward in time to find two additional minima, with the last minimum indicating the end of motion. 36 Post-processing of the output from this algorithm was required for hesitation tra- jectories, since they tend to have additional extrema. The pseudo code and MATLAB implementation of this algorithm are provided in Appendix D, Section B.1.2. Principal Components Simplification Motion paths from Study I show movement primarily in the sagittal (X-Z) plane (see Figure 4.3), with relatively small medio-lateral (Yo-axis) components. This is true even though no spatial constraints were imposed on the subjects during the experiment. To simplify the characterization process, the recorded 3D Cartesian trajecto- ries were projected onto 2D planes using Principal Component Analysis (PCA) to extract the key orthogonal components that describe each dataset. When applied to individual motion segments, this yields the orientation of the two principal axes of motion with respect to the original axes. Then, the 3D motion trajectory was pro- jected onto the plane constructed with these two principal axes. 
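A compact sketch of this pre-processing pipeline (filtering, acceleration-based segmentation, and planar projection) is given below, using NumPy/SciPy stand-ins for the MATLAB routines used in this work. The segmentation shown is a simplified version of the algorithm provided in Appendix D, and the threshold value and names are illustrative.

```python
import numpy as np
from scipy.signal import butter, filtfilt, argrelextrema

FS = 50.0  # inertial sensor sampling rate (Hz)

def lowpass(x, cutoff=6.0, order=4):
    """Zero-phase 4th-order Butterworth filter with a 6 Hz cut-off (Section 4.1.1)."""
    b, a = butter(order, cutoff / (FS / 2.0))
    return filtfilt(b, a, x, axis=0)

def segment_starts(ax, thresh=100.0):
    """Rough acceleration-based segmentation of one Xo-axis trace (cm/s^2).

    Finds acceleration maxima above `thresh` (about 1 m/s^2) and backtracks to
    the nearest preceding local minimum, which marks the start of a reach.
    """
    minima = argrelextrema(ax, np.less)[0]
    starts = []
    for peak in argrelextrema(ax, np.greater)[0]:
        if ax[peak] < thresh:
            continue
        prior = minima[minima < peak]
        if prior.size:
            starts.append(int(prior[-1]))
    return starts

def project_to_principal_plane(traj):
    """Project an (N, 3) trajectory onto its two principal axes (PCA via SVD)."""
    centred = traj - traj.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:2].T  # (N, 2) coordinates in the principal plane
```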
This projection was done using MATLAB’s princomp command. The plane shown in Figure 4.3 il- lustrates an example output of the PCA. Descriptive statistics of the sum-of-squared error (SSE) due to projection for each subject’s data are presented in Appendix B, Table B.2. 4.1.2 Qualitative Observations and Typology of Hesitation and Non-Hesitation Motions In order to understand the differences between hesitation and non-hesitation mo- tions, video recordings of all three participants’ motions from Study I were coded for qualitative analysis. Based on this analysis, a typology of human reach-retract motions was developed. Figure 4.4 illustrates the typology. Two types of motions were observed from non-hesitation video segments: suc- cessful reach-retract (S-type) motions and collisions. In S-type motions, partici- pants did not encounter any resource conflict and were successful in touching and returning from the target. In collision-type motions, participants reached for the target and had physical contact with the experimenter’s hand while doing so. 37 −1000 −500 0 500 1000 −1000 0 1000 0 500 1000 1500 2000 A X  (cm/s 2 ) Principal Component Plot:  Subject 1 Reach and Retract Acceleration (SSE: 4.0e+005) A Y  (cm/s 2 ) A Z  ( cm /s 2 ) −1000 −500 0 500 1000 −1000 −500 0 500 1000 A X  (cm/s 2 ) Bird Eye View A Y  (cm/s 2 ) Side View Figure 4.3: A successful reach-retract human motion shown in a side view and a top view with its principal plane. Red data points lie below the plane, and green ones lie above. Two types of hesitation motions were identified from the video segments con- taining hesitation. In both types of hesitations, the participant’s hand launched towards the target, but halted in midair as the experimenter’s hand moved towards the same target. The motion of the participant’s hand after halting differentiated the two types of hesitations. In one type, the participant’s hand retracted back to the starting position, abandoning motion towards the target. This type of hesitation is herein referred to as a retract-type (R-type) hesitation. In the other type, the par- ticipant’s hand, after halting, hovered in place until the experimenter retracted back from the target and then resumed reaching for the target. This type of hesitation is herein referred to as a pause-type (P-type) hesitation. The number of trajectories collected for each motion type are summarized in Figure 4.4. Due to the small sample size of P-type hesitations, it is difficult to find features from the trajectories that are representative of this type of hesitation. Hence, the remainder of the characterization process focuses on R-type hesitations only. 38 Human Reach-Retract Motions Hesitations Non-Hesitations Pause Type (P-type) Hesitations (4) Retract Type (R-type) Hesitations (8) Successful Reach-Retract (S-type) Motions (134) Collisions (9) Figure 4.4: Graphical overview of typology of hesitation and non-hesitation motions. The numbers in parenthesis indicate the number of motion segments collected for the particular motion type. 4.1.3 Quantitative Observations and Characterization Approach The small sample size and the short durations of the gestures provided poor fre- quency content resolution. Thus, the trajectory features analysis was done in the time domain only. As shown in Figure 4.5, there are large variations in position trajectories of hesitation motions compared to that of S-type motions. 
The start time of the re- traction phase of R-type hesitations ranges between about 35% to 65% of the total reaching motion time. This is true even when comparing within subjects. Due to the lack of consistent features found in position profiles, trajectory characteristics were examined examined in higher order kinematic profiles. Considering that hesitation gestures are often described as ‘jerky’ motions, R- type and S-type motions were studied in the jerk profiles. The jerk profiles are produced by numerically differentiating acceleration profiles collected from the inertial sensor. Similar to position profiles, in the jerk profiles, R-type motions demonstrate much larger variations than S-type motions. As shown in Figure 4.6, large differences are also observed even among S-type motions of the same subject. Hence, in the jerk space, it is difficult to discern what unique trajectory patterns exist in R-type motions. Trajectory differences between R-type and S-type motions were found to be most prominent in the acceleration profiles, specifically in terms of the differences in relative acceleration extrema magnitudes and their time values. As shown in Figure 4.7, a maximum forward acceleration is observed shortly after the start of motion, at time t1, during the launch phase of all R-type motions. Following this 39 0 25% 50% 75% 100% 10 15 20 25 30 35 40 45 50 Time Normalized X o -A x is  P o si ti o n  ( cm ) Subject1 Xo-Axis Wrist Position R-type motion S-type motion   Figure 4.5: A few examples of Butterworth-filtered Xo-axis wrist motions from Subject 1 in Study I. All trajectories are time-normalized to match the slowest (longest) motion segment. launch acceleration, labeled a1, R-type motions reach a maximum deceleration, a2, with magnitude slightly larger than that of the launch acceleration. This decel- eration occurs at time t2 that coincides with braking/halting of the hand. The ratio of a2 to a1 (C1) represents the abruptness of the halting behaviour in a hesitation motion and is referred to as the halting ratio. The abruptness of the motion is also dependent on how long it takes for the hand to reach the braking deceleration, a2, from the launch acceleration. This can be represented as a ratio of durations be- tween a1 to a2 to t1 (B1). A local maximum acceleration, a3, follows at time, t3. This maximum occurs near the start of returning motion, and is much smaller than the launch acceleration. The ratio of a3 to a1 (C2) is referred to as the yielding ratio. The complementing ratio of the duration between a2 and a3 to t1 (B2) rep- resents how quickly or slowly the halting behaviour is led to the return phase of the motion. Typically, an additional local maximum, a4, is also observed after a3 40 0 0.2 0.4 0.6 0.8 1.0 1.2-8000 -6000 -4000 -2000 0 2000 4000 6000 Time (s) Je rk  (c m/ s  )3 Subject 1 Principal Axis Jerk Pro!le 0 0.2 0.4 0.6 0.8 1.0 1.2-8000 -6000 -4000 -2000 0 2000 4000 6000 Time (s) Subject 2 Principal Axis Jerk Pro!le Je rk  (c m/ s  )3 0 0.2 0.4 0.6 0.8 1.0 1.2-8000 -6000 -4000 -2000 0 2000 4000 6000 Time (s) Subject 3 Principal Axis Jerk Pro!le Je rk  (c m/ s  )3 R-type motion S-type motion Figure 4.6: Jerk trajectory in Xo-axis. Interestingly, Subject 3’s motions dis- tinctly show two sub-groups of S-type motions. 41 at the end of the returning phase of the motion. This returning acceleration trails off until the end of the motion, t f . The values of these key accelerations extracted from the recorded human motions are presented in Appendix B, Section B.4. 
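As a concrete illustration of how these ratios could be read off a segmented acceleration trace, a minimal sketch follows. It simply takes the global extrema of each remaining portion of the signal, whereas the analysis in this chapter identifies the local extrema characteristic of each motion type; the function and variable names are placeholders.

```python
import numpy as np

def ahp_ratios(acc, dt=1.0 / 50.0):
    """Extract C1, C2, B1, B2 from one segmented Xo-axis acceleration trace.

    acc : 1-D array for a single reach-retract (or R-type) motion segment.
    Returns the halting ratio C1 = a2/a1, the yielding ratio C2 = a3/a1, and
    the duration ratios B1 = (t2 - t1)/t1 and B2 = (t3 - t2)/t1.
    """
    i1 = int(np.argmax(acc))             # launch acceleration a1 at t1
    i2 = i1 + int(np.argmin(acc[i1:]))   # braking deceleration a2 at t2
    i3 = i2 + int(np.argmax(acc[i2:]))   # yielding maximum a3 at t3
    a1, a2, a3 = acc[i1], acc[i2], acc[i3]
    t1, t2, t3 = i1 * dt, i2 * dt, i3 * dt
    return {"C1": a2 / a1, "C2": a3 / a1,
            "B1": (t2 - t1) / t1, "B2": (t3 - t2) / t1}
```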
In contrast to R-type hesitations, S-type motions have a braking deceleration, a2, of a magnitude similar to the launch acceleration. The second maximum of S-type motions, a3, occurs at the end of the returning phase of the motion; since the return of the hand in S-type motions happens after the subject has successfully reached the target object, the time of this maximum, t3, occurs much later than the t3 of R-type motions. S-type motions typically do not have any additional maximum, a4, after a3, and trail off to zero until the end of motion. For comparison, Figure 4.2 shows an overlay of position, velocity, and acceleration for three S-type motions. Figure 4.7 shows the location of the key acceleration extrema for several R-type motions and an example S-type motion.

The halting (C1) and yielding (C2) ratios, and the ratios of durations between the acceleration extrema (B1 and B2), can be represented with respect to the launch acceleration:

a_2 = C_1 a_1 \quad (4.1)

a_3 = C_2 a_1 \quad (4.2)

t_2 - t_1 = B_1 t_1 \quad (4.3)

t_3 - t_2 = B_2 t_1 \quad (4.4)

An Analysis of Variance (ANOVA) was conducted to ascertain whether the relative magnitudes of, and the durations between, the acceleration extrema are indeed significantly different between R-type and S-type motions. As outlined in Table 4.1, despite the small sample size, the mean values of the yielding ratio in R-type motions are significantly smaller than those of S-type motions, and B2 for R-type motions is also significantly smaller than for S-type motions (see Table 4.2). No significant interaction effect is found between the ratios and motion types (F(1,142) = 0.001, p = .98).

The same analysis on the halting ratio, however, yields inconsistent results. Non-significant ANOVA results are obtained for subjects 1 and 2, indicating that R-type and S-type motions show similar acceleration profiles from t0 to t1. Provided that the subjects did not plan to hesitate prior to launching the hand, this result is not surprising. However, Subject 3's R-type motions demonstrated a significantly lower value of the halting ratio than that of S-type motions.

Figure 4.7: Acceleration profiles of example R-type motions and an S-type motion in the primary (Xo) axis. Variables a1, a2, a3, t1, t2, and t3 represent the key acceleration extrema and their time values. R-type motions show common acceleration profiles distinct from S-type motions. (©2011 IEEE)

This inconsistency in the results necessitated further investigation of a number of key differences between the trajectories of Subject 3's motions and those of the remaining subjects. As shown in the jerk profiles (see Figure 4.6), Subject 3's motions can be classified into two distinctly different S-type trajectories (one having much greater positive and negative jerk extrema than the other), both with higher levels of repeatability than subjects 1 and 2. The subject's acceleration trajectories also demonstrate much larger halting and yielding ratios than those of the other two subjects. Significant inter-subject discrepancies are found from a
There is a significant interaction effect between ra- tios and subjects (F(2;142) = 11:70; p < :001), with S-type motions of Subject 3 demonstrating a significantly larger mean halting ratio than the remaining two sub- jects (p< :001 for the pairwise comparisons between subjects 1 and 3, and between subjects 2 and 3). Since empirically identified motion trajectories from more sub- jects were not collected, there is insufficient information to conclude whether this subject’s R-type motions should be treated as outliers. Nonetheless, given that this subject’s motions demonstrated larger human perception discrepancies in Study I, Subject 3’s hesitation trajectories are excluded from further analysis. 44 Table 4.1: The mean values and ANOVA results of the halting ratio (C1) and yielding ratio (C2). The values are calculated for each subject, then with the subjects’ data combined. A repeated-measures ANOVA with motion types and subjects as factors demonstrated significant interac- tion effect between ratios and subjects (F(2;142) = 11:70; p < :001). No significant interaction exists between the ratios and motion types (F(1;142) = 0:001; p = :98). Significant pairwise differences with S- type motions are identified via Bonferroni post-hoc analysis, and indi- cated with the following suffix: t p< :01; p< :05; p< :01,p< :001 Motion n C1 C2 Subject 1 S-type 26 M: -1.40, SD: 0.31 M:0.78, SD:0.16 R-type 4 M: -1.45, SD: 0.04 M: 0.26, SD: 0.04*** Ratio*Motion F(1;28) = 0:98; p= 0:33 F(1;28) = 58:00; p< 0:001 Subject 2 S-type 52 M: -1.43, SD: 0.22 M: 1.09, SD: 0.24 R-type 2 M: -1.37, SD: 0.01 M: 0.17, SD: 0.33*** Ratio*Motion F(2;54) = 0:08; p= 0:92 F(2;54) = 5:73; p< 0:05 Subject 3 S-type 56 M: -1.80, SD: 0.17 M: 0.71, SD: 0.14 R-type 2 M: -1.26, SD: 0.21** M: 0.35, SD: 0.08** Ratio*Motion F(2;56) = 38:58; p< 0:001 F(2;56) = 14:23; p< 0:001 Subject 1 and Subject 2 S-type 78 M: -1.42, SD: 0.25 M: 0.99, SD: 0.26 R-type 6 M: -1.40, SD: 0.12 M: 0.24, SD: 0.07*** Ratio*Motion F(2;84) = 0:84; p= 0:44 F(2;84) = 21:66; p< 0:001 All Three Subjects S-type 134 M: -1.58, SD: 0.29 M: 0.87, SD: 0.26 R-type 8 M: -1.35, SD: 0.15*** M: 0.28, SD: 0.08*** Ratio*Motion F(2;143) = 5:33; p< 0:01 F(2;143) = 24:33; p< 0:001 45 Table 4.2: ANOVA results on B1 and B2 ratios. Significant pairs are identified via post-hoc analysis. Measures showing significant ANOVA results are indicated with the following suffix: t p< :01; p< :05; p< :01,p< :001 Motion n B1 B2 Subject 1 S-type 26 M: 1.40, SD: 0.28 M: 2.04, SD:0.55 R-type 4 M: 0.99, SD: 0.25*** M: 1.05, SD: 0.22*** Ratio*Motion F(1;28) = 13:18; p< 0:01 F(1;28) = 14:49; p< 0:001 Subject 2 S-type 52 M: 0.85, SD: 0.30 M: 1.64, SD: 0.41 R-type 2 M: 0.58, SD: 0.74 M: 1.42, SD: 0.38 Ratio*Motion F(2;54) = 2:28; p= 0:11 F(2;54) = 3:31; p< 0:05 Subject 3 S-type 56 M: 1.08, SD: 0.22 M: 1.44, SD: 0.24 R-type 2 M: 1.62, SD: 0.35*** M: 1.50, SD: 0.53 Ratio*Motion F(2;56) = 21:14; p< 0:001 F(2;56) = 1:99; p= 0:15 Subject 1 and Subject 2 S-type 78 M: 1.03, SD: 0.39 M: 1.78, SD: 0.49 R-type 6 M: 0.89, SD: 0.29 M: 1.14, SD: 0.26 Ratio*Motion F(2;84) = 1:52; p= 0:22 F(2;84) = 4:35; p< 0:05 All Three Subjects S-type 134 M: 1.05, SD: 0.33 M: 1.64, SD: 0.44 R-type 8 M: 0.99, SD: 0.33 M: 1.14, SD: 0.22*** Ratio*Motion F(2;5:74) = 0:14; p= 0:87 F(2;5:66) = 4:303; p= 0:073 4.2 Acceleration-based Hesitation Gestures The ANOVA results support the possibility that the proportions of the extrema and their relative location in time may be key elements for designing hesitation trajec- tories for robots. 
Hence, the mean value of the halting ratio (C1), yielding ratio 46 (C2), B1, and B2 are extracted from the R-type hesitation acceleration trajectories: C1 =1:40 (4.5) C2 = 0:24 (4.6) B1 = 0:89 (4.7) B2 = 1:14 (4.8) By specifying the values of a1 and t1 and smoothly connecting the acceleration extrema that satisfy (4.5) to (4.8), an acceleration profile similar to human R-type hesitations can be generated. The profile produced from this method is herein referred to as Acceleration-based Hesitation Profile (AHP). As a response mecha- nism, an AHP can be triggered after the robot has already started its motion toward a target position. Section 4.2.1 describes a method for generating an AHP-based position trajectory. Section 4.2.2 outlines how the method from Section 4.2.1 can be integrated into a robotic system as a real-time conflict response mechanism. Using the methods outlined in this section, an AHP can supplement existing pick-and-place and reach-retract motions typical of robot motions. In Chapter 5, Study II uses the method described in Section 4.2.1 to pre-generate AHP-based motions for a robot. In Chapter 6, AHP is implemented on a real-time HRST system in Study III. 4.2.1 AHP-based Trajectory Generation To generate an acceleration profile consistent with AHP, the method described in this section fits four cubic splines through the five key points of the acceleration profile. The first spline, ẍ1(t), fits the start of the motion (zero acceleration) to a1, the second, ẍ2(t), fits a1 to a2, the third, ẍ3(t), fits a2 to a3, and the fourth, ẍ4(t), connects a3 to zero acceleration at the end of the motion while ensuring proper return of the end-effector to the starting location. Using this approach, the initial and final values of acceleration and jerk can be specified for each spline. Since the splines start and end at the critical points of AHP, initial and final values of jerk for all four splines are zero. Cubic Hermite splines in the acceleration domain with zero tangents (jerk) can be generated as 47 follows: ẍ(t) = (2t33t2+1)ai+(2t3+3t2)a f = 2t3(aia f )3t2(aia f )+ai (4.9) Here, ai and a f represent the initial and final accelerations of the spline respec- tively, and the spline parameter, t , represents time, normalized over the total de- sired travel time, t f . Substituting the halting and yielding ratios of AHP into (4.9) yields the first three splines expressed in terms of a1: ẍ1(t1) = 2t31a1+3t21a1+0 (4.10) ẍ2(t2) = 2t32a1(1+C1)3t22a1(1+C1)+a1 (4.11) ẍ3(t3) = 2t33a1(C1C2)3t23a1(C1C2)C1a1 (4.12) Using a1 and the relationship between the durations between acceleration extrema outlined in (4.3) and (4.4), one can determine the start and end times for each spline and generate an AHP-based trajectory that travels the desired distance. The acceleration splines in terms of non-normalised time values can be expressed as follows: ẍ1(t) = 2 t 3 t31 a1+3 t2 t21 a1+0 (4.13) ẍ2(t) = 2 t3 (t2 t1)3 a1(1+C1)3 t2 (t2 t1)2 a1(1+C1)+a1 (4.14) ẍ3(t) = 2 t3 (t3 t2)3 a1(C1C2)3 t2 (t3 t2)2 a1(C1C2)C1a1(4.15) This series of smoothly connected cubic splines can be integrated twice to pro- duce a set of quintic splines in position space. Integrating (4.13), (4.14) and (4.15) once, and assuming zero velocity at the onset of the motion, provides quartic ve- locity splines. 
Integrating them once more yields position splines of the AHP-based 48 motion: x1(t) =  a1t 5 10t31 + a1t4 4t21 +0 (4.16) x2(t) = a1t5 10(t2 t1)3 (1+C1) a1t4 4(t2 t1)2 (1+C1) + a1t2 2 + ẋ1 f t+ x1 f (4.17) x3(t) = a1t5 10(t3 t2)3 (C1C2) a1t4 4(t3 t2)2 (C1C2) + a1C1t2 2 + ẋ2 f t+ x2 f (4.18) Here, ẋ1 f , ẋ2 f , x1 f , and x2 f represent final values of ẋ1(t), ẋ2(t), x1(t), and x2(t), respectively. The last spline, x4(t) is generated after the first three splines have been calcu- lated. This is to ensure that the ẍ3 f ; ẋ3 f ; and x3 f are used as initial conditions, and ẍ4 f = ẋ4 f = 0;x4 f = x0 as final conditions of x4(t) for a smooth returning motion to x0. To meet all six boundary conditions, a quintic Hermite spline is generated in position space as follows: x4 = (110t34 +15t44 6t54 )x3 f +(t46t34 +8t44 3t54 )ẋ3 f +( 1 2 t24  3 2 t34 + 3 2 t44  1 2 t54 )ẍ3 f +(10t34 15t44 +6t54 )x0 (4.19) Consistent with the previous nomenclature, the spline parameter, t4, represents time normalized by the total duration of x4(t). Equations (4.16) to (4.19) repre- sent an AHP-based trajectory that is continuous in position, velocity, acceleration, and jerk. A MATLAB implementation of this AHP-based trajectory generation ap- proach is outlined in Appendix D, Section D.1. 4.2.2 Real-time Implementation In this section, the experimental task from Study I is used as an example HRST scenario to demonstrate how AHP-based trajectory designs can be implemented on a real-time HRST system. In Study I, the robot’s task was to perform a series 49 of reach-retract motions while ‘interacting’ with the experimenter. By generat- ing two quintic Hermite splines (one for reach and another for retract), the task of producing human-like reach-retract motions can be automated to replace the pre-generated time-series position data used in Study I. A general quintic Hermite equation can be described as follows: x(t) = H0xi+H1ẋi+H2ẍi+H3ẍ f +H4ẋ f +H5x f (4.20) H0 = 110t3+15t46t5 (4.21) H1 = t6t3+8t43t5 (4.22) H2 = 0:5t21:5t3+1:5t40:5t5 (4.23) H3 = 0:5t3 t4+0:5t5 (4.24) H4 = 4t3+7t43t5 (4.25) H5 = 10t315t4+6t5 (4.26) Here, subscripts i and f denote the initial and final positions. Going from the robot’s home position (Rx o home) at rest (Rẋ o home = Rẍ o home = 0) to the target position (Rx o targ), the following quintic spline yields trajectories of human-like reaching mo- tions: Rx o reach(t) = H0 Rx o home+H3 Rẍ o targ+H4 Rẋ o targ+H5 Rx o targ (4.27) Rx o reach(t) = (110t3+15t46t5)Rxohome+(+0:5t3 t4+0:5t5)Rẍotarg +(4t3+7t43t5)Rẋotarg+(10t315t4+6t5)Rxotarg (4.28) The trajectory for retracting from Rx o targ to Rx o home can be expressed as follows: Rx o retract(t) = H0 Rx o targ+H1 Rẋ o targ+H2 Rẍ o targ+H5 Rx o home (4.29) Rx o retract(t) = (110t3+15t46t5)Rxotarg+(t6t3+8t43t5)Rẋotarg +(0:5t21:5t3+1:5t40:5t5)Rẍohome +(10t315t4+6t5)Rxohome (4.30) By employing the method outlined in Section 4.2.1, one can supplement this reach-retract trajectory generation system with an AHP-based conflict response 50 mechanism. When a robot starts to move using a quintic-based reaching trajectory, its launch acceleration, a1, occurs near the beginning of the robot’s motion. Since the target location and the desired speed of a reaching motion is known before the robot starts is motion, the value of a1, t1, and Rx o reach(t1) can be calculated a priori. 
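The sketch below illustrates this construction numerically. The first three acceleration segments follow Eqs. (4.13) to (4.15); the launch acceleration and its time are located by numerical differentiation of the quintic reach trajectory rather than by the closed-form expressions derived next; and the final return segment (including its duration) is a simplified stand-in for the quintic spline of Eq. (4.19), so it does not exactly reproduce the analytic return to the home position. Names and the integration scheme are illustrative.

```python
import numpy as np

# AHP ratios extracted from the R-type hesitation data (Eqs. 4.5-4.8).
C1, C2, B1, B2 = -1.40, 0.24, 0.89, 1.14

def quintic_reach(x0, xf, tf, n=1001):
    """Minimum-jerk quintic from rest at x0 to rest at xf over tf seconds."""
    t = np.linspace(0.0, tf, n)
    s = t / tf
    return t, x0 + (xf - x0) * (10 * s**3 - 15 * s**4 + 6 * s**5)

def launch_extremum(t, x):
    """Numerically locate the launch acceleration a1 and its time t1."""
    acc = np.gradient(np.gradient(x, t), t)
    i1 = int(np.argmax(acc))
    return acc[i1], t[i1]

def hermite_acc(a_i, a_f, duration, dt):
    """Cubic Hermite acceleration segment with zero end jerk (Eq. 4.9)."""
    tau = np.arange(0.0, duration, dt) / duration
    return (2 * tau**3 - 3 * tau**2 + 1) * a_i + (-2 * tau**3 + 3 * tau**2) * a_f

def ahp_profile(a1, t1, dt=1e-3):
    """Acceleration profile of an AHP with launch acceleration a1 reached at t1."""
    a2, a3 = C1 * a1, C2 * a1
    segs = [hermite_acc(0.0, a1, t1, dt),        # launch:  0  -> a1  (Eq. 4.13)
            hermite_acc(a1, a2, B1 * t1, dt),    # halt:    a1 -> a2  (Eq. 4.14)
            hermite_acc(a2, a3, B2 * t1, dt),    # yield:   a2 -> a3  (Eq. 4.15)
            hermite_acc(a3, 0.0, B2 * t1, dt)]   # return:  a3 -> 0   (duration assumed)
    return np.concatenate(segs)

def integrate(acc, dt=1e-3, x0=0.0):
    """Twice-integrate the acceleration profile into a position trajectory."""
    vel = np.cumsum(acc) * dt
    return x0 + np.cumsum(vel) * dt

# Example: derive a1 and t1 from a 30 cm quintic reach lasting 1 s, then build the AHP.
t, x = quintic_reach(0.0, 30.0, 1.0)
a1, t1 = launch_extremum(t, x)
x_ahp = integrate(ahp_profile(a1, t1))
```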
First, the time values of acceleration extrema can be found by calculating the third derivative of (4.28), R ...x oreach(t): R ...x oreach(t) = (36+192t180t2)Rẋoreach(0)+(9+36t30t2)Rẍoreach(0) +(324t+30t2)Rẍoreach(t f )+(24+168t180t2)Rẋoreach(t f ) +(60360t+360t2) (4.31) Subsequently, Equation 4.31 can be re-organized as a second order polynomial: R ...x oreach(t) = At 2+Bt+C (4.32) A = 360Rxoreach(0)180Rẋoreach(0)30Rẍoreach(0) +30Rẍ o reach(t f )180Rẋoreach(t f )+360Rxoreach(t f ) (4.33) B = 360Rx o reach(0)+192Rẋ o reach(0)+36Rẍ o reach(0) 24Rẍoreach(t f )+168Rẋoreach(t f )360Rxoreach(t f ) (4.34) C = 60Rxoreach(0)36Rẋoreach(0)9Rẍoreach(0) +3Rẍ o reach(t f )24Rẋoreach(t f )+60Rxoreach(t f ) (4.35) Substituting R ...x oreach(t) = 0 into (4.32) and applying the quadratic formula (t = BpB24AC 2A ) yields the normalized time values of the acceleration extrema. The minimum positive solution is t1 = t1=t f . Then, the value of t1 can be substituted into the original quintic reaching tra- jectory, (4.28), to determine the position of the robot at t1, Rx o reach(t1) = x1 f . Sub- stituting this value into the second derivative of Rx o reach(t) (4.36) yields the value of a1. Rẍ o reach(t) = (3t12t2+10t3)Rẍoreach(t f )+(24t+84t260t3)Rẋoreach(t f ) +(60t180t2+120t3)Rxoreach(t f ) (4.36) 51 Using this approach, one can determine parameters a1 and t1 for a hesitation trajectory using the same initial and final conditions used to generate the quintic- based reaching trajectory. Once both a1 and Rx o targ are known, the coefficients for splines x2(t);x3(t);and x4(t) for the hesitation trajectory can be calculated using (4.17) to (4.19). If a resource conflict is detected before t1, then the real-time trajectory con- troller for the robot can be directed to follow the splines x2(t);x3(t); and x4(t) by switching its reference trajectory from Rx o reach(t) to x2(t) at t1. This allows the robot to make a smooth transition from its quintic reaching trajectory to an AHP-based trajectory without requiring a complex high speed trajectory controller to transition the motions, such as the one described in [39]. The final deliverable of the AHP method is an open-source package written in C++ and Python, and currently available online for ROS-based systems. The code and other implementation details are outlined in Appendix D, Section D.2. 4.3 Discussion This chapter presented a robot end-effector trajectory design specification, AHP, which derives robot trajectories from human hesitation motions. Using this ap- proach, only the kinematic output of human hesitation behaviours are considered in designing robot hesitation trajectories. The AHP describes hesitation motions as a proportional relationship between an end-effector’s launch acceleration to the abruptness of its halting and yielding behaviour. Hence, this model of hesitation implicitly specifies magnitudes of jerk during the halting and yielding phases of the motion. The process of extracting the AHP from the collection of trajectories was lim- ited by the number of sample trajectories available. Since only two subjects’ R-type motion trajectories are used to generate the key ratios, the AHP is only representa- tive of a small subset of hesitation gestures. However, the main aim of this char- acterization process is to extract trajectory features that can be implemented on a robot to generate human-recognizable hesitation motions. 
Hence, even though the AHP does not capture trajectory features common to all hesitation gestures, it is sufficient as a trajectory specification for generating one type of hesitation gesture 52 and can be implemented for any future collections of similar hesitation motions. 4.3.1 Limitations A key limitation of AHP is in the real-time implementation of the designed trajec- tory. Using the method introduced in Section 4.2.2, the decision to hesitate has to be made before t1, such that, at t1, the reference trajectory for the robot can switch from Rx o reach(t1) to the start of the second spline of the AHP, x2(0). However, it is realistic to expect a collision to become imminent when t1 has passed. In such a case, the robot would continue to follow Rx o reach(t1) and undesirably cause a col- lision. To address this safety issue, the real-time HRST experiment presented in Study III (Chapter 6) uses a real-time implementation of AHP in conjunction with an abrupt collision avoidance mechanism. It is possible, however, to extend the allowable period of hesitation decision- making from t1 to (t2 d ). Since the Rẍoreach(t) from t1 to t2 share the same ac- celeration a1 at t1 upon which they both start to decelerate, it is possible to use an interpolation function to make a smooth transition from Rẍ o reach(t) to x2(t) at some d seconds before t2 is reached. However, the acceptable lower bound for the value of d < (t2 t1) is unknown. Hence, this technique requires further investigation and testing. A novel online trajectory generation algorithm recently proposed by Kröger may also help address this problem [39]. As mentioned in previous sections, the P-type hesitation gestures have not been characterized in AHP. This limits the way in which a robot can hesitatingly re- spond to its observer, at least based on the available dataset. While R-type mo- tions demonstrate an immediate yielding of the resource in conflict, P-type motions could be used to communicate a robot’s persistent ‘intent’ to access the resource as soon as it becomes available. Collection and analysis of a larger set of human P-type hesitation gestures is needed to expand the hesitation trajectory design spec- ification. 4.4 Summary This chapter described the process of extracting key features from human hesitation trajectories collected in Study I, and presented these features as a trajectory design 53 specification. Qualitative observations of human motions yielded a typology of human motions, in which two different types of hesitations were identified. Of the two, there were more recorded trajectories of retract type (R-type) motions available for this investigation than there were the pause type (P-type). Hence, R-type motions were used to develop the hesitation trajectory specification. The main differences between R-type motions and successful reach-retract (S- type) motions were observed in terms of the relative magnitudes of and dura- tions between acceleration extrema with respect to the launch acceleration. R-type motions typically demonstrate a slightly smaller halting ratio and a significantly smaller yielding ratio than those of S-type motions. The AHP captures these ratio differences as a hesitation trajectory design specification. This chapter described how AHP-based motions can be generated offline as well as during a real-time HRST. 
Although there was some empirical evidence that the halting and yielding ratios of R-type hesitations are different from those of S-type motions, the actual efficacy of AHP-based trajectories in providing a communicative function was not tested in Study I. In particular, experimental work is needed to determine whether human observers of AHP-based end-effector trajectories would perceive the robot to be hesitating, and what range of launch acceleration values would yield human-like hesitation motions. Study II presented in the next chapter addresses this need by implementing AHP-based motions for an online Human-Robot Interaction (HRI) survey. 54 Chapter 5 Study II: Evaluating Extracted Communicative Content from Hesitations The previous chapter described the characteristic features of human hesitation ges- ture trajectories. These features were modeled and presented as Acceleration-based Hesitation Profile (AHP). However, given the small number of samples used in gen- erating the AHP, it is necessary to test whether untrained observers working with the robot will perceive AHP-based robot trajectories as hesitations. In particular, although it is unlikely that the full spectrum of the parameter values used to pro- duce AHP-based trajectories will be perceived as being hesitant, it is unknown what range of launch accelerations and their associated temporal parameters can be used to generate human-recognizable robot hesitation motions. Hence, to test the efficacy of AHP, this chapter presents a study that empiri- cally compares human perception of AHP-based motions with three other types of robot motions. These motions are: robotic collision avoidance motions, success- ful (complete) reach and retract motions, and collisions. For convenience, herein robotic collision avoidance motions are referred to as robotic avoidance motions and successful reach and retract motions are referred to as successful motions. The study presented in this chapter, Study II, consists of an online experimental survey using video recordings of the experimenter and a robot engaged in a series 55 of reach-retract tasks toward a shared target object. The survey questions are de- signed to measure the perceived anthromimicry and hesitation of the robot motions seen in the video. In order to test effectiveness of AHP within a range of parameter values, this study focuses on testing motions generated using three different levels of end-effector (hand) launch acceleration, Rẍ o 1. These values were chosen based on recorded human motions as discussed in Chapter 3. The results of the online survey are analysed to test the following three hypotheses: H2.1. Robot end-effector motions generated using AHP convey hesitation to un- trained observers, while typical robotic avoidance behaviours do not. H2.2. Robot end-effector motions generated using AHP are perceived to be more humanlike than typical robotic avoidance behaviours. H2.3. Robot trajectories generated via AHP are robust to changes in the initial acceleration parameters with regard to their communicative properties to un- trained observers. The remainder of this chapter is organized as follows. Section 5.1 outlines details of the human-robot interaction task used in this study, video recording of the interaction, and details of the online survey. Section 5.2 presents the results of the survey. Sections 5.3 and 5.4 discuss and summarize these results. 
5.1 Experimental Methodology This study is comprised of a four-by-three within-subjects experiment that em- ployed four types of robot motions and three levels of launch accelerations. The experiment employed the same 6-DOF robot introduced in Chapter 3. The experi- menter created the Cartesian trajectories for 12 robot motions using the method de- scribed in Section 5.1.1, below. The robot followed these reference trajectories to generate the motions. At the same time, the experimenter performed a coordinated reaching motion to provide context for the robot’s motions. The experimenter’s motions were also based on the recorded motions described in Chapter 3. The robot and experimenter motions were video recorded following the method outlined in 56 Section 5.1.2. Using the online survey instrument described in Section 5.1.3, re- spondents watched and provided their perception feedback on the video recorded robot motions. Collected data were analysed according to the statistical methods described in Section 5.1.4. 5.1.1 Trajectory Generation To simplify the trajectory generation process, all 12 motions were restricted to two-dimensional (XoZo-plane) trajectories. The frame definition is consistent with Study I (see Figure 3.3). The reference trajectories in each axis were independently generated and, hence, are discussed separately in this section. The motion was dis- played to viewers in a two-dimensional video format (parallel to the XoZo-plane). The loss of the third dimension was not expected to be noticeable since only a rela- tively small amount of medio-lateral motion (in Yo) was were observed in the data simplification process described in Section 4.1.1. As outlined in Chapter 4, only two of the four parameters (Rẍ o 1, Rx o targ, t1, and t f ) are needed to specify an AHP. For practical reasons, the location of the target, Rx o targ, was set at the maximum reach distance of the robot. Since it is hypothe- sized that the trajectory profile, and not the overall time to motion completion (t f , and indirectly t1), is the key factor containing communicative content, the launch acceleration parameter, Rẍ o 1, was chosen as the key control variable in this study. In order to produce high fidelity motion on the robotic platform, within the kinematic limitations outlined in Appendix A, the robot followed the reference trajectories at a rate five times slower than the desired speed of motion. Similar to the approach described in Section 3.1.2, video recordings of the motions were then sped up five times. The slow reference trajectories were generated as a set of quintic splines and sampled at 10 Hz. This sampling rate results in a high fidelity frame rate of the motions (50 Hz) upon speeding up the recorded videos. This is above the standard rate of displaying visual information (24 to 30 fps). The frame rate of the final videos were downgraded to 30 fps. This study used the same control scheme employed in Study I to control the robot. See Section 3.1.2 and Figure 3.4 for more detail. 57 Principal Xo-axis Trajectories The same quintic reference trajectories were used to create both successful and collision motions. These trajectories consisted of two Hermite quintic splines that yield human-like minimum jerk motion [21]; one for reach and another for retract phase of the full motion. Details of these trajectories are presented in Section 4.2.2. 
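As a concrete illustration of the trajectory generation just described, the following minimal Python sketch samples a single quintic segment that meets specified boundary conditions. It is a simplified stand-in, not the code used to drive the robot, and the example boundary values at the end are illustrative.

import numpy as np

def quintic_segment(x0, v0, a0, xf, vf, af, tf, rate=10.0):
    """Sample a quintic polynomial that satisfies the given position,
    velocity, and acceleration boundary conditions over duration tf.
    Returns (times, positions); rest-to-rest segments of this form give
    the minimum-jerk profile used for the reach and retract phases."""
    # Solve for the six polynomial coefficients from the boundary conditions.
    M = np.array([
        [1, 0,   0,      0,       0,        0],
        [0, 1,   0,      0,       0,        0],
        [0, 0,   2,      0,       0,        0],
        [1, tf,  tf**2,  tf**3,   tf**4,    tf**5],
        [0, 1,   2*tf,   3*tf**2, 4*tf**3,  5*tf**4],
        [0, 0,   2,      6*tf,    12*tf**2, 20*tf**3],
    ])
    b = np.array([x0, v0, a0, xf, vf, af])
    coeffs = np.linalg.solve(M, b)

    t = np.arange(0.0, tf + 1e-9, 1.0 / rate)     # e.g. 10 Hz sampling
    powers = np.vstack([t**k for k in range(6)])  # 1, t, t^2, ..., t^5
    return t, coeffs @ powers

# Illustrative example: a 2 s reach from rest at 0 m to rest at 0.4 m.
times, reach = quintic_segment(0.0, 0.0, 0.0, 0.4, 0.0, 0.0, tf=2.0)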
To generate robotic avoidance behaviours, the peak positions of the two quintic trajectories used for successful motions were manually modified such that the robot stops at the same distance away from the target as it would for an analogous AHP-based trajectory, and then retracts. The AHP-based motions were generated via the four-quintic-spline generation approach described in Section 4.2.1. The location of the target, ${}^{R}x^{\,o}_{targ}$, together with the approximated minimum, median, and maximum hand accelerations, ${}^{H}\ddot{x}^{\,o}_{1}$, obtained from Chapter 3, provided the parameters for the three AHP-based trajectories. These values are ${}^{H}\ddot{x}^{\,o}_{1}$ = {9.5, 16.5, 23.5} m/s², respectively. Figure 5.1 shows the generated reference trajectories of the four types of robot motions.

Figure 5.1: Reference trajectories generated for Study II. The same trajectories were used to generate both the successful and collision conditions.

Supplemental Zo-axis Trajectories

The reference trajectory in the Zo-axis for all four types of motions consisted of four smoothly connected quintic splines. Based on the motions observed in Study I (see Figure B.3), the first spline connected the initial location of the robot to a constant maximum vertical location, the second spline ran from the maximum vertical location to the minimum vertical location, the third spline from the minimum to the maximum location, and the fourth spline from the maximum back to the robot's initial location (see Appendix D for the algorithm used to generate these trajectories).

For the AHP-based motions, the temporal end positions of these splines matched those of the four splines in the Xo-axis trajectory. For successful motions and collisions, the time at which the robot wrist reaches its maximum height was set as 1.75 t1, approximately matching the time at which AHP-based motions reach their maximum height. This was mirrored in the retracting phase of the successful motions and collisions. Analogous to the Xo-axis trajectory generation, the Zo-axis trajectories for the robotic avoidance motions were generated by manually modifying a successful motion trajectory to hold the robot's wrist at the same location before retracting.

The stopping positions of the collisions and the successful motions were the same and were set to be in physical contact with the target object, ${}^{R}x^{\,o}_{targ}$. Stopping positions of the robotic avoidance and AHP-based motions were kept at the same height, approximately 2 to 4 cm above the upper surface of the experimenter's hand.

5.1.2 Video Capture

In all 12 videos, an experimenter stood facing the robot with an object located on a table between them. Similar to the HRI scenario used for Study I, the experimenter enacted a series of reach-retract motions toward the target object as though sharing the object and triggering different behaviours of the robot. To show the human-robot interaction context of the robot gestures produced, a human hand rested, reached for, and retracted from the target before and/or after the robot made its gesture. All videos showed at least one human hand motion. All of the human motions were successful in hitting the target object, and care was taken to produce consistent reaching speed and acceleration for all videos.

Each unlabeled video contained only one of the 12 robot motions. Counterbalancing was done to avoid ordering effects when viewing the videos by organizing the videos in seven pseudo-random orders. Thus, seven different versions of the same online survey were produced.
Online respondents were presented to only one of the seven versions of the survey. These pseudo-random orders were chosen to adhere to the following: at least one version of the survey presents one of the four types of motions first; at least one shows one of the three other types of motion (successful, robotic avoid- ance, collision) just before the first recording of AHP-based motion is presented; and at least one shows one of successful, AHP-based, or collsion type of motion just before the first recording of robotic avoidance motion. 5.1.3 Survey Design The participants answered the following four survey questions for each video re- garding their perception of the robot motions: Q1 Did the robot successfully hit the target in the middle of the table? (1.Not successful - 5. Successful) Q2 Please rate your impression of the robot’s motion on the following scale: (1.Not hesitant - 5.Hesitant) Q3 Please rate your impression of the robot’s motion on the following scale: (1.Machinelike - 5.Humanlike) Q4 Please rate your impression of the robot’s motion on the following scale: (1.Smooth - 5. Jerky) Question 2 was aimed to test hypothesis H2.1 (conveying hesitation), while Question 3 was aimed to test H2.2 (human-like motion). Question 3 tests human perception of anthropomorphism from the robot motions and is adopted from the Godspeed questionnaire [4]. Questions 1 and 4 are distractors chosen to mitigate possible priming effect on participants’ responses to questions 2 and 3. Figure 5.2 60 shows a screenshot of one of the 12 pages of the survey shown to the participants. Appendix C presents the survey and its human consent form in more detail. The participants were able to play the video as many times as they wished be- fore moving on to the next video. Participant recruitment involved social media tools including Facebook, Twitter, websites and blogs. Participants were not com- pensated for their participation. The experiment was approved by the University of British Columbia Behavioural Research Ethics Board. 5.1.4 Data Analysis Analyses of the online survey results included a repeated-measures ANOVA and a post-hoc Bonferroni correction on hesitation (Q2) and anthropomorphism (Q3) scores. A significance level of a = 0.05 was used for all inferential statistics. To test H2.1 – that AHP-based motions demonstrate significantly higher hesi- tation scores compared to the scores of other types of motions – the ANOVA and the post-hoc analyses of the hesitation score were conducted with motion types as a factor. Significant findings from these analyses will support H2.1. Likewise, analogous analyses were conducted on the anthropomorphism scores. Significant findings from these results will provide empirical support for H2.2 that AHP-based motions are perceived to be more anthropomorphic than robotic avoidance mo- tions. Considering the three levels of launch accelerations as a factor, a repeated- measures ANOVA on both the hesitation and anthropomorphism scores of AHP- based motions provides empirical testing of H2.3. Lack of significant differences in the two scores will support H2.3 that AHP-based motions are perceived to be hesitant and anthropomorphic regardless of the launch acceleration values used, as long as these values fall within those found in the natural human motion. 61 Figure 5.2: Screenshot from one of the twelve survey pages shown to online participants. 
Each of the videos embedded in the survey showed the experimenter and the robot reaching for the shared object in the centre of the workspace.

5.2 Results

A total of 58 respondents participated in the survey. Respondents were allowed to proceed through the survey with missing responses to questions. Table 5.1 presents the ANOVA results along with a summary of Mauchly's tests of sphericity. All sphericity violations were corrected using the Greenhouse-Geisser approach.

This section discusses the results of the hesitation and anthropomorphism scores (Q2 and Q3) only, as they are pertinent to testing the hypotheses for Study II. Analyses of the perceived jerk and success scores (Q1 and Q4), therefore, are presented in Appendix C.

5.2.1 H2.1: AHP-based Robot Motions are Perceived as Hesitant

The ANOVA of hesitation scores (Q2) across the motion types yields a significant result (p < .0001, see Table 5.1). Post-hoc analyses show that human perceptions of hesitation from AHP-based motions are significantly higher than those of robotic avoidance motions (p < .02), providing strong empirical support for hypothesis H2.1. Post-hoc analyses also provide empirical evidence that AHP-based motions convey hesitation more than successful motions (p < .001) and collisions (p < .001). Figure 5.3 (a) summarizes these results.

5.2.2 H2.2: AHP-based Robot Motions are More Human-like than Robotic Avoidance Motions

ANOVA results on the anthropomorphism scores (Q3) are also significant (p < .0001), indicating that at least one of the motion types is perceived as significantly more anthropomorphic than the others. Post-hoc analyses indicate that the anthropomorphism scores of AHP-based motions, successful motions, and collisions all show an above-neutral mean score and are not significantly different from each other. However, the scores of all three motion types are significantly higher than those of the robotic avoidance motions (all with p < .001). This supports hypothesis H2.2 that motions generated using AHP are perceived to be more anthropomorphic than typical robotic avoidance motions. Figure 5.3 (b) graphically summarizes these results.

Table 5.1: Two-way repeated-measures ANOVA results comparing perception of hesitation and anthropomorphism across motion types and accelerations. Note the lack of significant differences found in hesitation and anthropomorphism scores across accelerations. Measures showing significant ANOVA results are indicated with the following suffix: *** p < .001.

Motion Type:
Hesitation***: F(2.49, 104.48) = 132.83, p < .0001; Mauchly's W(5) = .73, p < .05, ε = .83
Anthropomorphism***: F(2.32, 97.54) = 12.45, p < .0001; Mauchly's W(5) = .63, p < .01, ε = .77

Acceleration Level:
Hesitation: F(1.75, 89.07) = 1.58, p = .21; Mauchly's W(2) = .86, p < .05, ε = .87
Anthropomorphism: F(1.70, 86.44) = 1.05, p = .34; Mauchly's W(2) = .82, p < .01, ε = .85

5.2.3 H2.3: Non-Expert Observations of AHP-based Motions are Robust to Changes in Acceleration Parameters

The online survey responses also support H2.3 that, within the natural range of human motion recorded in the study described in Chapter 3, variations in the launch accelerations used to generate AHP do not significantly affect the perception of hesitation or anthropomorphism in these motions. As shown in Table 5.1, the hesitation and anthropomorphism scores of AHP-based motions, when compared across the three levels of launch accelerations, show a lack of significance (p = .21 and .34, respectively).
This indicates that the hesitation and anthropomorphism scores of AHP-based motions are not significantly affected by the launch acceleration values used to generate the motions, as long as the values are within the range of natural human motion (9.5 to 23.5 m/s²).

A significant interaction between motion type and acceleration was observed in the anthropomorphism score, F(4.68, 196.65) = 4.03, p < .005, but the effect size was small. The partial eta-squared¹ score was only 0.08, indicating that the interaction effect only accounted for 8% of the overall variance in anthropomorphism. No significant interaction was observed with hesitation scores. Acceleration, by itself, demonstrated a negligible effect on both hesitation and anthropomorphism scores (partial eta-squared of 0.03 and 0.05, respectively). That of the motion types, however, was much larger (partial eta-squared of 0.76 and 0.56 for hesitation and anthropomorphism, respectively). This implies that the type of motion has a larger effect on people's perception of hesitation and anthropomorphism in a robot's motion than the launch acceleration values used to generate the motions.

¹ Partial eta-squared, η², describes the proportion of variance in the data that can be attributed to the factor in focus.

Figure 5.3: a) Overview of hesitation scores from the five-point Likert scale question, demonstrating significantly higher hesitation scores for AHP-based hesitation motions; b) Overview of anthropomorphism scores from the five-point Likert scale question, demonstrating that AHP-based hesitation motions are perceived to be more anthropomorphic than robotic avoidance motions.

5.3 Discussion

The results of this study provide strong evidence that AHP-based motions are perceived to convey hesitation significantly more than the other types of motions tested. However, it is important to note that robotic avoidance motions are also perceived to convey hesitation. As shown in Figure 5.3, the mean hesitation scores of robotic avoidance motions are above the neutral score, although still significantly below those of AHP-based motions. This emphasizes the fact that the use of AHP-based motions represents one of many approaches to generating motions that people perceive as hesitant.

However, the anthropomorphism scores show a clear perception difference between robotic avoidance and the other types of motions. Robotic avoidance motions received below-neutral mean scores that are significantly lower than those of the other motion types. This, together with the high anthropomorphism score that AHP-based motions received, demonstrates that motions designed using the AHP trajectory specification produce highly human-like gestures comparable to minimum-jerk reaching trajectories.

5.3.1 Limitations

It is important to note that the same channel used to recruit subjects in Study I was also used in this study. As the survey instrument did not collect nor filter online respondents using IP addresses or other identifying information, it is possible that some respondents from the online survey of Study I may also have participated in Study II. Although there are structural differences between the two studies' online surveys, responses from such participants are likely to be biased.

In addition, this study only tested AHP-based motions generated using a human-like range of the launch acceleration parameter (9.5 to 23.5 m/s²). Whether the findings from this study will hold for launch acceleration parameter values outside the tested range remains untested.
Hence, given that many industrial robots do not operate at such high levels of acceleration, further testing in a lower acceleration range needs to be conducted.

5.4 Summary

Previous chapters demonstrated that naturally occurring human hesitation gestures could be empirically identified, and that common features of these gesture trajectories could be modeled with AHP. The study presented in this chapter investigated whether AHP can serve as an effective trajectory design specification for generating human-like hesitation gestures on a robotic manipulator.

Results from this study provide statistically significant evidence (at the 95% confidence level) that non-expert observers perceive AHP-based robot motions as conveying hesitation significantly more than robotic avoidance motions. AHP-based motions were also perceived to be as human-like as minimum-jerk reach-retract motions of a robot, whereas robotic avoidance motions were not. This finding held regardless of which of the three different parameter values was used to generate the AHP-based motions.

Based on these findings, the following questions remain: Can non-experts recognize AHP-based behaviour of a robot as a hesitation while interacting with the robot in situ? If so, what are the implications of implementing human-recognizable hesitation gestures on a robot as a resource conflict response mechanism in a human-robot collaboration context? These important questions are explored in the in-person HRI study discussed in the next chapter.

Chapter 6

Study III: Evaluating the Impact of Communicative Content

The two studies presented in Chapter 3 and Chapter 5 provide strong empirical evidence that humans perceive hesitation in robot motions that a) mimic human wrist motions, and b) are generated using an Acceleration-based Hesitation Profile (AHP). However, these studies did not explore the effect of such anthromimetic hesitation behaviours in a Human-Robot Shared-Task (HRST) context. Considering the larger goal of improving human-robot collaboration, this chapter investigates whether:

a) human-robot team performance will be better when the robot uses AHP-based hesitation gestures during collision avoidance as opposed to typical, abrupt collision avoidance motions, and

b) human teammates in human-robot teams will have more positive feelings toward a robot teammate when the robot uses AHP-based hesitation gestures than when the robot uses abrupt collision avoidance motions.

In the previous studies, the survey respondents' primary task was to watch video recordings of the robot motions and report on their observations. The study presented in this chapter, Study III, explores the two questions listed above while investigating whether non-expert recognition of AHP-based motions will hold in an in-person, real HRST context. In order to produce high-fidelity motions at an anthropometric range of speeds, a 7-DOF robot (WAM™, Barrett Technologies, Cambridge, MA, USA) capable of high-acceleration motions (peak of 20 m/s²)¹ was used, instead of the 6-DOF CRS A460 robot employed in Studies I and II. Similar to the experimental task employed in Study I (Chapter 3), a human subject reached for a shared resource. However, rather than observing robot motions from video recordings of human-robot interaction, the subjects directly interacted with the robot.
When, by chance, the two agents reached for the shared resource at the same time, the robot responded in one of the following three ways: (i) it ignored the presence of the resource conflict and continued reaching for the resource (Blind Condition), (ii) it hesitated using an AHP-based trajectory (Hesitation Condition), or (iii) it triggered an immediate stop (Robotic Avoidance Condition). These conditions are analogous to the motion types investigated in the online survey of Study II (Chapter 5).

To investigate the utility of implementing hesitation gestures as a resource conflict response mechanism, this chapter considers the following hypotheses:

H3.1. Robot hesitation motions, designed using AHP, are identified as hesitations when the motions are observed in situ.

H3.2. Non-expert human users perceive a robot more positively when the robot responds with AHP-based hesitation gestures compared to when it does not.

H3.3. A human-robot team yields better performance in a collaborative task when the robot uses AHP-based hesitation gestures than when it does not.

The following sections of this chapter describe the details of the in situ experiment (Section 6.1), outline the results (Section 6.2), and discuss the implications of the findings and conclude (Section 6.3 and Section 6.4).

¹ This value is provided by the manufacturer as the peak end-effector acceleration with a 1 kg load on the robot's end-effector. The robot has a maximum end-effector velocity of 3 m/s. Other technical specifications of the robot are available in [2].

Figure 6.1: Overview of the Study III experiment process. Only five subjects participated in the additional gesture identification experiment.

6.1 Method

An interactive HRST was devised for the three-by-two (condition × encounter) within-subjects experiment. This section describes the details of the experiment in Section 6.1.1, outlines the measurement instruments used in this study in Section 6.1.2, and presents the overall robotic system devised for the experiment in Section 6.1.3. Section 6.1.4 describes the data analysis method employed in this study.

In total, 33 subjects (female: 13, male: 20) were recruited by posting a call for volunteers across the University of British Columbia campus and on the author's lab website. The advertising materials are presented in Appendix C. The age of the participants ranged from 20 to 52 (M = 26.83, SD = 7.24), and they were mostly unfamiliar with robots in general (M = 1.42, SD = 0.58, from a five-point Likert scale measure, 1 = not familiar at all, 5 = very familiar). By chance, all of the subjects were right-handed. The experiment took place in a lab environment where the experimental area was surrounded by curtains to mitigate the effects of extraneous visual cues.

6.1.1 Experimental Task and Procedure

There were three phases to the experiment: the pre-experiment, the main experiment, and the post-experiment. Figure 6.1 shows an overview of the experimental procedure. This section outlines each of the phases in detail.
Pre-Experiment

Prior to starting the experiment, the experimenter informed the subjects that the task might result in physical contact with the robot, which was covered in soft safety padding. The study was approved by the University of British Columbia Behavioural Research Ethics Board. The consent form and questionnaires used for this study are available in Appendix C. The subjects signed an informed consent form and completed a pre-experiment questionnaire to provide demographic information. The details of the main experiment were then explained to the subjects. All subjects were invited to touch the robot's padded end-effector to mitigate the fear of potentially colliding with the robot. As well, in order to avoid mistakes caused by a lack of understanding of the task, all subjects went through a training session, performing the task two to three times prior to starting the experiment. After this training session, the experimenter initialized the robot to its starting position.

Main Experiment

The experimental task was designed to represent a simplified assembly line task. The subject's task was to pick up marbles from the marbles bin one at a time, and perform an "assembly" by combining each marble with a shape from the shapes bin according to the example marble-shape pairs displayed on the table. The robot's task was to inspect the marbles bin at intervals during the task. This section outlines the experimental setup, details of the human's and the robot's tasks, and the four human states used to detect the occurrence of resource conflicts.

Experimental Setup. The subject sat opposite the robot facing the workspace setup as shown in Figure 6.2. The subject wore a ring attached to a cable potentiometer (SP1-50, Celesco, Chatsworth, CA, USA) on his/her dominant hand at all times while carrying out the experimental task. Data from the cable potentiometer was used to monitor the approximate extension of the subject's hand during the trials and to identify the subject's task state.

At the start of each trial, the marbles bin located in the centre of the workspace contained twenty marbles (ten clear and ten blue). A shapes bin was placed in front of the subject, representing a parts bin in an assembly line, assigned to the human worker. The bin contained small foam items in various shapes (heart, circle, triangle, rectangle), colours (blue, red, purple, yellow, pink), and sizes (large, medium, small). Two pairs bins located on either side of the shapes bin were designated to contain finished products, i.e., pairs comprising a marble and a shape.

Figure 6.2: Overview of the experimental setup for Study III. Subjects sat across from the robot. The subject's task was to pick up marbles from the marbles bin one at a time and "assemble" each with a shape from the shapes bin according to the example marble-shape pairs. The robot's task was to inspect the marbles bin. The robot is shown in its initial, ready-to-reach position.

Human Task. Once cued by the experimenter, the subject's task was to pick up each marble, one at a time, from the marbles bin using the instrumented dominant hand, pair the marble with a shape object according to the examples shown, and place the pair into the correct pairs bin. The subject used the non-dominant hand to pick up the foam shapes. The foam shape could be of any colour and size as long as the shape matched that of the respective example pair. To mitigate possible training effects, the examples changed at the beginning of every trial in random order.
The following rules were explained to the subject: any instance of collision with the robot, and any pairing mistakes made during the task, resulted in a penalty score for the team. Pairing mistakes were not to be corrected.

Robot Task. The robot's task was to move back and forth between its initial position and the marbles bin to "inspect" the bin fifteen times, thereby sharing the marbles bin with the subject. The robot was programmed to monitor the subject's reaching speed and match this speed in its own motions. The subjects were aware of this before the beginning of the experiment. The details of this relationship between human motions and robot motions are described in the following section. When a resource conflict occurred, the robot responded in one of the following three ways, depending on the condition assigned to the trial (see below).

• Blind Condition: Regardless of the occurrence of a human-robot resource conflict, the robot continued to reach for the shared resource. This resulted in collisions or near-collision situations between the two agents.

• Hesitation Condition: Upon occurrence of a human-robot resource conflict, the robot followed an AHP-based trajectory to exhibit a human-like hesitation gesture and then returned to its initial position. Immediately after returning, it attempted to reach for the shared resource again.

• Robotic Avoidance Condition: Upon occurrence of a human-robot resource conflict, the robot abruptly stopped and then retracted back to its initial position. Similar to the Hesitation Condition, on return, the robot immediately re-attempted to access the resource.

Figure 6.3 presents a flow diagram of the robot's behaviours for each of the conditions. Figure 6.4 illustrates the flow of interactions for the three conditions. All subjects encountered each of the three conditions once within the first three trials (first encounter) and once within the last three trials (second encounter) of the experiment. The order of the conditions was randomized. At the end of each trial, the subject was conducted away from the experiment area and asked to fill out a questionnaire.

Figure 6.3: Overview of the robot's behaviours in the three conditions.

Human Task States. The potentiometer readings were used to recognize the occurrence of resource conflicts. This was accomplished by identifying four subject states based on the potentiometer readings: dwelling, reaching, reloading, and retracting. The distance of the subject's hand from its starting location, $|{}^{H}\vec{d}^{\,a}_{h}|$, was measured using the potentiometer (see Figure 6.2 for the frame definition). Then a set of conditions, summarized in Table 6.1, was used to identify these four states. A resource conflict was considered to have occurred when the robot had started its motion and the human was in either the reaching or reloading state.
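The state identification summarized above, and detailed in Table 6.1, reduces to a pair of threshold comparisons on the measured hand extension. The following minimal Python sketch illustrates that logic together with the resource-conflict test; the threshold values are taken from Table 6.1, while the function and variable names are illustrative rather than those of the experimental software.

# Thresholds from Table 6.1, in metres.
D_DWELL = 0.22
D_RELOAD = 0.39

def classify_state(hand_distance, prev_state):
    """Map the measured hand extension (m) to one of the four task states."""
    if hand_distance <= D_DWELL:
        return "dwelling"
    if hand_distance >= D_RELOAD:
        return "reloading"
    # Between the two thresholds: direction is inferred from the previous state.
    if prev_state == "dwelling":
        return "reaching"
    if prev_state == "reloading":
        return "retracting"
    return prev_state  # otherwise, remain in the current in-between state

def resource_conflict(robot_is_reaching, human_state):
    """A conflict is flagged when the robot has started its reach and the
    human is reaching for, or reloading at, the marbles bin."""
    return robot_is_reaching and human_state in ("reaching", "reloading")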
The amount of time the subject spent in motions other than reaching, reloading, or retracting was considered dwell time, and mostly consisted of sorting through the shapes bin or placing finished pairs into the appropriate pairs bins. The subject's dwell time and speed of reach were reflected in the robot's motions. The robot's dwell times in between its reaches were 80% of the last human dwell time recorded. This imbalance in dwell times between the two agents helped create resource conflicts, while allowing the robot to attempt its inspections more frequently. An exact match of speed between the person's and the robot's motions would have resulted in high-speed and high-acceleration motions that were likely to be threatening to the subjects, not to mention causing mechanical stress on the robot actuators. Therefore, the robot traveled at a slower but proportional rate to that of the human: ${}^{R}t_{reach} = 4\,{}^{H}t_{reach}$, where ${}^{R}t_{reach}$ is the amount of time the robot is commanded to travel from its initial position to the marbles bin, and ${}^{H}t_{reach}$ is the duration the subject took to travel from the dwelling state to the reloading state.

Figure 6.4: Time series plots of trials with the Blind, Hesitation, and Robotic Avoidance Conditions. The plot of the Blind Condition shows collisions at approximately 5 and 9 seconds. The plot of the Hesitation Condition shows the robot's AHP-based hesitation response at 3 and 10 seconds. Triggering of avoidance behaviours is observed at 2 and 7 seconds in the Robotic Avoidance Condition plot.

Table 6.1: Conditions for identifying the four states of task-related human motion. The variables ${}^{H}d^{a}_{dwell}$ and ${}^{H}d^{a}_{reload}$ are constant thresholds set at 22 cm and 39 cm, respectively. $S_{prev}$ is the previous state of the human.

State ($S_{new}$): Condition
Dwelling: $|{}^{H}\vec{d}^{\,a}_{h}| \le {}^{H}d^{a}_{dwell}$
Reaching: $({}^{H}d^{a}_{dwell} < |{}^{H}\vec{d}^{\,a}_{h}| < {}^{H}d^{a}_{reload})$ and $(S_{prev} = \text{Dwelling})$
Reloading: ${}^{H}d^{a}_{reload} \le |{}^{H}\vec{d}^{\,a}_{h}|$
Retracting: $({}^{H}d^{a}_{dwell} < |{}^{H}\vec{d}^{\,a}_{h}| < {}^{H}d^{a}_{reload})$ and $(S_{prev} = \text{Reloading})$

Post-Experiment

After all six trials were completed, the experimenter conducted a post-experiment interview and a follow-up gesture identification experiment designed to test H3.1, as discussed below.

Post-experiment Interview. The experimenter asked each subject three interview questions. In the first question, the subjects were asked which of the six trials they liked the most. In the second question, the experimenter asked whether the subjects felt any discomfort or nervousness working with the robot. The subjects' qualitative feedback from these two questions was used to confirm the quantitative findings from the questionnaire. After the subjects answered these two questions, the experimenter explained to them the three different resource conflict response behaviours of the robot. The experimenter outlined the Blind Condition as the one in which the robot did not respond to the subject's motions at all, whereas the robot did respond to avoid collisions in the Robotic Avoidance and Hesitation Conditions. The Robotic Avoidance Condition was described as the one in which the robot stopped abruptly, and the Hesitation Condition as the one in which the robot hesitated. Afterwards, the experimenter asked the third interview question: whether the subjects had noticed the difference between the two conditions, and if so, which trial they thought was the Hesitation Condition.

Gesture Identification Experiment.
After the post-experiment interview, five of the subjects who answered affirmatively to the last interview question participated in a short additional experiment. This experiment was designed to test whether the context for observing robot motions (that is, watching a video recording of a robot and an actor, versus observing a robot whilst interacting with it) affects the accuracy of identifying AHP-based motions as hesitation gestures. These subjects were explained that the purpose of this additional experiment was to verify whether the motions they perceived as ‘hesitations’ during the main experiment were indeed motions programmed to convey hesitations. The subjects were not given any additional description of the motion differences between the Robotic Avoidance and the Hesitation Conditions. The robot was programmed to continuously attempt to reach for the marble bowl with one second rests between attempts. The subjects were asked to intentionally interrupt the robot’s reaches by reaching for the marble bin (i.e., to trigger the robot’s collision avoidance) and then to verbally label which of the two robot behaviours (hesitation or robotic avoid- ance) the robot exhibited in its collision avoidance behaviour. 6.1.2 Measuring Human Perception and Task Performance This section describes the instruments used to measure human perception of the robot and and the human-robot team performance from the main experiment phase of this study. A questionnaire was used to measure five elements of the sub- ject’s perception of the robot, and three elements of the subject’s perception of the human-robot teamwork. The total of eight perception measurements provided a rich set of human perception data to test H3.2 (a human-robot team will perform better when the robot uses AHP-based hesitation gestures than when it does not). 77 Independent of the questionnaire, the experimenter collected five task-performance- related measures. Human Perception of Robot Measures Five key measures of human perception of the robot were measured using the God- speed survey [4]. This survey instrument is not the only standard questionnaire available for evaluating various aspects of HRI, but it has been widely accepted and used within the field. It includes the following elements of human percep- tion important for this study: animacy, anthropomorphism, likeability, perceived intelligence and perceived safety. Human-Robot Team Perception Measures Unlike human perception of robot measures, standardized questionnaires for hu- man perception of human-robot teamwork have yet to be developed in HRI. Hence, the questions used in this study are borrowed from well-documented and widely used instruments in the neighbouring field of Human-Computer Interaction (HCI). In a human-computer interaction study, Burgoon et al. used the Desert Survival Problem 2 to study the impact the different elements of HCI have on the human par- ticipant’s perception of the computer as a teammate and the team’s overall perfor- mance [13, 14]. A positive scenario is defined as one in which each team member has positive perception of the other, and one that results in a positive output. Three key team measures were identified: interactive measures, social judgment mea- sures and task outcome measures. Questionnaires designed by Burgoon et al. and others comprise previously tested instruments in psychology and HCI. 
2The Desert Survival Problem is one of the most widely used methods of measuring human- human and human-computer teamwork, and was proposed by Lafferty and Eady in 1974 [41]. In this scenario-based game, typically consisting of two agents, participants are given background informa- tion about being stranded in a desert with a limited number of items they can take with them in their journey of desert survival. The participants independently rank a given list of items in their order of importance in surviving in the desert. Upon initial ranking of the items, the agents discuss each of the survival items as a team. Afterwards, the agent(s) have the option to changing the ranking of the items. Questionnaires typically follow the experimental game and measures each of the teammates’ influence on each other (measure of how many items were ranked differently post-discussion), how positive the influence is on the team’s performance (measure of how many correct answers were obtained), and also how positively a teammate perceives another member of the team. 78 Of the three team measures mentioned above, the social judgment measure is mainly borrowed from a study by Moon (no relation to the author) and Nass [48]. This measure includes elements such as credibility, dominance, usefulness and attractiveness as a partner. The two experiments presented in [48] also used the Desert Survival Problem and investigated whether people’s responses to computer personalities are similar to their responses to analogous human personalities. Considering the survey questions from [13] and [48] that measure interac- tiveness and social judgment of a partner, only HCI questions applicable to HRI were retained. These questions include measures of dominance, usefulness, and emotional satisfaction. The questionnaire used for this study is presented in Ap- pendix C. Task Performance Measures Analogous to task outcomemeasures used in Desert Survival Problem experiments, this study employed five performance measures to test the impact AHP-based mo- tions had on the human-robot team. The five performance measures are:  Human performance: The time between the experimenter’s ‘Go’ signal and the time at which the subject completed the task.  Robot performance: The time between the start of the trial and the comple- tion of the robot’s last retracting motion. The experimenter’s ‘Go’ signal coincided with the start of the trial.  Team performance: The larger of the human and robot task completion time.  Mistakes: Each misplaced marble or shape was counted as one mistake, as assessed by the experimenter at the end of each trial.  Collisions: The number of collisions as counted by the experimenter when reviewing the video recording of the experiment. 6.1.3 System Design and Implementation This section describes how the robot’s end-effector motions were generated and managed for the experiment. The experimental setup utilized the 7-DOF WAM 79 Figure 6.5: The software architecture to interface higher level decision mak- ing and control algorithm in ROS to lower level real-time control of the WAM arm using the BtClient environment. Client nodes 1 through n represent the various ROS nodes that are used to make higher level control decisions for the robot, as well as interface with the cable poten- tiometer via an Arduino. running BtClient (Barrett Technologies, Cambridge, MA, USA) as the low-level controller, and Robot Operating System (ROS) (Willow Garage, Menlo Park, CA, USA) as the high-level controller. 
To interface the high-level controller with the low-level controller, an open-source software package, WAMinterface, was used. Figure 6.5 shows the overall controller architecture for the setup. This section is organized in the following order: details of the low-level controller, the high-level controller and its management of gestures, the interfacing algorithms, and the algorithms used for monitoring the human's states.

Low-Level Controller

The real-time trajectory controller ran on BtClient. Communication between an external PC and the robot was enabled via the CANbus system outlined in Figure 6.6.

Figure 6.6: Modified from Figure 34 of Barrett's WAM user documentation (WAM UserManual AH-00.pdf). The experimental setup uses an external PC with a Xenomai real-time development platform to access the CANbus.

Control of the robot via segments of quintic splines is enabled by the Quintic Traj Preparation and Quintic Traj Generator functions residing in the BtClient system and customised for this study (see Figure 6.5 for an overview of the system). Quintic Traj Preparation prepares the BtClient system for controlling the end-effector of the robot in Cartesian space via quintic splines. Quintic Traj Generator generates quintic spline trajectories in real-time and servos the robot through the spline.

Utilizing the AHP-based trajectory implementation method introduced in Chapter 4, two quintic trajectory generators were programmed into the Quintic Traj Generator. The first generator receives endpoint conditions (position, velocity, acceleration) of a desired primary-axis (Yb) trajectory as input, and servos the robot's end-effector through a quintic spline that adheres to the boundary conditions (see Figure 6.2 for the frame definition). In this study, this function is used to generate the reaching and retracting motions of the robot. The second generator receives the coefficients of the desired quintic spline as input, also in the Yb-axis, and servos the robot's end-effector through the spline. This second generator allows calculated AHP spline coefficients to be used in servoing the robot. Both quintic trajectory generators use the following parabolic trajectory to control the Zb-axis motion as a function of ${}^{R}y^{b}_{w}(t)$:

$${}^{R}z^{b}_{w}(t) = -2\left({}^{R}y^{b}_{w}(t) + {}^{R}y^{b}_{offset}\right)^2 + 0.4\left({}^{R}y^{b}_{w}(t) + {}^{R}y^{b}_{offset}\right) + {}^{R}z^{b}_{offset} \quad (6.1)$$

High-Level Controller and Generation of Motion Trajectories

The high-level algorithms implemented in ROS managed the triggering and commanding of the robot's motions. Figure 6.7 provides a schematic overview of the ROS-based algorithms. The gesture launcher node managed the triggering of the robot's motions and the robot's dwelling behaviour. The gesture engine node received commands from gesture launcher and managed the triggering of robotic avoidance or AHP-based motions.

Once gesture engine was given the command to start its reaching motion, the human's task state was monitored to detect the occurrence of resource conflicts. If the human remained in the dwelling or retracting state while the robot was reaching, the robot continued its motion. Upon successfully completing its reach, the robot waited for one second so as to 'inspect' the marble bin, and retracted back to its starting position. If the human entered the reloading or reaching state while the robot was reaching, the system considered this to be an instance of resource conflict. The conflict was handled according to the session condition.
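The condition-specific handling is described in the remainder of this section; the following Python sketch only summarizes the dispatch logic of that decision. The condition labels and returned action names are illustrative placeholders and do not correspond to the interfaces of the ROS nodes used in the experiment.

def handle_reach_step(condition, t, t1, conflict_detected):
    """Decide what the robot does at time t of its reaching motion,
    given the session condition and the conflict flag."""
    if not conflict_detected:
        return "continue_quintic_reach"

    if condition == "blind":
        # Ignore the conflict entirely; keep reaching (may collide).
        return "continue_quintic_reach"

    if condition == "hesitation":
        if t < t1:
            # Keep reaching until the launch acceleration is reached, then
            # switch the reference trajectory to the remaining AHP splines.
            return "reach_until_t1_then_switch_to_ahp_splines"
        # Conflict detected too late for the AHP switch: fall back on the
        # abrupt stop used in the Robotic Avoidance Condition.
        return "trigger_abrupt_stop_and_retract"

    if condition == "robotic_avoidance":
        return "trigger_abrupt_stop_and_retract"

    raise ValueError(f"unknown condition: {condition}")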
As illustrated in Figure 6.3, in the Blind Condition, the robot was programmed to ignore the resource conflict. 82 Figure 6.7: The software system architecture implemented for the HRST ex- periment. The WAMServer node interfaces btClient control algorithms that operate outside of ROS to directly control the robot. Further detail of the interface and btClient algorithms are outlined in Figure 6.5. In the Robotic Avoidance Condition, a trajectory command was sent to the robot via WAMinterface to stop at a point 0.1 cm past the current position of the robot on its current path and then retract using a quintic trajectory. The 0.1 cm distance accommodated the motion occurring during the inherent communica- tion delay in the system followed by an abrupt stopping motion. A built-in linear trapezoidal trajectory controller in BtClient was used to generate this motion. This produced abrupt stopping motions without triggering the robot’s torque limits that would otherwise result in a disruptive low-level shutdown by robot’s safety sys- tems. In the Hesitation Condition, gesture engine used calculate param node to compute the launch acceleration, a1, and its temporal location, t1. Quintic coefficients for AHP splines were calculated at the start of the reaching motion via get s2 s3 coefs node. Both calculate param and get s2 s3 coefs nodes used the implementation method described in Section 4.2.2. Detailed de- scriptions of these nodes and the calculation algorithms are presented in Appendix D, Section D.2. If a resource conflict was detected prior to reaching the launch accel- 83 eration (t < t1), then the robot continued its motion until the launch acceleration was reached. At that point, gesture engine switched its reference trajectory to the first of the three remaining piecewise quintic splines (x2(t) in Chapter 4) of AHP. If the conflict was detected after the launch acceleration (t  t1), the robot resorted to the abrupt stopping behaviour designed for the Robotic Avoidance Con- dition. High-Low Level Controller Interfacing Algorithm Commands and data from ROS requiring actions from BtClient were called us- ing functions in the WAMClientROSFunctions node, which provides access to a list of client functions in ROS that trigger WAM-related function calls. The respective server module for these client calls is also ROS-based, and is called WAMServerROS. The WAMServerROS node interfaced the ROS-based func- tion calls as a client to its respective server commands in the Socket Server Commands module. This module interfaces the ROS-based client/server system to the BtClient system. Any data, including function calls, passed to the socket, were then decoded into individual input variables. Socket-WAM interface deci- phered the data using Socket Commands. The decoded and deciphered data- command set was then repackaged into a form understood by BtClient using WAM Interface node. This node directly communicated with BtClient’s control thread to control the robot. The relationship between these nodes is presented in Figure 6.5. Monitoring Human States The cable potentiometer interfaced with the system using an Arduino platform. The potentiometer measurements provided an approximate extension of the subject’s hand from the edge of the table forward. It was broadcast in ROS using an open- source rosserial package, and is shown as gate interface in Figure 6.7. 
A separate node, sensor launch, received these measurements, and identified and recorded the durations of the four states of the subject's motion (dwelling, reaching, reloading, and retracting). The decision maker node used these identified human states to infer the occurrence of resource conflicts and to decide whether to trigger the appropriate collision avoidance motions for the Hesitation and Robotic Avoidance Conditions. The gesture launcher node used the recorded durations of the subject's dwelling and reaching states to determine the respective dwell times (80% of the human dwell time) and the duration of reach for the robot (${}^{R}t_{reach} = 4\,{}^{H}t_{reach}$).

6.1.4 Data Analysis

With the three conditions of interaction (Blind, Hesitation, and Robotic Avoidance) and the two encounters (First, Second) as factors, a two-way repeated-measures Analysis of Variance (ANOVA) was conducted. For questionnaire responses, Cronbach's alpha values were calculated in order to ensure that the collected data are internally reliable. For the performance measures, Cronbach's alpha calculations are not necessary, since each performance measure consists of only one element. Where significant results were found from the ANOVA, post-hoc analyses were conducted to identify which conditions or encounters received significantly higher or lower scores. This method provided empirical testing of hypotheses H3.2 and H3.3.

In addition to the quantitative analyses, qualitative responses collected from the post-experiment interviews were analyzed to find support for the quantitative findings. Interview notes were coded by two individuals. Inter-rater reliability values were calculated via Cohen's Kappa. Upon confirming a high level of reliability, the percentages of the different categories of responses were calculated. This reflected the landscape of the subjects' qualitative feedback, and was compared with the quantitative findings.

For the follow-up experiment involving the five subjects, the total number of false positives and false negatives was divided by the total number of AHP-based and robotic avoidance motions triggered by the subjects. This provided a quantitative measure of the accuracy of human recognition of AHP-based motions.

6.2 Results

This section presents the results from the data analyses outlined in the previous section. The results are presented in the order of the hypotheses: Section 6.2.1 discusses the results relevant for testing H3.1, Section 6.2.2 discusses H3.2, and Section 6.2.3 discusses H3.3.

Of the 33 subjects recruited, data from only 24 subjects (female: 12, male: 12) are analysed and reported here, due to technical problems, subject failure to follow instructions, and/or insufficient occurrence of resource conflicts. All but two subjects had never interacted with the WAM robot before; these two subjects indicated that they had seen the robot at an exhibit or during a lab tour.

6.2.1 H3.1: Can Humans Recognize AHP-based Motions as Hesitations in Situ?

The question of whether humans recognize hesitations from AHP-based motions in situ was addressed by the last question in the post-experiment interview and by the results of the gesture identification experiment.

Inter-rater reliability for the interview question yielded a Cohen's Kappa of 0.75 (p < .001). This is considered a substantial level of consistency. For the last question of the post-experiment interview, the majority of subjects (79%) said that they noticed the difference between the two stopping motions.
The five subjects who participated in the gesture identification experiment each triggered at least five instances of stopping behaviours of the robot. In total, 52 stopping behaviours were triggered and identified in situ; 15 of these were AHP-based motions, and 37 were robotic avoidance behaviours. Six robotic avoidance motions were falsely identified as hesitations, and one AHP-based motion was identified as a robotic avoidance motion. This yields a total of 87% accuracy in the subjects' in situ identification of hesitation gestures from AHP-based motions, and supports H3.1 that AHP-based trajectories are perceived as hesitations in situ.

6.2.2 H3.2: Do Humans Perceive Hesitations More Positively?

This section presents the empirical findings of the eight perception measures collected from the questionnaire. These questions provide a broad test of the hypothesis that, when a robot responds with an AHP-based hesitation gesture upon encountering a resource conflict, humans perceive the robot more positively than when it does not. The qualitative feedback obtained from the post-experiment interview with all subjects supports the quantitative findings; both are presented below.

Human Perception Measures

Of the eight human perception measures collected, all but the perceived intelligence measure yielded an internal reliability score above 0.70. Hence, the perceived intelligence measure is excluded from the discussion. The number of items used to collect the perception measures was reduced or modified from the original items in Moon and Nass [48] and the Godspeed survey [4]. Nonetheless, internal reliability scores from the Study III questionnaire responses show values similar to those of the original measures (see Table 6.2).

Summarised in Table 6.3 are the repeated-measures ANOVA results for the seven internally reliable measures. Across the three conditions, all but the usefulness and emotional satisfaction measures show statistically significant score differences (α = .05). Table 6.4 reports the mean and standard error of the measures and their significant differences across the conditions. Figure 6.8 presents a graphical overview of the scores that show significant differences. Significant differences in scores between the first and second encounters are only found in the perceived safety and animacy measures. A summary of the measures' means, standard errors, and the presence of significant pairwise differences is presented in Table 6.5. Significance level adjustments for all post-hoc analyses were made with the Bonferroni correction. No significant interaction is found between the factors Condition and Encounter. Only the significant findings are presented below; the complete statistical results are reported in Appendix E.

Dominance. Post-hoc comparisons indicate that the subjects perceived the robot as significantly more dominant in the Blind Condition than in the Hesitation and Robotic Avoidance Conditions. No significant difference is found between the dominance scores of the Hesitation and Robotic Avoidance Conditions (p = 1.00). Figure 6.8 (a) plots the dominance scores.

Perceived Safety. Scores from the questionnaire suggest a trend that the Hesitation and Robotic Avoidance Conditions are perceived as safer than the Blind Condition (p = .10 and p = .07, respectively); however, this is not significant at α = .05. No apparent difference is found between the perceived safety of the Hesitation and Robotic Avoidance Conditions (p = 1.00).
However, there was a significant increase in perceived safety from the first encounter to the second (p = .02). Figure 6.8 (b) graphically summarises these results.

Table 6.2: Internal reliabilities of the eight self-reported measures. Only the measures with a Cronbach's alpha greater than or equal to 0.70 are analyzed and reported; all but perceived intelligence meet this requirement. Cronbach's alpha values for dominance, usefulness, and emotional satisfaction from Moon and Nass' work were 0.89, 0.80 and 0.86, respectively [48]. Multiple alpha values are reported in Bartneck et al.'s work from cited studies; their ranges are as follows: anthropomorphism (0.86 to 0.93), animacy (0.70 to 0.76), likeability (0.84 to 0.92), and perceived safety (0.91) [4].

    Measure                  Cronbach's alpha   Items
    Dominance                0.88               Aggressive, Assertive, Competitive, Dominant, Forceful, Independent
    Usefulness               0.79               Efficient, Helpful, Reliable, Useful
    Emotional Satisfaction   0.84               How much did you like this robot?, How much did you like working with this robot?, Boring (reverse scale), Enjoyable, Engaging
    Perceived Safety         0.91               Anxious, Agitated
    Likeability              0.87               Like, Kind, Pleasant, Friendly
    Animacy                  0.78               Apathetic, Artificial, Mechanical, Stagnant
    Anthropomorphism         0.81               Artificial, Fake, Machinelike, Moving Elegantly
    Perceived Intelligence   0.54               Incompetent (reverse scale), Intelligent

Table 6.3: Two-way repeated-measures ANOVA results for all seven perception measures. Four measures (dominance, emotional satisfaction, animacy, and anthropomorphism) violate the sphericity assumption, and their ANOVA results have been corrected via the Greenhouse-Geisser approach; their respective ε values are reported here. Measures showing significant ANOVA results are indicated with the following suffixes: t p < .10; * p < .05; ** p < .001.

    Condition
    Measure                    ANOVA                                          Mauchly's Test
    Dominance **               F(1.42, 31.16) = 40.92, p < .0001, ε = .71     W(2) = .56, p < .01
    Usefulness                 F(2, 44) = 0.37, p = .69                       W(2) = .96, p = .69
    Emotional Satisfaction t   F(1.48, 32.52) = 2.68, p = .10, ε = .74        W(2) = .65, p = .01
    Perceived Safety *         F(2, 44) = 4.03, p = .02                       W(2) = .98, p = .80
    Likeability **             F(2, 44) = 18.36, p < .0001                    W(2) = .76, p = .06
    Animacy *                  F(1.44, 31.66) = 4.96, p = .02, ε = .72        W(2) = .61, p < .01
    Anthropomorphism *         F(1.50, 32.88) = 4.92, p = .02, ε = .75        W(2) = .66, p = .01

    Encounter
    Dominance                  F(1, 22) = 0.01, p = .94
    Usefulness                 F(1, 22) = 2.17, p = .16
    Emotional Satisfaction     F(1, 22) = 1.89, p = .18
    Perceived Safety *         F(1, 22) = 6.46, p = .02
    Likeability t              F(1, 22) = 3.64, p = .07
    Animacy *                  F(1, 22) = 5.63, p = .03
    Anthropomorphism t         F(1, 22) = 3.86, p = .06

Likeability. Significant differences were found in likeability scores across the conditions (p < .001); both the Hesitation and Robotic Avoidance Conditions are significantly more liked than the Blind Condition (p < .001 and p < .01, respectively). Of the likeability scores, the Hesitation Condition has the highest score, although the Hesitation and Robotic Avoidance Conditions show no significant score difference (p = .50). The scores also tend to increase from the first to the second encounter, although this trend is not significant (p = .07). Figure 6.8 (c) presents these results.

Table 6.4: The mean and standard error, in parentheses, of the human perception and task performance measures, presented according to Condition. Scores from the Hesitation and Robotic Avoidance Conditions that differ significantly from those of the Blind Condition are marked according to their significance level as follows: t p < .10; * p < .05; ** p < .01; *** p < .001.
    Dependent Measure        Blind          Hesitation     Robotic Avoidance
    Human Perception
      Dominance              3.31 (.17)     2.09 (.12)     2.03 (.10)
      Usefulness             3.06 (.13)     3.16 (.16)     3.11 (.12)
      Emotional Satisfaction 3.32 (.14)     3.60 (.15)     3.59 (.14)
      Perceived Safety       3.70 (.20)     4.05t (.17)    4.09t (.17)
      Likeability            2.81 (.14)     3.73 (.13)     3.56 (.11)
      Animacy                2.73 (.14)     3.29 (.13)     3.16 (.14)
      Anthropomorphism       2.70 (.13)     3.18 (.13)     3.03 (.13)
    Task Performance
      Team Performance       99.40 (2.96)   137.99 (3.73)  135.39 (4.27)
      Robot Performance      90.20 (1.77)   136.55 (3.96)  133.48 (4.53)
      Human Performance      92.98 (3.31)   97.73 (3.94)   92.82 (2.97)

Animacy. As shown in Figure 6.8 (d), the Hesitation Condition is perceived as significantly more animate than the Blind Condition (p < .05). The Robotic Avoidance Condition, on the other hand, is not perceived as significantly more animate than the Blind Condition (p = .17). No significant difference exists between the perceived animacy of the Hesitation and the Robotic Avoidance Conditions (p = .85). The animacy measure also shows a significant increase from the first to the second encounter (p < .05).

Anthropomorphism. The robot is perceived as more anthropomorphic in the Hesitation Condition than in the Blind Condition (p < .05). However, the robot is not perceived as more anthropomorphic in the Robotic Avoidance Condition compared to the Blind Condition (p = .28). Anthropomorphism scores of the Hesitation and Robotic Avoidance Conditions do not show a significant difference (p = .49). This measure also seems to increase from the first to the second encounter, although this is not significant (p = .06). A graphical summary of the scores is shown in Figure 6.8 (e).

Table 6.5: The mean and standard error, in parentheses, of the human perception and task performance measures, divided by the first and second encounters. Scores that show significant differences between the two encounters are marked as follows: * p < .05; ** p < .01.

    Dependent Measure        First Encounter   Second Encounter
    Human Perception
      Dominance              2.48 (.11)        2.47 (.11)
      Usefulness             3.00 (.13)        3.22 (.15)
      Emotional Satisfaction 3.44 (.13)        3.57 (.13)
      Perceived Safety       3.78 (.16)        4.12 (.17)
      Likeability            3.26 (.10)        3.48 (.11)
      Animacy                2.92 (.08)        3.20 (.12)
      Anthropomorphism       2.83 (.10)        3.10 (.13)
    Task Performance
      Team Performance       128.42 (3.76)     120.10 (2.98)
      Robot Performance      124.81 (3.78)     115.36 (2.67)
      Human Performance      94.96 (2.77)      94.05 (3.44)

Figure 6.8: Overview of a) dominance, b) perceived safety, c) likeability, d) animacy, and e) anthropomorphism scores collected from five-point Likert scale questions.

Interview Question: Which Trial Did You Like the Best?

Two individuals coded the answers to the first interview question with a high level of inter-rater reliability (Cohen's Kappa of 0.88, p < .001). Seven subjects chose more than one trial, and their choices are weighted accordingly in the analysis. The Hesitation Condition was preferred the most (42%), followed by the Robotic Avoidance Condition (37%) and the Blind Condition (21%). This is consistent with the quantitative finding from the likeability measure.

Surprisingly, the subjects who chose a trial with the Blind Condition expressed that they preferred the aggressiveness of the robot. The subjects who chose trials with the Robotic Avoidance and/or Hesitation Conditions preferred the lower dominance level of the robot.
Two interesting comments were made by subjects who chose trials with the Hesitation Condition. One subject expressed that she liked the human-like features of the robot, while the other expressed a general preference for the robot's AHP-based motions. One subject (subject 17) commented, "I guess there was this hesitation happening. So I really felt like there was feedback happening here. So it was conscious of not hitting me, and at the same time, try to do its task.", and another (subject 28) said, "I liked the first one as well, when it hesitated. It seemed... it kind of reminded of someone who is really, really shy, or like a kid who is totally ready to do his job but then stopping." These comments were made before the experimental conditions were explained to the subjects.

Interview Question: Did You Feel Uncomfortable or Nervous?

Two coders processed the subjects' responses to the second interview question with a substantial level of inter-rater reliability (Cohen's Kappa of 0.75, p < .001). Over half of the subjects (58%) answered yes. The majority of these subjects (57% of all 'yes' responses) attributed this to the collision(s) with the robot. Others (36% of all 'yes' responses) indicated that they disliked instances where the robot seemed inefficient, such as taking too long to "inspect" the marbles bin, or finishing its task later than the subject. The subjects also expressed that, although they were surprised when a collision happened for the first time, the collision itself was not painful. Some even found the collision(s) rather fun and entertaining.

6.2.3 H3.3: Does Hesitation Elicit Improved Performance?

This section discusses the robot, human, and team performance measures collected from this study in order to investigate whether a human-robot team performs better when a robot uses AHP-based hesitation gestures than when it does not. The three completion times are discussed as the main factors for testing H3.3; the counts of collisions and mistakes are discussed as supplementary, yet important, factors to consider in weighing the performance of a team.

Human, Robot, and Team Task Completion Time

Across the three conditions of interaction, significant results are found in the team and robot performance measures (see Table 6.6 for ANOVA results and Table 6.4 for pairwise comparisons). For both the team and robot performance measures, the Blind Condition shows significantly better performance than the Robotic Avoidance and Hesitation Conditions. The Robotic Avoidance and Hesitation Conditions do not show significantly better or worse performance than each other.

The team and robot performance measures also show significant differences between the first and second encounters. As shown in Table 6.5, both the robot and the team have significantly faster task completion times in the second encounter. This may be explained by the decreased number of AHP-based motions and robotic avoidance motions triggered in the second encounter: in the first encounter, a total of 102 AHP-based motions and 182 robotic avoidance motions were triggered, whereas 90 AHP-based motions and 147 robotic avoidance motions were triggered in the second encounter. Hence, the larger number of stopping motions triggered in the first encounter explains the longer task completion times for the robot and, subsequently, the team in the first encounter.

On the other hand, the human's performance does not suggest significant differences across conditions and encounters.
This indicates that human task performance is not significantly affected by the different conflict response behaviours of the robot, nor are there significant training effects throughout the trials. This raises the question of how the team's and the robot's task completion times could decrease significantly from the first to the second encounter while the human's performance remained the same. Since the triggering of either of the stopping motions was solely dependent on the behaviour of the human subjects, one can postulate that the subjects learned to behave in ways that reduced the number of near-collision situations with the robot without affecting the performance of their own task.

These results fail to support hypothesis H3.3 that AHP-based hesitations in a human-robot collaboration result in improved task performance. However, they also suggest that the communicative feature of the AHP-based conflict response strategy does not hinder the performance of the robot or the team.

Table 6.6: Two-way repeated-measures ANOVA results for the three task performance measures. Measures with a significant ANOVA result are highlighted as follows: ** p < .01; *** p < .001.

    Condition
    Performance Measure   ANOVA                           Mauchly's Test
    Team ***              F(2, 46) = 74.88, p < .0001     W(2) = .91, p = .34
    Robot ***             F(2, 46) = 103.46, p < .0001    W(2) = .95, p = .59
    Human                 F(2, 46) = 1.56, p = .22        W(2) = .97, p = .74

    Encounter
    Team **               F(1, 23) = 8.51, p < .01
    Robot **              F(1, 23) = 11.64, p < .01
    Human                 F(1, 23) = 0.15, p = .67

Collisions and Mistakes

Since the robot was programmed to avoid collisions in both the Hesitation and Robotic Avoidance Conditions, no collisions occurred in those conditions. As many as six collisions occurred in trials with the Blind Condition. The difference in the number of collisions between the Blind Condition and the two no-collision conditions is statistically significant (χ²(8, N = 144) = 75.79, p < .001). Section E.3.2 of Appendix E outlines the details of the statistical analysis. Obviously, no statistical significance is found between the number of collisions in the Robotic Avoidance Condition and that in the Hesitation Condition. Nonetheless, the author believes that the occurrence of collisions should be considered in conjunction with the task completion times when weighing the overall desirability of human-robot collaboration performance. The distribution of collisions is presented in Table 6.7.

Mistakes are also a quantitative negative measure of team performance. In a real assembly-line scenario, making a mistake can be costly in terms of both the resources and the time required to correct it. There is not enough statistical power to report a significant difference across the three conditions (χ²(6, N = 144) = 3.29, p = .77). Nonetheless, in the raw data, the highest number of mistakes was found in the Blind Condition, and the lowest number of mistakes in the Hesitation Condition (see Table 6.8). Section E.3.1 of Appendix E presents the details of the non-parametric test conducted on the mistakes measure.

Table 6.7: The distribution of the number of collisions that occurred. Each cell indicates the number of subjects who collided with the robot. No collisions occurred in either the Hesitation or the Robotic Avoidance Condition.

    Condition            Number of Collisions per Trial
                         0    1    2    3    4    5    6
    Blind                18   16   10   3    0    0    1
    Hesitation           48   0    0    0    0    0    0
    Robotic Avoidance    48   0    0    0    0    0    0
Table 6.8: The number of mistakes made in each condition. Each cell contains the number of subjects who made mistakes; the total number of mistakes reported in the bottom row is the sum of the mistakes made by all subjects.

                               Blind   Hesitation   Robotic Avoidance
    1 Mistake                  4       3            4
    2 Mistakes                 1       0            1
    3 Mistakes                 1       0            0
    Total Number of Mistakes   9       3            6

6.3 Discussion

The findings from Study III, together with those of Study II, demonstrate that non-expert human subjects are able to identify the proposed AHP-based trajectories as humanlike hesitations. Compared to Study II, accurate kinematic control of the robot in Study III was difficult to achieve due to the robot's native real-time control architecture. Nevertheless, human subjects recognized the AHP-based motions as hesitations and accurately distinguished them from abrupt stopping behaviours. The fact that subjects were able to identify nuances from the brief motions of a robotic manipulator alludes to the possible usefulness of anthromimetic gestures in human-robot collaboration.

However, the impact of AHP-based hesitations in human-robot collaboration requires further investigation. The fact that only the Hesitation Condition, and not the Robotic Avoidance Condition, was perceived as significantly more anthropomorphic and animate than the Blind Condition suggests that AHP-based hesitations are perceived in a more positive light. Nonetheless, there is a lack of significance in the human perception and performance data between the two non-aggressive conditions.

The Blind Condition was the least preferred of the three experimental conditions, and the robot in the Blind Condition was perceived to be significantly more dominant and less likable than in both the Hesitation and Robotic Avoidance Conditions. This was true even though collisions with the robot did not physically harm the subjects. Hence, despite the lack of human perception differences between the Hesitation and Robotic Avoidance Conditions, the fact that the Blind Condition yielded significantly lower human perception scores emphasizes the importance of responding to, rather than ignoring, a human-robot resource conflict. The findings from this study, therefore, indicate that a robot should not ignore human-robot resource conflicts even when the imminent collisions are not expected to physically harm the human user.

Given the evidence that subjects were able to recognize the AHP-based motions as hesitations in this study, it remains unclear why the perception and performance measures were not significantly different between the Hesitation and Robotic Avoidance Conditions. It is worth considering a number of plausible explanations.

First, none of the three conditions induced physical harm to the subjects, and all subjects were aware of the possible collisions with the robot. While it is unethical to test conditions in which subjects are physically or psychologically harmed, the subjects may have quickly internalised the lack of real danger, thereby contributing to the non-significance of the results obtained. This is evidenced by the qualitative finding that some of the subjects found the collisions entertaining. This underlines the challenge of creating experiments that reflect potential real-life human-robot collaboration scenarios. The author believes that creating a perception of possible danger, while not undermining the safety of subjects, would yield results more reflective of reality.
Second, by the nature of the task, the subject's task performance was not directly affected by the performance of the robot; subjects did not need the robot to complete its "inspection" task in order to continue their own task. Hence, nothing stopped the subjects from ignoring the robot and carrying on with their own task, which, in turn, penalised the performance of the robot in both the Hesitation and Robotic Avoidance Conditions. Furthermore, the subjects were aware that they were being timed. This may have motivated the subjects to finish their tasks as quickly as possible, regardless of the robot's behaviour. Thus, it should not come as a surprise that human performance remained unaffected across conditions.

Indeed, the Blind Condition generated the best team and robot task completion times. However, this performance came with larger counts of mistakes and collisions. Considering industrial applications, in which correcting mistakes or the occurrence of collisions can seriously affect the completion time of a task and the quality of the finished product, it is important to consider these secondary performance measures. In this study, a separate performance score that encompasses the team's task completion time, mistakes, and collisions was not calculated, since the amount of penalty applied to each of the performance measures can affect the final score. However, with a fair means of calculating such penalty scores, it is possible to conjecture that the Hesitation Condition would have shown the highest performance of the three conditions.

6.3.1 Limitations

One of the key limitations of this study is in the implementation of the Hesitation Condition. Since the decision whether to trigger an AHP-based motion needs to be made before t1 is reached, all other occurrences of resource conflicts after t1 must be dealt with using robotic avoidance motions in order to avoid collisions. Hence, some of the subjects encountered both AHP-based and robotic avoidance motions in the Hesitation Condition. This may have contributed to the lack of significance in the human perception measures as well as the performance measures.

Due to the multiple layers of interfacing required for the system setup, the system did not truly operate in real-time. The ROS-based algorithms were run in a non-real-time environment, whereas the BtClient system operated in real-time. Hence, the ROS-based algorithms sometimes caused delays in sending commands to the real-time system and may have affected human perception of the robot motions.

In addition, due to the limited number of subjects recruited for this study, the number of mistakes made did not yield significant results. Retrospectively, the team performance and human performance scores might have demonstrated significant differences across the conditions if the subjects had been asked to correct their mistakes during the experiment, which would have imposed a natural performance penalty for increased mistakes. The experimental task can also be improved to yield more realistic performance and perception results. If the task were redesigned such that the robot's access to the shared resource affects the performance of the human's task, then a more collaborative human-robot social dynamic could be established.

6.4 Summary

This chapter presented strong evidence that trajectories generated using the AHP-based approach are perceived as more anthropomorphic and convey hesitation more effectively than robotic avoidance motions.
Findings from Studies II and III support this to be true whether the motion is observed via a video recording or in situ while the subject is engaged in a collaborative task with the robot. The qualitative and quantitative data from this study indicate that a robot utilizing hesitation gestures is preferred over a robot that does not respond to human motions at all, but is not significantly more liked than a robot exhibiting abrupt stopping behaviours. The robot is considered less dominant and more animate when it hesitates or abruptly stops than when it does not respond to subjects at all. The results show a more positive overall perception of the robot when it responds with hesitation than when it ignores humans, but this perception is not significantly more positive than for robotic avoidance responses.

The results of this study also show that, while the use of a "blind" robot that does not respond to human motions yields faster team completion times for a collaborative task than a robot that hesitates or abruptly stops, this may come at the cost of increased collisions and mistakes. Accounting for the number of collisions and mistakes, and for the fact that the completion time of the human's task was unaffected by the addition of AHP-based hesitation gestures, the author remains optimistic that a human-robot collaboration system with hesitation gestures will produce a positive overall increase in task performance.

Chapter 7

Conclusion

This thesis started with the question of what a robot should do when it faces a resource conflict with a human user. It was proposed that a robot could negotiate through this context-dependent problem with the human user if the robot is equipped with natural human-robot communication tools. In an attempt to build a framework that allows human-robot teams to resolve resource conflicts, this thesis focuses on developing a robot's ability to communicate its behaviour states to the user. In particular, this thesis answers the following questions, which are addressed individually in the sections below: a) can an articulated industrial robot arm communicate hesitation? (Section 7.1); b) can an empirically grounded acceleration profile of human hesitation trajectories be used to generate hesitation motions for a robot? (Section 7.2); and c) what is the impact of a robot's hesitation response to resource conflicts in a Human-Robot Shared-Task (HRST)? (Section 7.3). In the three studies presented in this work, anthromimetic hesitation gestures are proposed, designed, and experimentally tested as novel and communicative robot responses to answer these questions.

Study I and the subsequent analyses in Chapter 4 contribute to a better understanding of human hesitations manifested as kinesic gestures. This new knowledge about the trajectory features of hesitation gestures was used to design hesitation gestures for a robot. Two further studies, Studies II and III, in Chapters 5 and 6 respectively, contribute to the field of nonverbal Human-Robot Interaction (HRI) by demonstrating that humans recognize and differentiate the designed hesitant robot motions. Section 7.4 discusses limitations of the work and outlines recommendations for future work.

7.1 Can an Articulated Industrial Robot Arm Communicate Hesitation?

Study I aimed to answer the question of whether an articulated industrial robot arm can communicate hesitation. In this study, human-human interaction was used to capture wrist trajectories of human hesitation motions.
A robot then mimicked the human motions with its end-effector to create human-robot interaction analogous to the recorded human-human interaction. Human perception of hesitation from the robot's motions was collected via online surveys. The results of the surveys demonstrate that robotic manipulator end-effector motions can convey hesitation to human observers. These results empirically support the idea that the communicative content of human hesitations can be simplified to 3D Cartesian position trajectories of a person's wrist.

7.2 Can an Empirically Grounded Acceleration Profile of Human Hesitations be Used to Generate Robot Hesitations?

The results of Study I inspired the following questions: what are the characteristic features of the trajectories that convey hesitation to human observers, and can these features be used to design human-like hesitation gestures for a robot?

The qualitative and quantitative analyses described in Chapter 4 aimed to answer the former question. The qualitative analysis of the human motions collected in Study I distinguished two different types of hesitations in the presence of a shared resource conflict. R-type hesitations were typified by hand retraction back to the home position. P-type hesitations were typified by the hand hovering or pausing before continuing towards the target once the shared resource became free of conflict. R-type hesitations were quantitatively compared against successful reach-retract (S-type) motions. The results of this analysis indicated that R-type motions can be differentiated from S-type motions in the time domain by their acceleration extrema. Based on the quantitative differences between R-type and S-type motions, a hesitation trajectory design specification was developed. This specification, the Acceleration-based Hesitation Profile (AHP), describes a hesitation trajectory in terms of a) how abruptly the robot should halt in relation to how quickly it launched towards the target object, and b) how smoothly the robot should yield and return to its initial position.

Study II, presented in Chapter 5, was designed to answer the question of whether the AHP can be used to generate human-like hesitation gestures for a robot. In the study, online participants watched three different AHP-based motions along with other robot trajectories to test the efficacy of the AHP. The results from this study suggest that the AHP can be used to generate human-recognizable hesitation motions, and demonstrate that the communicative content of hesitation gestures can be captured in 2D Cartesian trajectories: only the motions in the principal axis need to follow an AHP, and the secondary axis can supplement the principal axis to generate a human-like path of reach towards the target.

In Study III, the AHP was implemented in a real-time human-robot interaction system to further answer this question. In the study, the AHP was used to generate hesitation gestures on a robot in response to spontaneously occurring human-robot resource conflicts. The results from the study demonstrate that humans perceive hesitation from AHP-based motions while interacting with the robot and recognize these motions to be different from the motions of an abrupt robotic collision avoidance mechanism.

Since the AHP only specifies robot motions in one dimension, the 6- and 7-DOF robots used in Studies II and III, respectively, did not use all their DOFs in generating the AHP-based trajectories.
Nonetheless, human subjects were able to recognize the robots' AHP-based hesitation motions. Based on this strong empirical evidence, even lower-DOF robots may be able to exhibit human-recognizable hesitations using the AHP.

7.3 What is the Impact of a Robot's Hesitation Response to Resource Conflicts in a Human-Robot Shared-Task?

With the positive findings from Studies I and II, Study III aimed to answer the subsequent question of whether the anthromimetic hesitation response to resource conflicts positively impacts human-robot collaboration. The subjects were asked to participate in an HRST in which the robot either did not respond to resource conflicts, responded to the conflicts using AHP-based motions, or responded to the conflicts using typical robotic collision avoidance motions.

Questionnaire and interview results from Study III support the claim that a robot is perceived more positively by human users when it responds to conflicts than when it does not. This was true even though the subjects knew they would not be physically harmed by the robot's lack of response to conflicts; it suggests that a robot should always respond to resource conflicts, rather than ignore them, even if the robot is designed to be safe for human-robot collisions.

In addition, the results from Study III provide support for the hypothesis that a robot is perceived more positively by human users when it responds to conflicts with AHP-based motions than when it does not respond at all. However, human perception of the robot and the task performance measures are neither improved nor hindered by AHP-based robot responses relative to robotic avoidance motions. The anthromimetic conflict response mechanism did not yield any improvements in task completion time when compared with robotic avoidance responses. Nonetheless, the counts of secondary performance measures, including the number of mistakes made and collisions that occurred during the task, suggest that AHP-based robot responses might yield improvements in performance if a different human-robot collaboration task were tested.

Although numerous studies already demonstrate that robots using nonverbal gestures have a positive impact on human-robot teamwork, these studies have been limited to collaborative tasks that include clear turn-taking rules or hierarchical roles for human and robot. Study III contributes to the body of work in nonverbal HRI by exploring nonverbal human-robot communication within a team context that lacks predefined turn-taking rules and an assumed hierarchy.

7.4 Recommendations and Future Work

In light of the findings from this thesis, some key questions remain: Do the different types of hesitations by a robot carry different meanings for its human users? What would be the impact of a robot using one type of hesitation instead of another? When should a robot hesitate or not hesitate? Can we map socially acceptable yielding behaviours of a robot as hesitation trajectory parameters, thereby embedding low-level behaviour-based ethics onto a robot? More importantly, do hesitation behaviours of a robot influence the human user's decision to yield to the robot? If so, then do negotiated resolutions of human-robot resource conflicts result in better management of shared resources than when the robot always yields to humans? With the empirically validated hesitation trajectory design devised in this thesis, these important questions can be investigated to improve human-robot collaboration.
With regard to direct follow-up on the studies completed and the robot trajectories proposed herein, it should be noted that the CRS A460 robot used in both Studies I and II followed the reference trajectories five times slower than natural human speed during video recording in order to generate high-fidelity motion. This sheds some light on the limitations of Study II, which tested the efficacy of the AHP within human-like launch acceleration parameter values. The human-like range of launch accelerations demands a magnitude of deceleration even larger than that of the launch acceleration in order to follow the AHP. Like the 6-DOF robot, many industrial robots are not capable of generating high-acceleration motions that match human speeds. Hence, prior to implementing the AHP on a slower robot, further testing is necessary to verify the efficacy of the AHP at a lower range of launch accelerations.

Similar to human-human interactions, a system that enables human-robot nonverbal negotiation and resolution of resource conflicts requires a robot both to express its behaviour states and to understand what is expressed by humans. This thesis work only addressed robot expression of hesitation to its human observers. The author posits that, with improved technologies to robustly understand human expression of intentions and internal states in real-time, a robot would be able to use hesitation gestures to resolve resource conflicts with its human partners. Although significant perception and performance differences are not observed between AHP-based robot responses and robotic avoidance motions in Study III, greater differences in these teamwork measures may be observed when bidirectional human-robot nonverbal communication mechanisms are established.

Bibliography

[1] M. Argyle. The Psychology of Interpersonal Behaviour. Penguin, 5th edition, 1994.
[2] Barrett Technology Inc. Datasheet - WAM (WAM-02.2011). Technical report, Barrett Technology Inc., Cambridge, Massachusetts, 2011.
[3] C. Bartneck, T. Kanda, O. Mubin, and A. Al Mahmud. Does the Design of a Robot Influence Its Animacy and Perceived Intelligence? International Journal of Social Robotics, 1(2):195–204, 2009. doi:10.1007/s12369-009-0013-7.
[4] C. Bartneck, D. Kulić, E. Croft, and S. Zoghbi. Measurement Instruments for the Anthropomorphism, Animacy, Likeability, Perceived Intelligence, and Perceived Safety of Robots. International Journal of Social Robotics, 1(1):71–81, 2009. doi:10.1007/s12369-008-0001-3.
[5] C. Becchio, L. Sartori, M. Bulgheroni, and U. Castiello. Both your intention and mine are reflected in the kinematics of my reach-to-grasp movement. Cognition, 106(2):894–912, 2008. doi:10.1016/j.cognition.2007.05.004.
[6] C. Becchio, L. Sartori, and U. Castiello. Toward You: The Social Side of Actions. Current Directions in Psychological Science, 19(3):183–188, 2010. doi:10.1177/0963721410370131.
[7] S. Berman, D. G. Liebermann, and T. Flash. Application of motor algebra to the analysis of human arm movements. Robotica, 26(4):435–451, 2008.
[8] J. Bernhardt, P. J. Bate, and T. A. Matyas. Accuracy of observational kinematic assessment of upper-limb movements. Physical Therapy, 78(3):259–70, 1998.
[9] C. L. Bethel and R. R. Murphy. Affective expression in appearance constrained robots. In Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction (HRI '06), page 327, New York, 2006. ACM Press. doi:10.1145/1121241.1121299.
[10] M. Bratman. Shared cooperative activity. The Philosophical Review, 101(2):327–341, 1992.
[11] C. Breazeal and B. Scassellati. Robots that imitate humans. Trends in Cognitive Sciences, 6(11):481–487, 2002.
[12] C. Breazeal, C. Kidd, A. Thomaz, G. Hoffman, and M. Berlin. Effects of Nonverbal Communication on Efficiency and Robustness in Human-Robot Teamwork. In IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 383–388. IEEE, 2005. doi:10.1109/IROS.2005.1545011.
[13] J. K. Burgoon, J. A. Bonito, A. Ramirez, N. E. Dunbar, K. Kam, and J. Fischer. Testing the Interactivity Principle: Effects of Mediation, Propinquity, and Verbal and Nonverbal Modalities in Interpersonal Interaction. Journal of Communication, 52(3):657–677, 2002. doi:10.1111/j.1460-2466.2002.tb02567.x.
[14] J. K. Burgoon, J. A. Bonito, B. Bengtsson, A. Ramirez, N. E. Dunbar, and N. Miczo. Testing the interactivity model: Communication processes, partner assessments, and the quality of collaborative work. Journal of Management Information Systems, 16(3):33–56, 2000.
[15] P. R. Cohen and H. J. Levesque. Teamwork. Noûs, 25(4):487–512, 1991.
[16] CRS Robotics Corporation. A465 Robot Arm User Guide. Technical report, CRS Robotics Corporation, Burlington, ON, Canada, 2000.
[17] W. H. Dittrich and S. E. Lea. Visual perception of intentional motion. Perception, 23(3):253–68, 1994.
[18] L. W. Doob. Hesitation: Impulsivity and Reflection. Greenwood Press, Westport, CT, 1990.
[19] T. Ende, S. Haddadin, S. Parusel, T. Wusthoff, M. Hassenzahl, and A. Albu-Schaffer. A human-centered approach to robot gesture based communication within collaborative working processes. In 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 3367–3374. IEEE, 2011. doi:10.1109/IROS.2011.6094592.
[20] T. Fincannon, L. Barnes, R. Murphy, and D. Riddle. Evidence of the need for social intelligence in rescue robots. In 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), volume 2, pages 1089–1095. IEEE, 2004. doi:10.1109/IROS.2004.1389542.
[21] T. Flash and N. Hogan. The Coordination of Arm Movements: Mathematical Model. Journal of Neuroscience, 5(7):1688–1703, 1985.
[22] R. Fox and C. McDaniel. The perception of biological motion by human infants. Science, 218(4571):486–487, 1982. doi:10.1126/science.7123249.
[23] H. Fukuda and K. Ueda. Interaction with a Moving Object Affects One's Perception of Its Animacy. Int J Soc Robotics, 2(2):187–193, 2010. doi:10.1007/s12369-010-0045-z.
[24] D. B. Givens. The Nonverbal Dictionary of Gestures, Signs & Body Language Cues. Center for Nonverbal Studies Press, Spokane, Washington, 2002.
[25] J. Goetz, S. Kiesler, and A. Powers. Matching robot appearance and behavior to tasks to improve human-robot cooperation. In The 12th IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN 2003), pages 55–60. IEEE, 2003. doi:10.1109/ROMAN.2003.1251796.
[26] B. J. Grosz. Collaborative Systems. AI Magazine, 17(2):67–85, 1996.
[27] V. B. Gupta. History, Definition and Classification of Autism Spectrum Disorders. In V. B. Gupta, editor, Autistic Spectrum Disorders in Children, chapter 1, pages 85–123. Marcel Dekker Inc., New York, 2004.
[28] F. Heider and M. Simmel. An Experimental Study of Apparent Behavior. The American Journal of Psychology, 57(2):243–259, 1944.
[29] P. Hinds, T. Roberts, and H. Jones. Whose Job Is It Anyway? A Study of Human-Robot Interaction in a Collaborative Task. Human-Computer Interaction, 19(1):151–181, 2004.
[30] A. Holroyd, C. Rich, C. L. Sidner, and B. Ponsler. Generating connection events for human-robot collaboration. In 2011 RO-MAN, pages 241–246. IEEE, 2011. doi:10.1109/ROMAN.2011.6005245.
[31] C.-M. Huang and A. L. Thomaz. Effects of responding to, initiating and ensuring joint attention in human-robot interaction. In 2011 RO-MAN, pages 65–71. IEEE, 2011. doi:10.1109/ROMAN.2011.6005230.
[32] J. Illes. Neurolinguistic features of spontaneous language production dissociate three forms of neurodegenerative disease: Alzheimer's, Huntington's, and Parkinson's. Brain and Language, 37(4):628–642, 1989. doi:10.1016/0093-934X(89)90116-8.
[33] G. Johansson. Visual perception of biological motion and a model for its analysis. Perception & Psychophysics, 14(2):201–211, 1973. doi:10.3758/BF03212378.
[34] W. Ju and L. Takayama. Approachability: How People Interpret Automatic Door Movement as Gesture. Int J Design, 3(2), 2009.
[35] T. Kazuaki, O. Motoyuki, and O. Natsuki. The hesitation of a robot: A delay in its motion increases learning efficiency and impresses humans as teachable. In 2010 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pages 189–190, Osaka, Japan, 2010. IEEE. doi:10.1109/HRI.2010.5453200.
[36] J. F. Kelley. An iterative design methodology for user-friendly natural language office information applications. ACM Transactions on Information Systems, 2(1):26–41, 1984. doi:10.1145/357417.357420.
[37] H. Kim, S. S. Kwak, and M. Kim. Personality design of sociable robots by control of gesture design factors. In RO-MAN 2008, pages 494–499, Munich, 2008. IEEE. doi:10.1109/ROMAN.2008.4600715.
[38] S. T. Klapp, P. A. Kelly, and A. Netick. Hesitations in continuous tracking induced by a concurrent discrete task. Human Factors, 29(3):327–337, 1987.
[39] T. Kroger. Online Trajectory Generation: Straight-Line Trajectories. IEEE Transactions on Robotics, 27(5):1010–1016, 2011. doi:10.1109/TRO.2011.2158021.
[40] D. Kulic and E. Croft. Physiological and subjective responses to articulated robot motion. Robotica, 25(1):13, 2006. doi:10.1017/S0263574706002955.
[41] J. C. Lafferty and P. M. Eady. The Desert Survival Problem. Experimental Learning Methods, Plymouth, MI, 1974.
[42] D. Leathers. Successful Nonverbal Communication: Principles and Applications. Allyn & Bacon, 3rd edition, 1997.
[43] A. Lindsey, J. Greene, R. Parker, and M. Sassi. Effects of advance message formulation on message encoding: Evidence of cognitively based hesitation in the production of multiple-goal messages. Communication Quarterly, 43(3):320–331, 1995. doi:10.1080/01463379509369979.
[44] V. Manera, C. Becchio, A. Cavallo, L. Sartori, and U. Castiello. Cooperation or competition? Discriminating between social intentions by observing prehensile movements. Experimental Brain Research, 211(3-4):547–56, 2011. doi:10.1007/s00221-011-2649-4.
[45] M. Mataric. Getting humanoids to move and imitate. In IROS 2000, volume 15, pages 18–24, 2000. doi:10.1109/5254.867908.
[46] D. Matsui, T. Minato, K. MacDorman, and H. Ishiguro. Generating Natural Motion in an Android by Mapping Human Motion. In IROS 2005, pages 1089–1096, 2005. doi:10.1109/IROS.2005.1545125.
[47] S. Merlo and P. A. Barbosa. Hesitation phenomena: a dynamical perspective. Cognitive Processing, 11(3):251–61, 2010. doi:10.1007/s10339-009-0348-x.
[48] Y. Moon and C. Nass. How "Real" Are Computer Personalities? Psychological Responses to Personality Types in Human-Computer Interaction. Communication Research, 23(6):651–674, 1996. doi:10.1177/009365096023006002.
[49] Y. Ogai and T. Ikegami. Microslip as a Simulated Artificial Mind. Adaptive Behavior, 16(2/3):129–147, 2008. doi:10.1177/1059712308089182.
[50] Oxford Online Dictionary. "moral", 2012. URL http://oxforddictionaries.com/definition/moral?region=us&q=morals.
[51] P. Philippot, R. S. Feldman, and E. J. Coats, editors. The Social Context of Nonverbal Behavior (Studies in Emotion and Social Interaction). Cambridge University Press, 1999.
[52] N. Pollard, J. Hodgins, M. Riley, and C. Atkeson. Adapting human motion for the control of a humanoid robot. In ICRA 2002, pages 1390–1397, Washington, 2002. doi:10.1109/ROBOT.2002.1014737.
[53] F. Pollick. Perceiving affect from arm movement. Cognition, 82(2):B51–B61, 2001. doi:10.1016/S0010-0277(01)00147-0.
[54] F. E. Pollick. The Features People Use to Recognize Human Movement Style. Lecture Notes in Computer Science: Gesture-Based Communication in Human-Computer Interaction, 2915:467–468, 2004. doi:10.1007/b95740.
[55] K. Reed, M. Peshkin, M. J. Hartmann, M. Grabowecky, J. Patton, and P. M. Vishton. Haptically linked dyads: are two motor-control systems better than one? Psychological Science, 17(5):365–6, 2006. doi:10.1111/j.1467-9280.2006.01712.x.
[56] K. B. Reed, J. Patton, and M. Peshkin. Replicating Human-Human Physical Interaction. In Proceedings of the 2007 IEEE International Conference on Robotics and Automation, pages 3615–3620, Roma, Italy, 2007. IEEE. doi:10.1109/ROBOT.2007.364032.
[57] B. Reeves and C. Nass. The Media Equation: How People Treat Computers, Television, and New Media like Real People and Places. Cambridge University Press, 1996.
[58] L. D. Riek, T.-C. Rabinowitch, P. Bremner, A. G. Pipe, M. Fraser, and P. Robinson. Cooperative Gestures: Effective Signaling for Humanoid Robots. In HRI 2010, pages 61–68, Osaka, Japan, 2010. ACM/IEEE.
[59] P. Rober. Some Hypotheses about Hesitations and their Nonverbal Expression in Family Therapy Practice. Journal of Family Therapy, 24(2):187–204, 2002. doi:10.1111/1467-6427.00211.
[60] M. Saerbeck and C. Bartneck. Perception of Affect Elicited by Robot Motion. In Proceedings of the 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI '10), pages 53–60, New York, 2010. ACM/IEEE. doi:10.1145/1734454.1734473.
[61] M. Salem, K. Rohlfing, S. Kopp, and F. Joublin. A friendly gesture: Investigating the effect of multimodal robot behavior in human-robot interaction. In 2011 RO-MAN, pages 247–252. IEEE, 2011. doi:10.1109/ROMAN.2011.6005285.
[62] L. Sartori, C. Becchio, M. Bulgheroni, and U. Castiello. Modulation of the action control system by social intention: unexpected social requests override preplanned action. Journal of Experimental Psychology: Human Perception and Performance, 35(5):1490–500, 2009. doi:10.1037/a0015777.
[63] J. F. Sousa-Poza and R. Rohrberg. Body Movement in Relation to Type of Information (Person- and Nonperson-Oriented) and Cognitive Style (Field Dependence). Human Communication Research, 4(1):19–29, 1977. doi:10.1111/j.1468-2958.1977.tb00592.x.
[64] C. Suda and J. Call. What Does an Intermediate Success Rate Mean? An Analysis of a Piagetian Liquid Conservation Task in the Great Apes. Cognition, 99(1):53–71, 2006.
[65] S. B. Thies, P. Tresadern, L. Kenney, D. Howard, J. Y. Goulermas, C. Smith, and J. Rigby. Comparison of linear accelerations from three measurement systems during "reach & grasp". Medical Engineering & Physics, 29(9):967–72, 2007. doi:10.1016/j.medengphy.2006.10.012.
[66] K. R. Thorisson and J. Cassell. The Power of a Nod and a Glance: Envelope vs. Emotional Feedback in Animated Conversational Agents. Applied Artificial Intelligence, 13(4-5):519–538, 1999. doi:10.1080/088395199117360.
[67] M. Tomasello, M. Carpenter, J. Call, T. Behne, and H. Moll. Understanding and sharing intentions: the origins of cultural cognition. Behavioral and Brain Sciences, 28(5):675–735, 2005. doi:10.1017/S0140525X05000129.
[68] P. D. Tremoulet and J. Feldman. The influence of spatial context and the role of intentionality in the interpretation of animacy from motion. Perception & Psychophysics, 68(6):1047–58, 2006.
[69] H. J. Woltring. On Optimal Smoothing and Derivative Estimation from Noisy Displacement Data in Biomechanics. Human Movement Science, 4:229–245, 1985.
[70] T. Yokoi and K. Fujisaki. Hesitation behaviour of hoverflies Sphaerophoria spp. to avoid ambush by crab spiders. Die Naturwissenschaften, 96(2):195–200, 2009. doi:10.1007/s00114-008-0459-8.
[71] H. Zhou and H. Hu. Reducing Drifts in the Inertial Measurements of Wrist and Elbow Positions. IEEE Transactions on Instrumentation and Measurement, 59(3):575–585, 2010. doi:10.1109/TIM.2009.2025065.
[72] H. Zhou, H. Hu, N. Harris, and J. Hammerton. Applications of wearable inertial sensors in estimation of upper limb movements. Biomedical Signal Processing and Control, 1(1):22–32, 2006. doi:10.1016/j.bspc.2006.03.001.

Appendix A

CRS A460 Robot Specifications

This appendix presents key technical specifications of the CRS A460 robot arm that affect the robot motions produced for Studies I and II. Figure 3.3, which shows a schematic of the robot, has been reproduced in Figure A.1 for convenience. Further technical details of the robot can be found in [16].

In Study I, presented in Chapter 3, the robot replicated a set of recorded human motions. In preparing the human trajectories to be replicated by the manipulator's end-effector, the range of motion of the robot was considered. As outlined in Table B.1, the motions of Subject 2 show the maximum range of reach in the Xo-axis across all subjects (39 cm)¹. However, the range of motion of the robot is smaller. The distance between joints 2 and 3 of the robot is 30 cm, and that between joints 3 and 5 is 33 cm. This yields a maximum wrist position, measured as the distance between joints 2 and 5, of 63 cm. In the elbow-up configuration, the wrist reaches its minimum position, 36 cm, when q3 is at its minimum of 70°. This yields a total range of motion of 27 cm for the wrist. Hence, the human motions were scaled accordingly in order to match the maximum achievable range of motion of the robot.

In replicating the human motions, it was critical that high-fidelity motion be produced by the robot. Human wrist motions recorded from the inertial sensors demonstrated peak linear accelerations in the Xo-axis ranging from 6.0 to 21 cm/s², and respective decelerations ranging from -29 to -11 cm/s².

¹ The Xo-axis is the principal axis of motion as defined in Figure 3.2.

Figure A.1: Schematic of the 6-DOF CRS A460 robot arm in the elbow-up configuration. This figure is repeated from Figure 3.3.

Table A.1: Soft limits in position, q, velocity, q̇, and acceleration, q̈, set for the CRS A460 robot arm. These soft limits are set to protect the robot from mechanical damage and are more conservative than the hard limits provided in [16].
                 q1      q2      q3      q4      q5      q6
    q (rad)      2.93    1.50    1.94    3.07    1.76    3.07
    q̇ (rad/s)    3.30    3.30    3.30    2.99    3.02    2.99
    q̈ (rad/s²)   18.80   18.80   18.80   37.28   37.65   37.28

However, the CRS A460 robot is not capable of producing such a high range of accelerations. Table A.1 outlines the software limits of the robot employed to ensure its safe operation. The robot's maximum linear path velocity of the wrist in its Xo-axis is 0.76 m/s, and the maximum velocity of the compounded joint-interpolated motions is 4.57 m/s.

In order to produce a high-fidelity replication of human motion using the robot, human motions were converted into reference trajectories for the robot and slowed down to fit within its maximum capacity. However, this resulted in large overshoots of the joint angles responsible for generating linear forward motions of the end-effector (q2 and q3). Calculating the difference between the reference and recorded trajectories yielded 0.07 radians of error for q2 and 0.15 radians for q3. After observing the error between the commanded and recorded positions of these two joints, the robot was ultimately made to replicate the human motions five times slower during recording. Video recordings of these motions were sped up in order to match the desired human speed of motion in Study I. A MATLAB Simulink model was developed to control the robot motions. This model is presented in Figure A.2.

Figure A.2: Screen capture of the control scheme used to servo the CRS A460 robot through 3D Cartesian reference trajectories.

Appendix B

Human Motion Trajectory Characteristics

Contents
B.1 Segmentation of Recorded Human Motions
  B.1.1 Butterworth Filtering Algorithm
  B.1.2 Acceleration-based Segmentation Algorithm
B.2 Overview of Position Profiles
B.3 Descriptive Statistics of Principal Component Analysis Errors
B.4 AHP Parameter Values from Human Motions

This appendix presents quantitative findings from the recorded human motions of Study I (Chapter 3) that inform the development of the Acceleration-based Hesitation Profile (AHP) (presented in Chapter 4). Section B.1 presents the filtering and segmentation algorithms used to prepare the human motions for the quantitative analysis outlined in Chapter 4. Section B.2 presents an overview of the collected, filtered, and segmented human motions' position profiles. As described in Chapter 4, the human motion data were simplified from 3D to 2D before being used to generate the AHP; Section B.3 presents the errors associated with the simplification technique employed in the process. To calculate the AHP ratio values (C1, C2, B1 and B2, defined in Chapter 4), acceleration extrema values were extracted from the collected human motions; Section B.4 presents these values.

B.1 Segmentation of Recorded Human Motions

The linear acceleration measurements of the human motions collected in Study I were filtered and used to segment the human wrist trajectory data collected from the inertial sensors (see Chapter 3 for details of the human motion collection and the use of the sensors). This section presents the details of the algorithms used to filter and segment the recorded human motions. Section B.1.1 describes the algorithm used to filter the data, and Section B.1.2 describes the algorithm used to segment the data.
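Before the detailed listings, the filtering step of Section B.1.1 can be illustrated with a minimal MATLAB sketch. This is not the thesis code; it assumes the Signal Processing Toolbox is available, and the sampling rate, cutoff frequency, and zero-phase (filtfilt) application are assumptions made for illustration only.

    % Illustrative 4th-order Butterworth low-pass filtering of one axis of the
    % recorded wrist acceleration. acc_raw is a vector of raw acceleration samples.
    fs = 100;                            % sampling rate of the inertial sensors [Hz] (assumed)
    fc = 5;                              % low-pass cutoff frequency [Hz] (assumed)
    [b, a] = butter(4, fc / (fs / 2));   % 4th-order Butterworth, normalized cutoff
    acc_filtered = filtfilt(b, a, acc_raw);   % zero-phase filtering of the raw signal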
B.1.1 Butterworth Filtering algorithm The followingMATLAB function, TruncatedAccPlot wristonly, was used to filter acceleration recordings of human wrist trajectories using a 4th order Butter- worth filter. The algorithm described Section B.1.2 uses the output of this program to segment human wrist trajectory data. Presented below is a pseudo code for the TruncatedAccPlot wristonly function. 1 Load sub jec t s p e c i f i c data f i l e s 2 Truncate the i n e r t i a l sensor data to e l im ina te data not pe r t i n en t to the ... experiment 3 Convert w r i s t acce le ra t i on readings i n t o cm/ s ˆ2 f o r processing 4 5 f o r a l l data po in ts , 6 Wris t acce le ra t i on i n the g loba l frame = Ro ta t i ona l mat r i x f o r the ... ShoulderWris t sensor * ShoulderWris t Acce le ra t i on value ; 7 end 8 9 Wris t acce le ra t i on i n the g loba l frame = F i l t e r d a t a ( Wr is t acce le ra t i on ... i n the g loba l frame ) ; B.1.2 Acceleration-based Segmentation Algorithm In this section, the AccelerationBasedSegmentation1.m script is pre- sented. This MATLAB script is used to segment the human wrist trajectory data collected from Chapter 3. A pseudo code outlining the flow of the segmentation algorithm is presented below. 122 1 Get bu t te rwor th f i l t e r e d acce le ra t i on data from the ... Trunca tedAccP lo t wr i s ton ly f unc t i on 2 Set a th resho ld f o r the f i r s t maxima , FirstMaximaThreshold 3 Set a th resho ld f o r the t h i r d minima , ThirdMinimaThreshold 4 I n i t i a l i z e Boundary pos i t i o n to be zero 5 S ta r t w i th i =2 , j =2 6 7 whi le i l eng th ( Acce l e ra t i on i n X ) 8 Number of minima=0; 9 i f ( Acce l e ra t i on i n X ( i )>(FirstMaximaThreshold ) ) && ... ( Acce l e ra t i on i n X ( i 1) ( FirstMaximaThreshold ) ) && ... ( Boundary ( j 1,3)==0) ; 10 f o r l = 1:1:30 11 i f ( Acce l e ra t i on i n X ( i + l )Acce le ra t i on i n X ( i + l 1) )&& ... ( Acce l e ra t i on i n X ( i + l 1)Acce le ra t i on i n X ( i + l 2) ) ... %Find a minima in reverse order 12 i = i + l 3;%minima found , e x i t t h i s f o r loop and ... con t i nue .%rese t i to s t a r t from here . 13 break ; 14 end 15 end 16 17 % Sta r t searching f o r the three minimas 18 f o r k = 1:1:100 %f o r the maximum number o f datasamples i n one ... reach motion 19 i f ( Acce l e ra t i on i n X ( i +k )Acce le ra t i on i n X ( i +k+1) )&& ... ( Acce l e ra t i on i n X ( i +k+1)Acce le ra t i on i n X ( i +k+2) ) ... %Find a minima 20 i f ( Boundary ( j 1,3)==0) && ( minimacount == 0) 21 Boundary ( j , 1 ) = ( i +k+1) ; %Timestamp 22 Boundary ( j , 2 ) = Acce le ra t i on i n X ( i +k+1) ; %Record ... the value o f AccX at the s t a r t o f motion 23 Boundary ( j , 3 ) = 1 ; %I n d i c a t i v e o f motion s t a r t 24 j = j +1 25 k = k+1 26 Number of minima = Number of minima+1 27 e l s e i f ( Boundary ( j 1,3)==1) 28 Boundary ( j , 1 ) = ( i +k+1) ; %Timestamp 29 Boundary ( j , 2 ) = Acce le ra t i on i n X ( i +k+1) ; %Record ... the value o f AccX at the s t a r t o f motion 30 Boundary ( j , 3 ) = 0 .5 ; %I n d i c a t i v e o f q u i n t i c s p l i t 31 Number of minima = Number of minima+1 32 j = j +1 33 k = k+1 34 e l s e i f ( Boundary ( j 1,3)==0 .5 ) &&( Acce l e ra t i on i n X ( i +k ) 35 ThirdMinimaThreshold ) 36 Boundary ( j , 1 ) = ( i +k+1) ; %Timestamp 123 37 Boundary ( j , 2 ) = Acce le ra t i on i n X ( i +k+1) ; %Record ... 
the value o f AccX at the s t a r t o f motion 38 Boundary ( j , 3 ) = 0 ; %I n d i c a t i v e o f motion end 39 Number of minima = Number of minima+1 40 j = j +1 41 k = k+1 42 end 43 end 44 i f ( Number of minima==3) 45 Number of minima=0; 46 i = i +k ; 47 break ; 48 end 49 50 end 51 end 52 i = i +1; 53 end B.2 Overview of Position Profiles This section presents an overview of position profiles observed from the human motion trajectories collected in Study I. All recorded data from the inertial sensors used in Study I were filtered using a 4th order Butterworth filter using the algo- rithm described in Section B.1. The filtered position profiles of human motions are presented in figures B.1, B.2 and B.3. Since mimicking the recorded human motion trajectories was of interest in gen- erating human-robot interaction videos, it was necessary to calculate the recorded human range of motions. Table B.1 presents the minimum and maximum posi- tion values collected from all three Study I pilot experiment subjects. These values were calculated via forward kinematics approach outlined in Chapter 3. Motions of Subject 2 show the maximum range of reach in the Xo-axis across the three subjects (39 cm). Appendix A describes how this value compares to the range of motions of the CRS A460 robot used in Studies I and II, and how these motions are scaled for Study I. 124 Table B.1: Range of motion of the three pilot subjects who participated in Study I. These values were calculated via the forward kinematics ap- proach described in Chapter 3. The values in the parentheses are mini- mum and maximum position values, in that order, of the recorded subject motions. Subject 1 Subject 2 Subject 3 Xo (cm) (13.21, 50.26) (13.84, 52.90) (6.20, 44.34) Yo (cm) (-12.76, 11.31) (1.05, 12.54) (-4.03, 6.80) Zo (cm) (-36.54, -18.21) (-26.66, -6.39) (-27.15, -9.41) 0 25% 50% 75% 100% 10 15 20 25 30 35 40 45 50 Time Normalized X o -A x is  P o si ti o n  ( cm ) Subject1 Xo-Axis Wrist Position R-type motion S-type motion   Figure B.1: A few examples of Butterworth-filtered Xo-axis wrist motions from Subject 1 in Study I. This figure is reproduced from Figure 4.5. All trajectories are time-normalized to match the slowest (longest) motion segment. 125 0 12 10 8 6 4 2 0 2 4 Time Normalized Y o -A x is  P o si ti o n  ( cm ) Subject1 Yo-Axis Wrist Position R-type motion S-type motion 25% 50% 75% 100% Figure B.2: A few examples of Butterworth-filtered Yo-axis wrist motions from Subject 1 in Study I. All trajectories are time-normalized to match the slowest (longest) motion segment. 126 Time Normalized Z o -A x is  P o si ti o n  ( cm ) Subject1 Zo-Axis Wrist Position 0 20 25 30 35 R-type motion S-type motion 25% 50% 75% 100% Figure B.3: A few examples of Butterworth-filtered Zo-axis wrist motions from Subject 1 in Study I. All trajectories are time-normalized to match the slowest (longest) motion segment. 127 B.3 Descriptive Statistics of Principal Component Analysis Errors In Chapter 4, human motions are characterized by the acceleration profile of the motions’ principal axis. Identifying the principal axis for each motion segment required Principal Component Analysis to simply the 3D motion into 2D. This section outlines the errors from the simplification process. As shown in Table B.2 the mean and standard deviation of the sum of squared errors for each subject are quite small. Table B.2: Sum of squared errors from PCA simplification of Chapter 3 sub- ject motion data. Units all in cm2. 
Subject Mean SD Min Max 1 30 57 4 327 2 22 24 5 185 3 42 17 6 78 B.4 AHP Parameter Values from Human Motions The acceleration ratios used in AHP were calculated from the recorded human ac- celeration profiles. This section outlines the acceleration and temporal parameter values used to calculate the ratios. All acceleration values reported are based on the filtered and segmented data. A modified version of the MATLAB script used to segment the human trajectories was used to determine extrema of acceleration and their temporal parameters. The segmentation algorithm is outlined in detail in Section B.1.2. Presented in Table B.3 are values of the launch accelerations used for AHP ratio calculation. Presented in Table B.4 are the descriptive statistics of the temporal parameters used to calculate B1 and B2 ratios. 128 Table B.3: Descriptive statistics on a1 values of all three subject data from Chapter 3 presented by motion type. All units are in cm/s2. Significant ANOVA results are found in the acceleration values of successful reach- retract motions, F(2;130) = 25:502; p < :001, but not for P-type or R- type hesitation motions, F(1;3) = 1:92; p = :26 and F(2;5) = :77; p = :51 respectively. LB and UB indicate the lower and upper bounds of the 95% confidence interval respectively. Subj N Mean SD SE 95% C.I. Min Max LB UB S-type Motions 1 26 1488 367 72 1339 1636 602 2120 2 51 1496 207 29 1437 1554 1031 1960 3 56 1817 240 32 1752 1881 1141 2260 Total 133 1629 302 26 1577 1681 602 2260 P-type Hesitation Motions 1 0 . . . . . . . 2 4 1292 238 119 913 1671 956 1515 3 1 924 . . . . 924 924 Total 5 1219 264 118 891 1546 924 1515 R-type Hesitation Motions 1 4 1689 469 234 944 2436 1346 2380 2 2 1156 665 470 -4815 7128 686 1626 3 2 1326 550 389 -3620 6272 937 1715 Total 8 1465 512 181 1037 1893 686 2380 129 Table B.4: Descriptive statistics on the temporal values of acceleration peaks based on all three subject motions collected from Chapter 3. All units are in seconds. LB and UB indicate the lower and upper bounds of the 95% confidence interval respectively. Motion Type N Mean SD SE 95% C.I. Min Max LB UB t1 S-Type 134 0.19 0.04 0.00 0.18 0.20 0.12 0.36 R-Type 8 0.16 0.04 0.01 0.13 0.19 0.12 0.24 P-Type 4 0.24 0.11 0.05 0.07 0.41 0.16 0.4 Total 146 0.19 0.05 0.00 0.18 0.20 0.12 0.4 (t2 t1)=t1 S-Type 134 1.05 0.33 0.03 0.99 1.11 0.47 2.38 R-Type 8 1.14 0.47 0.16 0.75 1.53 0.58 1.88 P-Type 4 1.09 0.88 0.44 -0.30 2.49 0.40 2.38 Total 146 1.06 0.36 0.03 1.00 1.12 0.40 2.38 (t3 t2)=t1 S-Type 134 1.63 0.44 0.04 1.56 1.71 0.75 4.25 R-Type 8 3.13 1.36 0.48 1.99 4.27 1.13 4.86 P-Type 4 1.75 1.06 0.53 0.05 3.44 0.65 3.18 Total 146 1.72 0.64 0.05 1.62 1.82 0.65 4.86 130 Appendix C Advertisements, Consents, and Surveys Contents C.1 Study I Advertisements, Online Surveys, and Consents . . . . 131 C.2 Study II Advertisement, Online Surveys, and Consent . . . . 145 C.3 Study III Advertisements, Questionnaires, and Consent . . . . 149 This appendix outlines the details of the online surveys used for Studies I and II, as well as the questionnaire used for Study III. Consent forms and advertisement materials used for the studies are also presented in this appendix. This appendix is divided into three sections: Section C.1 presents the three different consent forms and the online surveys used for Study I; Section C.2 presents the consent form and the online survey used for Study II; and Section C.3 presents the consent form, pre-experiment questionnaire, and main questionnaire used for Study III. 
C.1 Study I Advertisements, Online Surveys, and Consents Three different consent forms were used in Study I. One was employed for the pilot experiment involving human-human interaction, in which the participants’ motions during the interaction were captured via two inertial sensors. The consent form is 131 presented in Figure C.1. The second consent form (see Figure C.4) was used for the HH online surveys, where the participants watched videos of the human-human interaction recorded from the pilot experiment. The HH online surveys were ad- vertised online as per Figure C.3. Screen captures of the HH online surveys are presented in figures C.5 to C.7. The third consent form (see Figure C.9) was used for the HR online surveys that presented videos of human-robot interactions anal- ogous to the human-human interactions. The HR online surveys were advertised online using the contents presented in Figure C.8. Screen captures of the HR online surveys are presented in figures C.10 to C.12. 132 Figure C.1: Consent form used for the human-human interaction pilot exper- iment (page 1). 133 Last revised:  April 17, 2012 consent form Motion Capture - rev2.doc Page 2 of 2 which has restricted secure access and is locked at all times. Only your hand and arm motion will be videotaped, and potentially identifying features such as your face will not be videotaped. If you have any concerns about your treatment or rights as a research subject, you may telephone the Research Subject Information Line in the UBC Office of Research Services at the University of British Columbia, at (604) 822-8598.  By signing this form, you consent to participate in this study, and acknowledge you have received a copy of this consent form. Name (print):______________________________________  Date:_________________  Signature:_______________________________________________  Figure C.2: Consent form used for the human-human interaction pilot exper- iment (page 2). 134 Last Revised:  April 17, 2012  Call for Volunteers Gesture Survey rev1.docx Re: Call for volunteers for a Human-Robot Interaction study  We are offering you the opportunity to contribute to the advancement of human-robot relations. Increasing widespread implementation of robots has revealed that effective robot-human communication is a vital element to creating a friendly shared workspace environment.  We are investigating your perception of a human-human interaction (HHI). Once the research is complete the data obtained will be used to attempt the development of a human-robot interaction LQZKLFKWKHURERW¶VDFWLRQVDUHSHUFHLYHGLQWKHVDPHPDQQHUDVWKH++,  The study will be conducted via an online survey. It will consist of a short video of HHI, which will be followed by a few questions pertaining to the video. The survey should take no longer than 10 minutes. We need volunteers to participate in the study.  A consent form will be available as the first page of the survey. You will be required to complete the form in order to participate in the study.  The link to the study is posted at http://caris.mech.ubc.ca/?pageid=4.401   For information/concerns regarding the survey please contact:  AJung Moon survey@amoon.ca (604)822-3147  Thank you very much for your help.  
AJung Moon, Masters Candidate, UBC Mechanical Engineering ajung.moon@gmail.com Mike Van der Loos, Associate Professor, UBC Mechanical Engineering vdl@mech.ubc.ca Elizabeth Croft, Professor, UBC Mechanical Engineering, ecroft@mech.ubc.ca    <omit> <omit> Figure C.3: Contents of the online advertisement used to recruit subjects for Study I, HH online surveys. The study was advertised on the Collabo- rative Advanced Robotics and Intelligent Systems Laboratory website and other social media tools including facebook, twitter, and the au- thor’s website. 135 Figure C.4: Screen capture of the consent form used for the HH online sur- veys. The same consent form was used for all three HH surveys. 136 Figure C.5: Screen capture of online survey for HH-1. This figure is a repeat of Figure 3.6. 137 Figure C.6: Screen capture of online survey for human-human condition, Session 2. 138 Figure C.7: Screen capture of online survey for human-human condition, Session 3. 139 Last Revised:  April 17, 2012  Call for Volunteers Gesture Survey rev2.docx Re: Call for volunteers for a Human-Robot Interaction study  We are offering you the opportunity to contribute to the advancement of human-robot relations. Increasing widespread implementation of robots has revealed that effective robot-human communication is a vital element to creating a friendly shared workspace environment.  We are investigating your perception of a human-human interaction (HHI) and/or human-robot interaction (HRI). Once the research is complete the data obtained will be used to attempt the development of a human-URERWLQWHUDFWLRQLQZKLFKWKHURERW¶VDFWLRQVDUHSHUFHLYHd in the same manner as the HHI.  The study will be conducted via an online survey. It will consist of a short video of HHI and/or HRI, which will be followed by a few questions pertaining to the video. The survey should take no longer than 10 minutes. We need volunteers to participate in the study.  A consent form will be available as the first page of the survey. You will be required to complete the form in order to participate in the study.  The link to the study is posted at http://caris.mech.ubc.ca/?pageid=4.401   For information/concerns regarding the survey please contact:  AJung Moon survey@amoon.ca (604)822-3147  Thank you very much for your help.  AJung Moon, Masters Candidate, UBC Mechanical Engineering ajung.moon@gmail.com Mike Van der Loos, Associate Professor, UBC Mechanical Engineering vdl@mech.ubc.ca Elizabeth Croft, Professor, UBC Mechanical Engineering, ecroft@mech.ubc.ca    <omit> <omit> Figure C.8: Contents of the online advertisement used to recruit subjects for Study I, HR online survey. The study was advertised on the Collabo- rative Advanced Robotics and Intelligent Systems Laboratory website and other social media tools including facebook, twitter, and the au- thor’s website. 140 Figure C.9: Screen capture of the consent form used for the human-robot interaction online surveys. The same consent form was used for all three HR surveys. 141 Figure C.10: Screen capture of online survey for human-robot condition, Session 1. 142 Figure C.11: Screen capture of online survey for human-robot condition, Session 2. 143 Figure C.12: Screen capture of online survey for human-robot condition, Session 3. 144 C.2 Study II Advertisement, Online Surveys, and Consent In Study II, seven versions of the same online survey, each containing a different pseudo-random order of HRI videos was used. All versions of the survey used a single consent form. 
This consent form is presented in Figure C.14. The study was advertised via online media tools including twitter, facebook, and the lab and the author’s website. The advertised material is presented in Figure C.13. Each survey contained 12 pages, each page containing a video and the same four survey questions. A sample page is shown in Figure C.15. 145 Last Revised:  April 17, 2012 Call for Volunteers Robot Gesture Survey rev1.docx Re: Call for volunteers for a Human-Robot Interaction study  We are offering you the opportunity to contribute to the advancement of human-robot relations. Increasing widespread implementation of robots has revealed that effective robot-human communication is a vital element to creating a friendly shared workspace environment.  We are investigating your perception of a human-human interaction (HHI) and/or human-robot interaction (HRI). Once the research is complete the data obtained will be used to attempt the development of a human-URERWLQWHUDFWLRQLQZKLFKWKHURERW¶VDFWLRQVDUHSHUFHLYHd in the same manner as the HHI.  The study will be conducted via an online survey. It will consist of twelve short videos (< 30sec) of HHI and/or HRI, which will be followed by a few questions pertaining to the video. The survey should take no longer than 20 minutes. We need volunteers to participate in the study.  A consent form will be available as the first page of the survey. You will be required to complete the form in order to participate in the study.  The link to the study is posted at http://caris.mech.ubc.ca/?pageid=4.401   For information/concerns regarding the survey please contact:  AJung Moon survey@amoon.ca (604)822-3147  Thank you very much for your help.  AJung Moon, Masters Candidate, UBC Mechanical Engineering ajung.moon@gmail.com Mike Van der Loos, Associate Professor, UBC Mechanical Engineering vdl@mech.ubc.ca Elizabeth Croft, Professor, UBC Mechanical Engineering, ecroft@mech.ubc.ca    <omit> <omit> Figure C.13: Contents of the online advertisement used to recruit subjects for Study II. The study was advertised on the Collaborative Advanced Robotics and Intelligent Systems Laboratory website. Links to this advertisement was distributed via other online media tools, including twitter, facebook, and the author’s website. 146 Figure C.14: Screen capture of the consent form used for the human-robot interaction online surveys outlined in Chapter 5. The same consent form was used for all surveys in Study II. 147 Figure C.15: This is an example screenshot from one of the 12 pages of sur- vey shown to online participants. All pages of the survey contained the same questions in the same order. Only the contents of the embedded video changed. This screen capture is also presented in Figure 5.2. 148 C.3 Study III Advertisements, Questionnaires, and Consent Subjects for Study III were recruited via posted advertisements at the University of British Columbia Vancouver campus and the lab’s website. Figure C.16 and Figure C.17 present the call for volunteers for the study. In Study III, all subjects signed a consent form (see Figure C.18) prior to be- ginning the experiment. The subjects then completed a pre-questionnaire that was used to collect demographic information (see Figure C.19). During the main exper- iment, at the end of each trial, the subjects provided feedback on their perception of the robot using the questionnaire presented in Figure C.20. 
149 Last Revised:  April 17, 2012  Call for Volunteers HR Interaction rev1.doc Re: [Call for Volunteers] Sorting Hearts and Circles with a Robot ± A Human-Robot Collaboration Study  At the CARIS Lab (ICICS building, x015), we are conducting an exciting human-robot interaction experiment to investigate whether a robot that uses humanlike gestures can work as a better teammate than robots that GRQ¶W when humans and robots collaborate with each other.  We would like to invite you to participate in our study. It will take no more than 45 minutes of your time, and you will be asked to interact with a robot at our lab. The study will involve you wearing a cable-based sensor on your finger while sorting a number of small objects in collaboration with a robot. Prior to the experiment and between the sessions of sorting task, you will be asked to fill out a questionnaire. At the very end of the experiment, we will ask you for your feedback on the robot¶s behaviours. The experiment will be video recorded as part of the experiment as well as for analysis purposes. However, the recordings will not be made public without your consent.  We believe that the results of our study will contribute to creating a friendly human-robot shared workspace environment.  A consent form will be available on site, as well as prior to the experiment. You will be required to complete the form in order to participate in the study.  To participate in the study, or have concerns about the study, please contact:  AJung Moon ajmoon@interchange.ubc.ca (604)822-3147  Thank you very much for your help.  AJung Moon, Masters Candidate, UBC Mechanical Engineering ajmoon@interchange.ubc.ca Mike Van der Loos, Associate Professor, UBC Mechanical Engineering vdl@mech.ubc.ca Elizabeth Croft, Professor, UBC Mechanical Engineering, ecroft@mech.ubc.ca    <omit> <omit> Figure C.16: Contents of the online advertisement used to recruit subjects for Study III. The study was advertised on the Collaborative Advanced Robotics and Intelligent Systems Laboratory website. 150  Sorting Hearts and Circles with a Robot?!         The CARIS Lab (ICICS x015) is looking for healthy adult volunteers to participate in a fun human-robot collaboration study.  You will be asked to sort a number of small objects with a robot. With your help, we will be able to investigate whether a robot that uses KXPDQOLNHJHVWXUHVZLOOEHDEHWWHUWHDPPDWHWKDQURERWVWKDWGRQ¶W  The study will run from late October to early November, 2011.  Visit http://to.ly/bj61 for more information, OR Contact AJung at ajung@amoon.ca to participate.  
Th e H u m an -R ob ot E xp erim ent @ CA R IS  L ab  IC ICS x015  http ://to .ly/bj61  aju ng@ am o o n .ca    604 -822 -3147   Th e H u m an -R ob ot E xp erim ent @ CA R IS  L ab  IC ICS x015  http ://to .ly/bj61  aju ng@ am o o n .ca    604 -822 -3147   Th e H u m an -R ob ot E xp erim ent @ CA R IS  L ab  IC ICS x015  http ://to .ly/bj61  aju ng@ am o o n .ca    604 -822 -3147   Th e H u m an -R ob ot E xp erim ent @ CA R IS  L ab  IC ICS x015  http ://to .ly/bj61  aju ng@ am o o n .ca    604 -822 -3147   Th e H u m an -R ob ot E xp erim ent @ CA R IS  L ab  IC ICS x015  http ://to .ly/bj61  aju ng@ am o o n .ca    604 -822 -3147   Th e H u m an -R ob ot E xp erim ent @ CA R IS  L ab  IC ICS x015  http ://to .ly/bj61  aju ng@ am o o n .ca    604 -822 -3147   Th e H u m an -R ob ot E xp erim ent @ CA R IS  L ab  IC ICS x015  http ://to .ly/bj61  aju ng@ am o o n .ca    604 -822 -3147   Th e H u m an -R ob ot E xp erim ent @ CA R IS  L ab  IC ICS x015  http ://to .ly/bj61  aju ng@ am o o n .ca    604 -822 -3147   Th e H u m an -R ob ot E xp erim ent @ CA R IS  L ab  IC ICS x015  http ://to .ly/bj61  aju ng@ am o o n .ca    604 -822 -3147   Th e H u m an -R ob ot E xp erim ent @ CA R IS  L ab  IC ICS x015  http ://to .ly/bj61  aju ng@ am o o n .ca    604 -822 -3147   Scan it here <omit> < o m i t >  < o m i t >  < o m i t >  < o m i t >  < o m i t >  < o m i t >  < o m i t >  < o m i t >  < o m i t >  < o m i t >  Figure C.17: Advertisement posted at the University of British Columbia campus to recruit subjects for Study III. 151 Figure C.18: Consent form used for Study III. 152 Subject #: Date:  1. What is your age? ____________  2. What is your gender?    Female /   Male  3. What is your dominant hand?  Right-handed /   Left-handed (If you are ambidextrous, please circle the one you¶d like to use for the experiment.)  4. How familiar are you in working with a robot arm?  Not familiar at all 1 2 3 4 5 Very familiar  5. Have you ever worked or interacted with this particular robot? Yes  /  No  6. If you answered µ<HV¶LQ the above question, please describe your experience with the robot below: Figure C.19: Pre-questionnaire used to collect demographic information from the Study III subjects. 153 Subject #: Condition #:  t(complete):  Collision: Mistakes: 1. Please rate YOUR emotional state on these scales: Anxious 1 2 3 4 5 Relaxed Agitated 1 2 3 4 5 Calm  2. How much did you like this robot? Not at all 1 2 3 4 5 Very much  3. How much did you like working with this robot? Not at all 1 2 3 4 5 Very much  4. For each word below, please indicate how well it describes your INTERACTION with the robot.   Describes very poorly      Describes very well Boring  1  2  3  4  5 Enjoyable 1  2  3  4  5 Engaging 1  2  3  4  5  5. For each word below, please indicate how well it describes the ROBOT you just worked with.   Describes very poorly      Describes very well Aggressive 1  2  3  4  5 Independent 1  2  3  4  5 Helpful  1  2  3  4  5 Assertive 1  2  3  4  5 Efficient 1  2  3  4  5 Useful  1  2  3  4  5 Competitive 1  2  3  4  5 Dominant 1  2  3  4  5 Reliable 1  2  3  4  5 Forceful 1  2  3  4  5 6. 
Please rate your impression of the ROBOT on these scales:  Apathetic 1 2 3 4 5 Responsive Mechanical 1 2 3 4 5 Organic Pleasant 1 2 3 4 5 Unpleasant Intelligent  1 2 3 4 5 Unintelligent Fake 1 2 3 4 5 Natural Incompetent 1 2 3 4 5 Competent Machinelike 1 2 3 4 5 Humanlike Friendly  1 2 3 4 5 Unfriendly Moving elegantly 1 2 3 4 5 Moving rigidly Stagnant 1 2 3 4 5 Lively Like  1 2 3 4 5 Dislike Kind 1 2 3 4 5 Unkind Artificial  1 2 3 4 5 Lifelike Figure C.20: Main questionnaire used to collect the subject’s perception of the robot in Study III. 154 Appendix D Acceleration-based Hesitation Profile Trajectory Characterisation and Implementation Algorithms Contents D.1 Offline Acceleration-based Hesitation Profile (AHP)-based Tra- jectory Generation . . . . . . . . . . . . . . . . . . . . . . . 156 D.2 AHP-based Trajectory Implementation for Real-time Human- Robot Shared Task . . . . . . . . . . . . . . . . . . . . . . . 160 D.2.1 Management of the Robot’s Task . . . . . . . . . . . 160 D.2.2 Management of Real-time Gesture Trajectories . . . 162 D.2.3 Calculation of a1 and t1 Parameters for AHP-based Trajectories . . . . . . . . . . . . . . . . . . . . . . 163 D.2.4 Generation of AHP Spline Coefficients . . . . . . . . 163 D.2.5 Human State Tracking and Decision Making . . . . 164 This appendix presents the details of the algorithms used to generate robot trajectories based on the Acceleration-based Hesitation Profile (AHP). As out- 155 lined in Chapter 4, AHP-based trajectories can be generated offline, and as a re- sponse mechanism for a real-time Human-Robot Shared-Task (HRST). Section D.1 presents MATLAB implementation of generating AHP-based trajectories offline. Section D.2 presents an implementation of AHP as a real-time resource conflict response mechanism in Robot Operating System (ROS) and BtClient environment operating a 7-DOF robotic manipulator used in Study III (WAM™, Barrett Tech- nologies, Cambridge, MA, USA). D.1 Offline AHP-based Trajectory Generation This section discusses in detail the generation and implementation of reference tra- jectories used by the 6-DOF robot for Study II. As described in Chapter 5, 12 dif- ferent motions were generated and tested. Figure D.1 provides an overview of the trajectory generation process. All reference trajectory generation codes presented in this section are written in MATLAB. Thethe quinticpoints wz outputAV function generates time-series po- sition data for the robot by calling the quinticpoints wz outputAV func- tion. Upon receiving reference trajectories for individual motion segments from the quinticpoints wz outputAV function, the script appends these motions as a stream of multiple motion segments. The following pseudo code outlines the algorithm for this script. 1 I n i t i a l i z e constants : maximum post ions o f the robot , i n i t i a l p o s i t i o n ... coord inates , f a c t o r to slow down the t r a j e c t o r y 2 L i nea r l y i n t e r p o l a t e from s t a r t i n g pos i t i o n o f robot to the i n i t i a l p o s i t i o n 3 f o r i =1 :1 :N 4 [ Ax , Vx , mot ion in X , mot ion in Z ] = ... qu in t i cpo in ts wz ou tpu tAV ( MotionType ( i ) , a1 ( i ) , t1 ( i ) , ... min imum z ax is pos i t ion ( i ) , s low down fac tor ) ; 5 Append mot ion in X to e a r l i e r Xax is t r a j e c t o r i e s 6 Append mot ion in Z to e a r l i e r Zax is t r a j e c t o r i e s 7 Append an empty t r a j e c t o r y to r es t between motions f o r both X and ... 
Zax is t r a j e c t o r i e s 8 end 9 10 Append t ime stamps to mot ion in X 11 Append t ime stamps to mot ion in Z 156 Once the quinticpoints wz outputAV function is called, it receives the type of motion to be generated, two parameter values, minimum Z-axis position, and a scaling factor to slow down the reference trajectory. Using this information, the function generates the requested AHP-based trajectories in the X-axis, gener- ates a Z-axis that accommodates the X-axis, and returns the position profile of the trajectory. The following pseudo code outlines this algorithm. 1 Define the AHP ra t i o s , C1, C2, B1 , and B2. 2 Calcu la te acce le ra t i on p r o f i l e o f Sp l ines 1 through 3 3 Compute f i n a l acce le ra t i on value o f Sp l ine 3 4 Compute f i n a l v e l o c i t y values f o r Spl ines 1 through 3 5 Compute f i n a l po s i t i o n values f o r Spl ines 1 through 3 6 Compute Spl ine 4 using the f i n a l pos i t i on , ve l o c i t y , and acce le ra t i on ... value o f Sp l ine 3 7 Append pos i t i o n t r a j e c t o r i e s o f Sp l ines 1 through 4 8 Ca l l gen z qu i n t i c s f unc t i on and rece ive fou r Zax is sp l ines , z1 , z2 , z3 ... and z4. 9 Solve symbol ic Zax is sp l i nes z1 , z2 , z3 and z4 from gen z qu i n t i c s a t ... every sampling per iod 10 Append the Zax is pos i t i o n t r a j e c t o r i e s The Z-axis calculation of the quinticpoints wz outputAV function is accomplished by calling the gen z quintics function. This function symbol- ically produces four quintic trajectories that span from one end of a spline to the next. The first Z-axis spline, for example, spans the entire duration of the first X-axis AHP spline, and the second Z-axis spline starts at t1 and spans the en- tire duration of the second X-axis AHP spline and so on. Presented below is the gen z quintics algorithm. 1 f unc t i on [ z1 , z2 , z3 , z4 ] = gen z qu in t i c s ( zmax1 , zlow , zmax2 , z1t , ... z2t , z3t , z4 t ) 2 syms t ; 3 a max1 = 0.02 ; 4 a max2 = 0.02 ; 5 a low = a max1 ; 6 7 z1 = qu in t i c sp l i ne sym gen (0 , 0 , 0 , zmax1 , 0 , a max1 ) ; 8 z1 = subs ( z1 , f t g , f t / z1 t g) ; 9 z2 = qu in t i c sp l i ne sym gen (zmax1 , 0 , a max1 , zlow , 0 , a low ) ; 157 10 z2 = subs ( z2 , f t g , f t / z2 t g) ; 11 z3 = qu in t i c sp l i ne sym gen ( zlow , 0 , a low , zmax2 , 0 , a max2 ) ; 12 z3 = subs ( z3 , f t g , f t / z3 t g) ; 13 z4 = qu in t i c sp l i ne sym gen (zmax2 , 0 , a max2 , 0 , 0 , 0) ; 14 z4 = subs ( z4 , f t g , f t / z4 t g) ; 158 Start the Reference Trajectory Generator (DataPt to SignalGenerater Long Apr4 acc.m) Reference Trajectory Generator calls Quintic Point Generators (quinticpoints wz outputAV.m) Quintic Point Generator calculates time index, and acceleration, velocity, and position version of the four spline AHP-based motions Quintic Point Generator calls Analytic Quintic Generator and produces four connected splines for the Z-axis motion (gen z quintics.m) Quintic Point Generator receives the four splines in analytic form, and samples them with the generated time index at ts. Figure D.1: Overview of the AHP-based trajectory generation process. 159 D.2 AHP-based Trajectory Implementation for Real-time Human-Robot Shared Task This section presents pseudo codes of the algorithms implemented in ROS environ- ment to conduct the experiment in Study III. The relationship between the nodes have been outlined in Chapter 6, and a graphical overview of these nodes are repli- cated here (see Figure D.2). 
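Both the offline generator above and the ROS nodes described in the following subsections build their motion segments from quintic splines defined by boundary conditions on position, velocity, and acceleration. A minimal numeric counterpart to the quintic_spline_sym_gen helper referenced in the listing above is sketched below for orientation; the helper itself is symbolic and is not reproduced in this thesis, so the function name, argument order, and normalized-time convention here are assumptions made for illustration rather than the thesis code.

% Illustrative quintic spline generator: returns coefficients of
% p(tau) = c(1) + c(2)*tau + ... + c(6)*tau^5 satisfying position,
% velocity, and acceleration boundary conditions at tau = 0 and tau = 1.
function c = quintic_coeffs(q0, v0, a0, q1, v1, a1)
    A = [1 0 0 0 0  0;     % p(0)   = q0
         0 1 0 0 0  0;     % p'(0)  = v0
         0 0 2 0 0  0;     % p''(0) = a0
         1 1 1 1 1  1;     % p(1)   = q1
         0 1 2 3 4  5;     % p'(1)  = v1
         0 0 2 6 12 20];   % p''(1) = a1
    c = A \ [q0; v0; a0; q1; v1; a1];  % solve the 6-by-6 boundary-condition system
end

A segment of duration T can then be evaluated as polyval(flipud(c), t/T) for t in [0, T], which mirrors the t → t/z1t substitution applied to the Z-axis splines in the gen_z_quintics listing.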
Section D.2.1 presents the pseudo code for the gesture launcher node, which manages the running of the entire experiment. Section D.2.2 presents the gesture engine node that manages the triggering of different trajectory splines for different experimental conditions. The gesture engine node uses an inde- pendent node to calculate the AHP parameters necessary for computing AHP spline coefficients. Algorithms for this node, the calculate parameter node, is presented in Section D.2.3. Section D.2.4 discusses the get s2 s3 coefs node that uses the calculated AHP parameter values from calculate parameter to compute coefficients for splines 2 and 3 of AHP-based trajectories. Finally, Section D.2.5 presents the decision maker node that is used to keep track of the four human task states. The decision maker node is called by the gesture engine node to determine whether a collision is imminent or not. All nodes presented in this section are written in C++. D.2.1 Management of the Robot’s Task In this section, the gesture launcher node that manages the robot’s task be- haviour is described. Once triggered, this node is provided with, by the experi- menter, the number of times the robot must successfully inspect the marbles bin, and the experimental condition in which it should operate. 1 Sleep f o r the i n i t i a l dwe l l i ng t ime of 4 seconds 2 Ca l l gesture engine to move 3 mot ion count++ 4 i f (Stype motion completed ) 5 s count++ 6 else 7 i f (Rtype motion completed ) 8 r coun t++ 160 Figure D.2: The software system architecture implemented for the Study III HRST experiment replicated from Figure 6.7. The WAMServer node interfaces btClient control algorithms that operate outside of ROS to directly control the robot. Further detail of the interface and btClient algorithms are outlined in Figure 6.5. 9 else i f ( Robot ic Avoidance motion completed ) 10 ra count++ 11 else 12 r epo r t e r r o r 13 whi le ( s count < requested number f o r Stype reach )f 14 Get human task s ta te t imes from decis ion maker node 15 Sleep f o r 80% of human dwel l t ime 16 Request motion from gesture engine node wi th reach t ime 4x human ... reach t ime 17 i f (Stype motion completed ) 18 s count++ 19 else 20 i f (Rtype motion completed ) 21 r coun t++ 22 else i f ( Robot ic Avoidance motion completed ) 23 ra count++ 24 else 25 r epo r t e r r o r 26 i f (Rtype o f Robot ic Avoidance motion completed ) 27 Request motion from gesture engine node wi th reach t ime 4x ... human reach t ime 161 28 mot ion count ++; 29 else 30 r epo r t e r r o r 31 Report task complet ion t ime 32 Return D.2.2 Management of Real-time Gesture Trajectories In this section the gesture engine node that manages the triggering of dif- ferent robot motion trajectories (including AHP-based motions) is presented. This node receives commands from the gesture launcher node that is responsible for tracking the dwell times for the robot and triggering the robot to start its reach- ing motion via the gesture engine node. A flow diagram outlining this node is presented in Figure 6.3. 1 I n i t i a l i z e c l i e n t s 2 Ca l l ca lcu la te param 3 i f ( Ca l l ca lcu la te param == success ) 4 Ca l l ge t s2 s3 coe fs 5 i f ( Ca l l ge t s2 s3 coe fs == success ) 6 Ca l l move to car tes ians 7 Sleep u n t i l ( t = t 1  0.04) 8 Ca l l decis ion maker 9 i f ( Exper iment Condi t ion != B l ind ) 10 i f ( Experiment Cond i t ion == Hes i t a t i on && decis ion maker ... 
== con f l i c t imm inen t ) 11 Ca l l move to ca r tes ian qu in t f o r sp l i ne2 movements 12 Wait u n t i l t r a j e c t o r y f i n i s hed 13 Ca l l move to ca r tes ian qu in t w i th sp l i ne 3 c o e f f i c i e n t s 14 Wait u n t i l t r a j e c t o r y f i n i s hed 15 else 16 whi le ( ! abor t mot ion ) 17 Ca l l decis ion maker 18 i f ( decis ion maker == con f l i c t imm inen t ) 19 Get cu r ren t pos i t i o n 20 Move to ( cu r ren t pos i t i o n ) + 0.01 21 Wait u n t i l t r a j e c t o r y f i n i s hed 22 else 23 Sleep f o r 0.05 seconds 24 i f ( T ra j ec t o r y F in ished == True ) 25 Sleep f o r 1 second 26 else 162 27 Wait u n t i l t r a j e c t o r y f i n i s hed 28 Ca l l move to car tes ian to r e t r a c t 29 Wait f o r t r a j e c t o r y f i n i s hed 30 Return D.2.3 Calculation of a1 and t1 Parameters for AHP-based Trajectories This section describes how the key parameters, a1 and t1, are calculated in the ROS environment via the calculate param server node. This node calculates the two key AHP parameters, and provides the information to the gesture engine node (see Section D.2.2) to allow smooth transition to take place between the S- type motions (successful reach-retract motions generated by quintic splines) and the AHP splines. The following pseudo code outlines the calculate param node. 1 I npu t : ( i n i t i a l and f i n a l cond i t i ons o f a q u i n t i c sp l i ne ) q0 , v0 , a0 , ... q1 , v1 , a1 2 Define c = 60*q0 36*v0  9*a0 + 3*a1 24*v1 + 60*q1 3 Define b = 360*q0 +192*v0 +36*a0 24*a1 +168*v1 360*q1 4 Define a = 360*q0 180*v0 30*a0 +30*a1 180*v1 +360*q1 5 6 Define tb = (b + sq r t ( b *b4*a* c ) ) / ( 2 * a ) 7 Define tb1 = (b  sq r t ( b *b4*a* c ) ) / ( 2 * a ) 8 9 i f ( tb  tb1 ) 10 t1 = tb * f i n a l t i m e ; 11 else i f ( tb1 < tb ) 12 t1 = tb1 * f i n a l t i m e ; 13 14 Calcu la te pos i t i o n a t a1 using t1 15 Calcu la te v e l o c i t y a t a1 using t1 16 Calcu la te a1 ( launch acce le ra t i on ) using t1 17 Return D.2.4 Generation of AHP Spline Coefficients This section describes how the coefficients for the AHP splines are calculated for real-time trajectory planning. The get s2 s3 coefs node is a server node in ROS that generates the coefficients for AHP splines 2 and 3. The AHP equations 163 presented in Chapter 4 is used to calculate the spline coefficients. The following pseudo code outlines the get s2 s3 coefs node. 1 Define c h a r a c t e r i s t i c acce le ra t i on r a t i o constants c1 , c2 , b1 , b2 2 3 Spl ine2 coef5 = a1 *0 .1* (1+ c1 ) / ( b2 *b2*b2 ) ; 4 Spl ine2 coef4 = a1*(0.25) *(1+ c1 ) / ( b2 *b2 ) ; 5 Spl ine2 coef3 = 0; 6 Spl ine2 coef2 = a1 * 0 . 5 ; 7 Spl ine2 coef1 = v e l o c i t y o f Spl ine1 at t1 ; 8 Spl ine2 coef0 = pos i t i o n o f Spl ine1 at t1 ; 9 10 Compute Spl ine2 f i n a l po s i t i o n 11 Compute Spl ine2 f i n a l v e l o c i t y 12 Compute Spl ine2 f i n a l acce le ra t i on 13 14 Spl ine3 coef5 = req . a*0.1*(c1c2 ) / ( b3 *b3*b3 ) ; 15 Spl ine3 coef4 = req . a*(0.25) *(c1c2 ) / ( b3 *b3 ) ; 16 Spl ine3 coef3 = 0; 17 Spl ine3 coef2 = req . a*(0.5) * c1 ; 18 Spl ine3 coef1 = dp2 f ; 19 Spl ine3 coef0 = res . p2 f ; 20 21 Compute Spl ine3 f i n a l po s i t i o n 22 Compute Spl ine3 f i n a l v e l o c i t y 23 Compute Spl ine3 f i n a l acce le ra t i on 24 25 Return D.2.5 Human State Tracking and Decision Making In this section, the decision maker node is presented. 
This node monitors the cable potentiometer readings and is used to keep track of human task states and determine occurrence of collisions. The following pseudo code describes how the human states are determined, and how the duration in each of the four states are recorded. 1 Define dwe l l t h resho ld 2 Define re l oad th resho ld 3 4 i f ( cable < dwel l )f 164 5 i f ( s t a t e == dwe l l i ng ) 6 dwe l l s t a r t = cu r r t ime 7 s ta te = dwe l l i ng 8 else i f ( s t a te == reaching ) 9 dwe l l s t a r t = cu r r t ime 10 s ta te = dwe l l i ng 11 else 12 s t i l l dwe l l ing , or e r r o r . Do noth ing 13 14 else i f ( cable > dwel l ) 15 i f ( cable > re load ) 16 i f ( s t a t e == re load ing ) 17 r e l o a d s t a r t = cu r r t ime 18 reach t ime = cu r r t ime  r each s t a r t 19 else i f ( s t a te == re t r a c t ed and re turned to re load ing ) 20 r e l o a d s t a r t = cu r r t ime 21 s ta te = re load ing 22 else 23 i f ( s t a t e == reaching ) 24 do noth ing 25 else i f ( s t a t e == re load ing ) 26 re load t ime = cu r r t ime  r e l o a d s t a r t 27 s ta te = r e t r a c t i n g 28 else i f ( s t a t e == r e t r a c t i n g ) 29 do noth ing 30 else 31 r each s t a r t = cu r r t ime 32 Subt rac t cu r ren t t ime from the g loba l va r i ab l e dwe l l s t a r t 33 dwe l l t ime = cu r r t ime  dwe l l s t a r t 34 s ta te = reaching 35 else 36 do noth ing When the node is called to make a decision on whether a collision is imminent or not, the following algorithm is triggered. 1 i f ( cable > dwe l l t h r esho l d && s ta te == reaching ) 2 c o l l i s i o n i s imminent 3 else i f ( cable > dwe l l t h r esho ld && s ta te == re load ing ) 4 c o l l i s i o n i s imminent 5 else 6 no imminent c o l l i s i o n 165 The same node, when called by the gesture launcher node, returns the times recorded for each of the key human states. The following pseudo code demonstrates how this node compares the recorded human task state times to a fixed maximum and minimum thresholds and returns dwell time and reach time to be used by the robot. 1 Define maximum reach time , reach time max to be 0.6 seconds 2 Define minimum reach time , reach t ime min to be 0.2 seconds 3 Define maximum dwel l t ime , dwel l t ime max to be 4.0 seconds 4 Define minimum dwel l t ime , dwe l l t ime min to be 0.5 seconds 5 6 i f ( reach t ime > reach time max ) 7 reach t ime = reach time max 8 else i f ( reach t ime < reach t ime min ) 9 reach t ime = reach t ime min ; 10 11 i f ( dwe l l t ime > dwel l t ime max ) / / i f dwe l l t ime i s somehow ... r i d i c u l ous , co r r ec t i t 12 dwe l l t ime = dwel l t ime max 13 else i f ( dwe l l t ime < dwe l l t ime min ) 14 dwe l l t ime = dwe l l t ime min 15 16 r e t u rn dwe l l t ime and reach t ime 166 Appendix E Human Perception of AHP-based Mechanism and its Impact on Performance Contents E.1 Video Observation of Jerkiness and Success from Robot Mo- tions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168 E.1.1 Perceived Success of Robot Motions . . . . . . . . . 168 E.1.2 Perceived Jerkiness of Robot Motions . . . . . . . . 169 E.2 In Situ Perception of AHP-based Motions . . . . . . . . . . . 170 E.2.1 Usefulness . . . . . . . . . . . . . . . . . . . . . . 171 E.2.2 Emotional Satisfaction . . . . . . . . . . . . . . . . 172 E.3 Non-parametric Comparison of Performance Impact of the AHP-based Mechanism . . . . . . . . . . . . . . . . . . . . . 173 E.3.1 Counts of Mistakes . . . . . . . . . . . . . . . . . . 
173 E.3.2 Counts of Collisions . . . . . . . . . . . . . . . . . 174 In this appendix, measures from Studies II and III that have not been discussed in Chapter 5 and Chapter 6 are presented. The scores of the two distractor ques- tions from Study II are not discussed in Chapter 5, and are presented in Section E.1. 167 Human perception measurements from Study III that do not yield statistically sig- nificant finding are discussed in Section E.2. Details of the non-parametric analysis conducted on the collision and mistake measures are presented in Section E.3. E.1 Video Observation of Jerkiness and Success from Robot Motions In Study II, presented in Chapter 5, human perception of robot motions were in- vestigated via an online survey. Of the four questions, two questions, Q1 and Q4, were distractor questions: Q1 Did the robot successfully hit the target in the middle of the table? (1.Not successful - 5. Successful) Q4 Please rate your impression of the robot’s motion on the following scale: (1.Smooth - 5. Jerky) A repeated-measures ANOVA was conducted on all four questions. However, the results for these two questions do not test hypotheses H2.1 and H2.2. Nonetheless, they provide interesting insights into human perception of robotic collision avoid- ance motions in comparison to AHP-based motions. This section discusses these results. Consistent with the results reported in Chapter 5, all sphericity violations in the Analysis of Variance (ANOVA) were corrected using Greenhouse-Geisser approach. E.1.1 Perceived Success of Robot Motions The responses to the first distractor question, Q1 (success score), yield an expected result. Overall, successful motions received a significantly higher score (M=4.70, SE=.08) than all other motion types (F(1:63;68:36) = 244:67; p < :0001). The success score did not change across the different acceleration values used to gen- erate the motions (F(2;84) = :52; p = :60). Post-hoc analysis with Bonferroni correction indicates that this score difference between the successful motions and the other motion types are all significant to p¡.001 level. Figure E.1 shows the distribution of scores for this question. 168 Figure E.1: Overview of the success score collected from a five-point Likert scale question in Study II. E.1.2 Perceived Jerkiness of Robot Motions The results of a repeated-measures ANOVA indicate that the responses to the second distractor question (Q4) also show significant differences across the motion types (F(2:45;102:82) = 11:33; p< :0001). Upon conducting a one-sample t-test of the jerkiness score against the neutral score, only the Robotic Avoidance motion types demonstrate an above-neutral score. All other motion types – Successful, Colli- sion, and AHP-based Hesitation – showed jerkiness score below the neutral score, indicating that these motions are perceived as smooth motions (p < :05 or better for all motion types). The perceived jerkiness of the motions did not significantly vary across the three levels of acceleration (F(2;84) = 2:25; p = :14). Figure E.2 shows the distribution of jerkiness scores. 169 Figure E.2: Overview of the jerkiness score collected from a five-point Likert scale question in Study II. E.2 In Situ Perception of AHP-based Motions This section discusses human perception measurements collected from Study III. In Study III, presented in Chapter 6, two different survey instruments were com- bined to measure human perception of the 7-DOF WAM robot from an HRST ex- periment. 
The experimental conditions included three different robot responses to the occurrence of human-robot resource conflicts: in the Blind Condition, the robot did not respond to the conflict at all; in the Hesitation Condition, the robot used AHP-based trajectories to communicate its state of uncertainty to the subject while avoiding the imminent collision; and in the Robotic Avoidance Condition, the robot abruptly stopped to avoid the imminent collision.

Three human perception measurements collected from the study do not demonstrate statistical significance, and are discussed in this section. These measures are usefulness, emotional satisfaction, and perceived intelligence. As demonstrated in Table 6.2, the perceived intelligence measure did not yield an acceptable level of internal reliability. Hence, rather than discussing measurement scores that are not reliable, the measured perceived intelligence is presented in Figure E.3. The usefulness and emotional satisfaction scores were internally reliable (Cronbach's alpha above 0.7 for both measures). Hence, they are discussed in the following sections.

Figure E.3: Overview of perceived intelligence scores collected from five-point Likert scale questions in Study III.

E.2.1 Usefulness

The Hesitation Condition shows the highest mean usefulness score of the three conditions. However, the results from a repeated-measures ANOVA indicate that these score differences are not statistically significant (F(2, 44) = .37, p = .69). No significant score difference is found between the first and second encounters either, although the second encounter shows a higher mean score than the first. A graphical overview of the usefulness scores is shown in Figure E.4.

Figure E.4: Overview of usefulness scores collected from five-point Likert scale questions in Study III.

E.2.2 Emotional Satisfaction

Similar to the usefulness measure, the second encounter of the Hesitation Condition in particular shows the highest emotional satisfaction. However, the results from a repeated-measures ANOVA indicate that the scores are not significantly different across Conditions (F(1.48, 32.52) = 2.68, p = .10) or Encounters (F(1, 22) = 1.89, p = .18). Emotional satisfaction scores for each Condition and Encounter are presented in Figure E.5.

Figure E.5: Overview of emotional satisfaction scores collected from five-point Likert scale questions in Study III.

E.3 Non-parametric Comparison of Performance Impact of the AHP-based Mechanism

In Study III, the numbers of collisions and mistakes that occurred during the experiment are considered secondary measures of human-robot performance. This section discusses the Chi-Square tests conducted on these non-parametric measures. Section E.3.1 presents the mistakes measure, and Section E.3.2 discusses the collision measure.

E.3.1 Counts of Mistakes

In order to compare the number of mistakes made in each Condition and Encounter, the counts of mistakes are cross tabulated for Chi-Square analysis. The cross tabulation is presented in Table E.1. The Chi-Square test indicates that the counts of mistakes are not significantly different across the conditions (χ²(6, N = 144) = 3.29, p = .77). Most subjects, as shown in Table E.1, did not make any mistakes, resulting in similar distributions of mistakes across the conditions. This implies that, given the small effect, a much larger number of subjects would need to be recruited to find significance in this measure.
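As a cross-check on the figures reported above, the Pearson chi-square statistic can be reproduced directly from the observed counts in Table E.1. The short MATLAB sketch below is provided for illustration only and is not the analysis code used in the study; it assumes the Statistics Toolbox is available for chi2cdf.

% Counts of mistakes (0, 1, 2, or 3 mistakes) by Condition, from Table E.1.
O = [42 4 1 1;     % Blind
     45 3 0 0;     % Hesitation
     43 4 1 0];    % Robotic Avoidance

rowTot = sum(O, 2);
colTot = sum(O, 1);
N      = sum(O(:));
E      = rowTot * colTot / N;              % expected counts under independence
chi2   = sum((O(:) - E(:)).^2 ./ E(:));    % Pearson chi-square statistic
df     = (size(O,1) - 1) * (size(O,2) - 1);
p      = 1 - chi2cdf(chi2, df);            % reproduces chi2(6) = 3.29, p = .77

fprintf('chi2(%d, N = %d) = %.2f, p = %.2f\n', df, N, chi2, p);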
Table E.1: Cross tabulation outlining the differences in the counts of mistakes by Condition as a factor.

Condition               Mistakes:      0      1     2     3    Total
Blind       Count                     42      4     1     1       48
            Exp. Count              43.3    3.7    .7    .3     48.0
            % of Total             29.2%   2.8%   .7%   .7%    33.3%
Hesitation  Count                     45      3     0     0       48
            Exp. Count              43.3    3.7    .7    .3     48.0
            % of Total             31.3%   2.1%   .0%   .0%    33.3%
Robotic     Count                     43      4     1     0       48
Avoidance   Exp. Count              43.3    3.7    .7    .3     48.0
            % of Total             29.2%   2.8%   .7%   .7%    33.3%
Total       Count                    130     11     2     1      144
            Exp. Count               130     11     2     1      144
            % of Total             90.3%   7.6%  1.4%   .7%   100.0%

Table E.2: Chi-Square tests of counts of mistake differences by Condition.

                                 Value   DOF   Asymp. Sig. (2-sided)
Pearson Chi-Square                3.29     6                     .77
Likelihood Ratio                  4.11     6                    .661
Linear-by-Linear Association       .52     1                     .47
Number of Valid Cases              144

E.3.2 Counts of Collisions

This section presents the number of collisions made by the subjects during the main experiment of Study III. Presented in Table E.3 is a cross tabulation of the collision measure organized by Condition. Chi-Square test results (see Table E.4) demonstrate that there is a significant difference in the collision counts (χ²(8, N = 144) = 75.8, p < .001). This difference lies between the Blind Condition and the non-collision conditions (the Hesitation and Robotic Avoidance Conditions). This is a trivial result, considering that the robot motions in the Hesitation and Robotic Avoidance Conditions were designed to avoid collisions.

Table E.3: Cross tabulation outlining the differences in the counts of collisions by Condition as a factor.

Condition               Collisions:    0      1     2     3     6    Total
Blind       Count                     18     16    10     3     1       48
            Exp. Count              38.0    5.3   3.3   1.0    .3     48.0
            % of Total             12.5%  11.1%  6.9%  2.1%   .7%    33.3%
Hesitation  Count                     48      0     0     0     0       48
            Exp. Count              38.0    5.3   3.3   1.0    .3     48.0
            % of Total             33.3%    .0%   .0%   .0%   .0%    33.3%
Robotic     Count                     48      0     0     0     0       48
Avoidance   Exp. Count              38.0    5.3   3.3   1.0    .3     48.0
            % of Total             33.3%    .0%   .0%   .0%   .0%    33.3%
Total       Count                    114     16    10     3     1      144
            Exp. Count               114     16    10     3     1      144
            % of Total             79.2%  11.1%  6.9%  2.1%   .7%   100.0%

Table E.4: Chi-Square tests of counts of collisions differences by Condition.

                                 Value   DOF   Asymp. Sig. (2-sided)
Pearson Chi-Square               75.79     8                     .00
Likelihood Ratio                 83.87     8                     .00
Linear-by-Linear Association     38.38     1                     .47
Number of Valid Cases              144
