UBC Theses and Dissertations



What should a robot do? Design and implementation of human-like hesitation gestures as a response mechanism for human-robot resource conflicts. Moon, AJung, 2012



Full Text

What Should a Robot Do? Design and Implementation of Human-like Hesitation Gestures as a Response Mechanism for Human-Robot Resource Conflicts

by AJung Moon
B.A.Sc., The University of Waterloo, 2009

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Applied Science in THE FACULTY OF GRADUATE STUDIES (Mechanical Engineering)

The University of British Columbia (Vancouver)
April 2012
© AJung Moon, 2012

Abstract

Resource conflict arises when people share spaces and objects with each other. People easily resolve such conflicts using verbal/nonverbal communication. With the advent of robots entering homes and offices, this thesis builds a framework to develop a natural means of managing shared resources in human-robot collaboration contexts. In this thesis, hesitation gestures are developed as a communicative mechanism for robots to respond to human-robot resource conflicts.

In the first of the three studies presented in this thesis (Study I), a pilot experiment and six online surveys provided empirical demonstrations that humans perceive hesitations from robot trajectories mimicking human hesitation motions. Using the set of human motions recorded in Study I, a characteristic acceleration profile of hesitation gestures was extracted and distilled into a trajectory design specification representing hesitation, namely the Acceleration-based Hesitation Profile (AHP). In Study II, the efficacy of the AHP was tested and validated. In Study III, the impact of AHP-based robot motions was investigated in a Human-Robot Shared-Task (HRST) experiment.

The results from these studies indicate that AHP-based robot responses are perceived by human observers to convey hesitation, both in observational and in situ contexts. The results also demonstrate that AHP-based responses, when compared with the abrupt collision avoidance responses typical of industrial robots, do not significantly improve or hinder human perception of the robot or human-robot team performance. The main contribution of this work is an empirically validated trajectory design that can be used to convey a robot's state of hesitation in real time to human observers, while achieving the same collision avoidance function as a traditional collision avoidance trajectory.

Preface

This thesis is submitted in partial fulfillment of the requirements for the degree of Master of Applied Science in Mechanical Engineering at the University of British Columbia.

An outline of the three experiments presented in this thesis has been published as a position paper at the Workshop on Interactive Communication for Autonomous Intelligent Robots (ICAIR), 2010 International Conference on Robotics and Automation: Moon, A., Panton, B., Van der Loos, H. F. M., & Croft, E. A. (2010). Using Hesitation Gestures for Safe and Ethical Human-Robot Interaction. Workshop on Interactive Communication for Autonomous Intelligent Robots at the 2010 International Conference on Robotics and Automation (pp. 11-13). Anchorage, United States. The author presented this work at the workshop.

A co-author for this publication, Mr. Boyd Panton, was a co-op student at the Collaborative Advanced Robotics and Intelligent Systems Laboratory. Panton was involved in the design of the human-subject interaction task described in Chapter 3. In preparation for Study III, presented in Chapter 6, he investigated different options for setting up the experimental workspace for the study.
He proposed using a stereoscopic camera for sensing human motions during the main experiment. However, a different approach was used in the study. He produced a technical report from his work: Panton, B. (2010). The Development of a Human Robot Interaction Project (pp. 1-42). Vancouver.

Study I, presented in Chapter 3, and the trajectory design specification, the Acceleration-based Hesitation Profile (AHP), presented in Chapter 4, have been published in conference proceedings: Moon, A., Parker, C. A. C., Croft, E. A., & Van der Loos, H. F. M. (2011). Did You See It Hesitate? - Empirically Grounded Design of Hesitation Trajectories for Collaborative Robots. 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 1994-1999). San Francisco, CA (©2011 IEEE). This work was presented by the author at the 2011 IROS conference.

This jointly authored paper involved the work of Dr. Chris A. C. Parker. He supervised the experiment design of Study I and the process of developing the AHP from a collected set of human motion trajectories (Chapter 4). The controller used to servo the CRS A460 robot in Studies I and II of this thesis is a modified version of a controller developed by Parker.

The two studies presented in Chapters 5 and 6 have been submitted as a journal manuscript, which is under review at present: Moon, A., Parker, C. A. C., Croft, E. A., & Van der Loos, H. F. M. (2012). Design and Impact of Hesitation Gestures during Human-Robot Resource Conflicts. Journal of Human Robot Interaction. (Submitted January, 2012).

All human-subject experiments described in this thesis were approved by the University of British Columbia Behavioural Research Ethics Board (H10-00503).
Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
Glossary
Acknowledgments
1 Introduction
  1.1 Thesis Outline
2 Background and Motivating Literature
  2.1 Nonverbal Communication in Human-Human Interaction
  2.2 Hesitation
  2.3 Human-Robot Shared Task
  2.4 Trajectory Implications in Nonverbal Human-Robot Interaction
3 Study I: Mimicking Communicative Content Using End-effector Trajectories
  3.1 Experimental Methodology
    3.1.1 Human Subject Pilot
    3.1.2 Robotic Embodiment of Human Motion
    3.1.3 Session Video Capture
    3.1.4 Survey Design
    3.1.5 Data Analysis
  3.2 Online Survey Results
    3.2.1 Identification of Segments Containing Hesitation Gestures
    3.2.2 Perception Consistency between Human Gestures and Robot Gestures
  3.3 Discussion
    3.3.1 Limitations
  3.4 Summary
4 Designing Communicative Robot Hesitations: Acceleration-based Hesitation Profile
  4.1 Method
    4.1.1 Pre-processing – Filtering, Segmentation, and Principal Component Analysis
    4.1.2 Qualitative Observations and Typology of Hesitation and Non-Hesitation Motions
    4.1.3 Quantitative Observations and Characterization Approach
  4.2 Acceleration-based Hesitation Gestures
    4.2.1 AHP-based Trajectory Generation
    4.2.2 Real-time Implementation
  4.3 Discussion
    4.3.1 Limitations
  4.4 Summary
5 Study II: Evaluating Extracted Communicative Content from Hesitations
  5.1 Experimental Methodology
    5.1.1 Trajectory Generation
    5.1.2 Video Capture
    5.1.3 Survey Design
    5.1.4 Data Analysis
  5.2 Results
    5.2.1 H2.1: AHP-based Robot Motions are Perceived as Hesitant
    5.2.2 H2.2: AHP-based Robot Motions are More Human-like than Robotic Avoidance Motions
    5.2.3 H2.3: Non-Expert Observations of AHP-based Motions are Robust to Changes in Acceleration Parameters
  5.3 Discussion
    5.3.1 Limitations
  5.4 Summary
6 Study III - Evaluating the Impact of Communicative Content
  6.1 Method
    6.1.1 Experimental Task and Procedure
    6.1.2 Measuring Human Perception and Task Performance
    6.1.3 System Design and Implementation
    6.1.4 Data Analysis
  6.2 Results
    6.2.1 H3.1: Can Humans Recognize AHP-based Motions as Hesitations in Situ?
    6.2.2 H3.2: Do Humans Perceive Hesitations More Positively?
    6.2.3 H3.3: Does Hesitation Elicit Improved Performance?
  6.3 Discussion
    6.3.1 Limitations
  6.4 Summary
7 Conclusion
  7.1 Can an Articulated Industrial Robot Arm Communicate Hesitation?
  7.2 Can an Empirically Grounded Acceleration Profile of Human Hesitations be Used to Generate Robot Hesitations?
  7.3 What is the Impact of a Robot's Hesitation Response to Resource Conflicts in a Human-Robot Shared-Task?
  7.4 Recommendations and Future Work
Bibliography
A CRS A460 Robot Specifications
B Human Motion Trajectory Characteristics
  B.1 Segmentation of Recorded Human Motions
    B.1.1 Butterworth Filtering algorithm
    B.1.2 Acceleration-based Segmentation Algorithm
  B.2 Overview of Position Profiles
  B.3 Descriptive Statistics of Principal Component Analysis Errors
  B.4 AHP Parameter Values from Human Motions
C Advertisements, Consents, and Surveys
  C.1 Study I Advertisements, Online Surveys, and Consents
  C.2 Study II Advertisement, Online Surveys, and Consent
  C.3 Study III Advertisements, Questionnaires, and Consent
D Acceleration-based Hesitation Profile Trajectory Characterisation and Implementation Algorithms
  D.1 Offline AHP-based Trajectory Generation
  D.2 AHP-based Trajectory Implementation for Real-time Human-Robot Shared Task
    D.2.1 Management of the Robot's Task
    D.2.2 Management of Real-time Gesture Trajectories
    D.2.3 Calculation of a1 and t1 Parameters for AHP-based Trajectories
    D.2.4 Generation of AHP Spline Coefficients
    D.2.5 Human State Tracking and Decision Making
E Human Perception of AHP-based Mechanism and its Impact on Performance
  E.1 Video Observation of Jerkiness and Success from Robot Motions
    E.1.1 Perceived Success of Robot Motions
    E.1.2 Perceived Jerkiness of Robot Motions
  E.2 In Situ Perception of AHP-based Motions
    E.2.1 Usefulness
    E.2.2 Emotional Satisfaction
  E.3 Non-parametric Comparison of Performance Impact of the AHP-based Mechanism
    E.3.1 Counts of Mistakes
    E.3.2 Counts of Collisions
Glossary

AHP: Acceleration-based Hesitation Profile, a characteristic trajectory profile commonly observed in a particular type of hesitation gesture, as elaborated in Chapter 4.
ANOVA: Analysis of Variance, a set of statistical techniques to identify sources of variability between groups.
PCA: Principal Component Analysis.
ROS: Robot Operating System.
HH: Human-Human condition.
HR: Human-Robot condition.
HRI: Human-Robot Interaction.
HCI: Human-Computer Interaction.
HRST: Human-Robot Shared-Task.

Acknowledgments

I would like to thank my supervisors, Drs. Elizabeth A. Croft and Machiel Van der Loos. They have patiently provided me with guidance and support not only for the development of this thesis work, but also for helping me to navigate through academia as a novice researcher. More importantly, they provided me with the freedom to explore the field of Human-Robot Interaction (HRI), while continuing to support my interests in Roboethics.

I would also like to thank Dr. Chris A. C. Parker for his mentorship that I sought on a nearly daily basis. He has inspired this thesis project on developing hesitation gestures for Human-Robot Shared-Task (HRST), and his technical assistance and insight for the project have been invaluable. My thanks also go to Drs.
Karon MacLean (Department of Computer Science, UBC) and Craig Chapman (Department of Psychology, UBC) for their help in designing the online surveys for Studies I and II, respectively; Dr. John Petkau (Department of Statistics, UBC), Mr. Lei Hua (Department of Statistics, UBC), Dr. Michael R. Borich (Brain Behavior Laboratory, UBC), and Ms. Susana Zoghbi for their statistical consultation of data analysis of Studies I and II; and Dr. Peter Danielson (Centre for Applied Ethics, UBC) and his team for providing me with opportunities to learn qualitative and mixed-methods approaches that enriched the experiment design and data analysis of Study III. The help and support from the members of the CARIS lab have been invaluable. Ergun Calisgan volunteered his time to explore integration of a vision system for Study III. Although the vision system was not employed in the study due to technical issues, his help on investigating the system was very helpful in choosing an alternative approach. Numerous individuals proofread this thesis, including xviii  Tom Huryn, Matthew Pan, Eric Pospisil, Navid Shirzad, Aidin Mirsaeidi, and Dr. Brian Gleeson (Department of Computer Science, UBC). These individuals have also provided valuable feedback throughout my thesis work. I would also like to acknowledge the work of two co-op students, Boyd Panton and Shalaleh Rismani. Boyd Panton helped develop the experimental task of Study I that became the foundation for designing experimental tasks in subsequent studies. Shalaleh Rismani participated in the process of producing videos for Study II, and helped recruit subjects for Studies II and III. Many thanks go to the numerous individuals – especially, Mr. Jason Yip (University of Maryland) – who helped recruit subjects for the three studies. I would also like to thank all of the online survey participants and experiment subjects who volunteered their valuable time for this research. I would like to acknowledge the financial support of the Natural Sciences and Engineering Research Council of Canada, and the Institute for Computing, Information and Cognitive Systems. Finally, I would like to thank my parents and my sister for their love, prayers, and support.  xix  Chapter 1  Introduction Collaborating agents often share spaces, parts, tools and equipment and, in the normal course of their work, encounter conflicts when accessing such shared resources. Humans often resolve such conflicts using both verbal and nonverbal communication to express their intentions and negotiate a solution. However, when such conflict arises between a human and a robot, then what should the robot do? Answers to this question depend on the context. More deeply, the answers to what an agent should do in the context of human interaction are grounded by a set of morals, i.e., “standards of behavior or beliefs concerning what is and is not acceptable” [50]. With the advent of robots entering society as assistants and teammates, it is important to frame the answer to this question before robots are widely deployed in homes and workplaces. While current industrial robots are designed for boring, repetitive and dangerous tasks, their capacity to make context-based decisions and moral judgments remains highly limited compared to that of humans. Robots that share spaces and resources with humans today (e.g., autonomous guided vehicles) typically use collision avoidance mechanisms to deal with such conflicts. 
Many mobile robot platforms are designed to stop or find an alternate path of travel when dynamic obstacles, such as humans, interfere with their course. These robot behaviours are designed with human safety as the highest priority and have been an effective means for avoiding conflicts and collisions. By design, such systems default to avoidance as the single predetermined solution to human-robot resource conflicts. 1  But what if, similar to human-human interaction, a robot could attempt to negotiate a solution with its human user? Such a system could leverage the high-level decision making skills of humans without undermining the technological benefits that a robot can provide. For such human-robot negotiation to take place, humanrobot teammates must fluently and bidirectionally communicate with each other; a robot needs to communicate its intentions and behaviour states to its users and readily understand human expressions of intentions and internal states. This thesis develops a framework to enable such interactive resolution of human-robot resource conflicts. In particular, this work focuses on how to program a robot to display uncertainty to human observers using hesitation gestures during a conflict in a Human-Robot Shared-Task (HRST) context. The result of allowing a robot to communicate uncertainty opens up the possibility of the robot practising alternate moral behaviours acceptable and understandable to humans in the face of a shared resource conflict. This work is motivated by the nonverbal communication humans use to communicate uncertainty and dynamically resolve conflicts with one another. Hesitation gestures are frequently observed in human-human resource conflict situations. When multiple people reach for the same object at the same time, one or more of the engaged parties often exhibit jerky, stopping hand motions mid-reach. They often resolve the resource conflict via a verbal/nonverbal dialogue involving these hesitation gestures. During resource conflicts, hesitation gestures not only serve the function of avoiding collisions, but also serve as a communication mechanism that help express the intentions of the person who exhibits the gesture. Hesitation is one of many nonverbal cues that humans use to communicate their internal states [1]. Numerous studies in psychology have found that these communicative behaviours also influence the perception and behaviours of their observers. For example, Becchio and colleagues studied the impact of social and psychological elements on the kinematics of human reach-to-grasp motions [5, 6, 62]. Results from their studies demonstrate that the kinematics of these motions, while achieving the same function, vary according to the purpose of the motion, the intentions of the person exhibiting the motion, and the intentions expressed in the motions of another person. A number of nonverbal gestures have also been studied in Human-Robot In2  teraction (HRI) contexts. A large body of work focuses on robot recognition of human nonverbal cues and human recognition of nonverbal cues expressed by a robot. Similarly to the way in which different human motions that serve the same function can communicate different internal states and intentions, a study by Kuli´c and Croft demonstrated that different functional robot trajectories can elicit different human responses to the robot [40]. 
This finding and many others support the notion that the manner in which a robot collaborates with people in a shared resources environment affects the user’s perception of the robot. A study by Burgoon et al. [13] suggests that, in positive teamwork, each team member has a positive perception of the other and the collaborative task yields a positive output. Therefore, ensuring positive user perception of a robotic partner/teammate is particularly important for improving human-robot collaboration. The contributions of this thesis, comprised of three studies, extend the body of work in nonverbal  HRI .  Prior work has not investigated whether the commu-  nicative content of human hesitation gestures can be represented in the motions of an articulated robot. To fill this knowledge gap, this thesis establishes empirical support for the hypothesis that humans observe a robot’s replication of human hesitation motions as hesitations (Study I). This work also provides an empirically grounded design specification for generating anthromimetic hesitation gesture trajectories on a robot (Study II). This trajectory specification is devised such that, when implemented as a real-time conflict response mechanism, it can be used to generate robot motions that are recognized as hesitations by human observers in situ. The outcome of these two studies enabled the creation of human observable communicative hesitation on a robot arm. This new behaviour permitted the implementation of Study III such that the impact of hesitation as a conflict response mechanism could be investigated. In particular, Study III was conducted to ascertain whether the devised conflict response mechanism, when compared with a traditional collision avoidance mechanism, has a positive impact on human-robot collaboration.  1.1  Thesis Outline  This section describes the organization and contents of the chapters in this thesis.  3  Chapter 2 discusses related works from the field of psychology, Human-Computer Interaction (HCI), and  HRI .  The chapter mainly focuses on studies that discuss  nonverbal human-robot communication and human-robot collaboration. There has been limited research focused on hesitations as kinesic1 hand gestures. Hence, in order to design and implement hesitation gestures on a robot, it is necessary to understand which human motions are perceived as hesitations. Chapter 3 presents the first of three human-subject studies, Study I, designed to empirically identify and record human motions that are perceived as hesitations by human observers. This study uses recorded human motions to test whether a simplified version of human hesitation gestures implemented on a robotic manipulator is also seen as a hesitation gesture. This study hypothesizes that when a robot mimics only the wrist trajectories of human hesitation motions, the robot can be perceived as being hesitant. However, this study does not explore how hesitation motions are different from other types of motions. Based on the positive findings from Study I, Chapter 4 presents qualitative and quantitative observations of the different types of human motions recorded and identified in Study I. This chapter describes the process of extracting key differences between hesitation and non-hesitation trajectories. The extracted trajectory features are formulated into a trajectory design specification, called the Acceleration-based Hesitation Profile (AHP), for generating human-recognizable robot hesitation motions. 
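To give a concrete sense of what an acceleration-based trajectory design specification can mean in practice, the sketch below is purely illustrative and is not the AHP definition developed in Chapter 4. It assumes only that a hesitation-like response can be parameterized by a peak acceleration a1 and a timing parameter t1 (parameter names borrowed from the AHP implementation appendix, Appendix D), shapes a launch-then-brake acceleration profile from them, and integrates it to obtain a one-dimensional commanded position along the reach axis. The profile shape, the constants, and the function name are all assumptions made for illustration.

```python
import numpy as np

def illustrative_hesitation_trajectory(a1, t1, dt=0.01, t_end=1.2):
    """Illustrative only -- NOT the thesis's AHP specification.
    Builds an acceleration profile that launches toward the target until t1
    (peak acceleration a1), brakes hard over the next t1 seconds so the
    motion reverses into a retreat, then coasts; integrates to 1-D position."""
    t = np.arange(0.0, t_end, dt)
    acc = np.zeros_like(t)
    launch = t < t1
    acc[launch] = a1 * np.sin(np.pi * t[launch] / t1)              # accelerate toward target
    brake = (t >= t1) & (t < 2.0 * t1)
    acc[brake] = -2.0 * a1 * np.sin(np.pi * (t[brake] - t1) / t1)  # sharp braking/yielding peak
    vel = np.cumsum(acc) * dt                                      # crude Euler integration
    pos = np.cumsum(vel) * dt
    return t, acc, vel, pos

# Example: a reach that halts mid-way and pulls back (values are arbitrary).
t, acc, vel, pos = illustrative_hesitation_trajectory(a1=2.0, t1=0.3)
```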
Chapter 5 presents the second human-subject experiment, Study II, which empirically tests the efficacy of the suggested hesitation trajectory design. Human perception of videos of different AHP-based robot trajectories is empirically compared against videos of other types of robot motions via an online survey. This study tests the hypothesis that AHP-based robot motions are perceived as human-like hesitations by human observers. The study confirms this hypothesis within the anthropometric range of AHP parameter values used to generate the motions.

Based on the empirical foundations of Studies I and II, the aforementioned AHP trajectory specification is implemented in a HRST experiment, Study III, as a real-time resource conflict response mechanism on a 7-DOF robot. This study, presented in Chapter 6, explores the impact that robot-exhibited hesitation gestures have on the performance of a human-robot team and human perception of the robot teammate. The following questions are investigated: Can humans recognize AHP-based robot motions as hesitations in situ? Do humans perceive a robot more positively when it hesitates in comparison to when it does not? Does hesitation elicit improved performance of the collaborative task? Functionally, hesitation gestures used in a resource conflict situation achieve the same output as other robot motions that avoid collisions with human users. However, the anthromimetic hesitation gestures designed, implemented, and tested in this thesis caused the users to have a more "human-like" perception of the robot's behaviour while the robot achieved the same functional task.

Chapter 7 discusses the implications of this research in the field of HRI, with a focus on improving the human-robot interaction experience in the HRST domain, and presents the overall conclusions of this thesis.

Footnote 1: According to Birdwhistell, "kinesics is concerned with abstracting from the continuous muscular shifts which are characteristic of living physiological systems those groupings of movements which are of significance to the communication process and thus to the interactional systems of particular groups" (in [42], p. 67).

Chapter 2

Background and Motivating Literature

This chapter reviews previous studies in psychology and Human-Robot Interaction (HRI) to motivate and inform the development of human-like hesitation gestures for a robot in Human-Robot Shared-Task (HRST) contexts. A summary of key findings in the psychology literature discussing human nonverbal behaviours leads this chapter (Section 2.1). Findings reported in the relevant literature emphasize the power of nonverbal communication in human-human interaction. Subsequently, Section 2.2 provides an interdisciplinary overview of previous work discussing hesitations in general and then outlines the need to further understand hesitation gestures in human-human interaction contexts. Section 2.3 introduces the concept of collaboration as discussed in the literature and provides an overview of human-robot communication studies in collaboration contexts. Finally, Section 2.4 reviews literature on how different features of robot motions impact human perception of, and interaction with, a robot. This review provides support from the literature that even an industrial articulated robotic manipulator (a robot arm) can convey an anthropomorphic behaviour state to a human observer.
6  2.1  Nonverbal Communication in Human-Human Interaction  Research in psychology suggests that people reveal their intentions and internal states to human observers even through simple motions such as walking or reaching for an object [5, 6, 44, 54]. This is complemented by the natural human ability to infer information from other people’s motions [1, 22, 67]. Results from numerous studies indicate that the human ability to display and understand nonverbal cues is an effective (and even necessary) means of influencing social interactions in an interpersonal setting [1, 51]. On the other hand, persons with deficits in displaying or understanding nonverbal social cues, as often exhibited by children with autism spectrum disorder, experience significant difficulties successfully interacting with others [27]. Psychologists have further explored the extent to which humans recognize intent or infer internal states specifically from human generated motions. In one study, Johansson recorded various human motions under the point-light1 condition, effectively representing the motions as ten simultaneously moving dots. Results from his experiment demonstrate that humans are able to accurately identify human motions even from such a simplified representation [33]. Subsequently, much research demonstrates that humans ascribe animacy and intention not only to motions of biological beings, but also to moving objects, even when such objects are simple geometric shapes [17, 28, 68]. A study by Ju and Takayama demonstrates that even the automatic opening motions of doors are interpreted by humans as exhibiting a gesture [34]. These findings have inspired research into attribution of animacy by humans in the fields of Human-Computer Interaction (HCI) and  HRI  [23, 66]. In  HCI ,  in  particular, Reeves and Nass demonstrated the highly cited finding that humans treat machines as real, social beings [57]2 . 1 Johansson  attached lights and reflective tape on the joints of an actor’s body while the actor demonstrated natural walking, running, and other motions in the dark. Recordings of this motion showed only the joint positions of the actor as point-lights. 2 Reeves and Nass’s work consisted of a series of human-machine interaction experiments that were modified versions of human-human interaction experiments in psychology. They devised a theory from their findings, called the media equation, which states that “People’s responses to media are fundamentally social and natural.” [57]  7  Leveraging on the human ability to ascribe animacy and intentions to moving bodies, this thesis explores how hesitation gestures – one of many nonverbal gestures humans use – can be synthesized into robot motions that communicate a state of uncertainty recognizable by humans. The following section defines and provides a summary of this particular human behaviour.  2.2  Hesitation  Studies in psychology indicate that cognitive conflicts or internal states of uncertainty in humans and animals are often expressed nonverbally. In humans, such nonverbal expressions include shrugs, frowns, palm-up gestures, self-touch gestures and hesitations [18, 24]. Hesitations, in particular, are a type of communicative cue that humans recognize not only from the behaviour of another person, but also that of animals and insects [64, 70]. Literature suggests several causes of hesitation behaviours: cognitive conflicts [64], difficulty in cognitive processing [63] and reluctance to act [59]. 
These sources of hesitation manifest themselves as a variety of nonverbal cues. Of these cues, discussions on human hesitations have mainly been focused on pauses in speech [32, 43, 47] and periods of indecisiveness during high-level decision-making processes [18, 49].

Doob defines hesitation as a temporal measure: "... the time elapsing between the external and internal stimulation of an organism and his/her or its internal/external response." [18] Consistent with Doob's definition, most studies that investigate hesitations in humans characterize the behaviour in terms of delays. For example, Klapp et al. conducted a study to investigate hesitations that humans exhibit while concurrently performing discrete and continuous tasks [38]. They measured hesitations in human hand motions as 1/3 s or more of pause in the subject's hand while multitasking; Klapp et al. [38] empirically determined this value by intentionally interrupting the human subjects, engaged in a continuous task, with an auditory tone (a small illustrative sketch of such a pause-based criterion is given below). This study demonstrates that hesitations in the hand appear as sudden tensing, rather than relaxing, of the muscles, and that the number of times a subject hesitates decreases with practice.

Measuring hesitations as delays is also found in the HRI domain. Bartneck et al. measured human hesitation as the time taken for a subject to turn off a robot when instructed to do so [3]. Bartneck and colleagues' study used this measure to investigate whether human attribution of animacy to a robot is correlated with the subject's cognitive dilemma of turning off the robot. In another study, Kazuaki et al. programmed hesitations on a robot as the duration of time it takes for a robot (in this case, the AIBO, Sony, Japan) to initiate actions after a human demonstrates to the robot how to shake hands with a person [35]. The results of their study indicate that the management of delays in the robot's response helps improve people's experience of teaching a robot. Building on the results of [35], this thesis tests whether robot hesitations manifested as kinesic gestures, rather than delays, will lead to improvements in human-robot collaboration.

Although a delayed response to a stimulus may occur due to an agent's hesitation, hesitation is not equivalent to a delay. For example, communication latency (a type of delay) is not due to the aforementioned sources of hesitation, such as uncertainty or cognitive conflicts, although communication latency also qualifies as hesitation according to Doob's definition. This thesis addresses the challenge of designing human-like hesitation motions for a robot. Hence, the model of hesitation as a time delay is likely to be insufficient for generating robot motions that convey a state of uncertainty.

Only a few studies have measured and investigated the kinematic manifestation of hesitations. In entomology, hesitation behaviours in hoverflies have been defined and measured as the number of forward and backward motions the insect exhibits in the vicinity of a flower before it lands [70]. This definition and characterization of hesitation was arbitrarily selected as a convenient measure for the study in [70], and does not sufficiently describe the nuance of hesitations as gestures humans perceive when observing reaching motions. In addition, this study involved the motions of one fly, rather than two or more flies working as social actors.
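As a concrete illustration of the delay-based operationalization described above (hesitation measured as a pause of roughly 1/3 s or more in the hand), the sketch below flags pauses in a sampled wrist-speed signal. It is illustrative only: the 50 Hz sampling rate matches the sensors used later in this thesis, but the speed threshold and the function name are assumptions, not values or code from the studies cited.

```python
import numpy as np

def find_pauses(speed, fs=50.0, min_duration=1.0 / 3.0, speed_threshold=0.02):
    """Return (start_time, end_time) pairs where the wrist speed stays below
    speed_threshold (m/s, assumed) for at least min_duration seconds."""
    below = speed < speed_threshold
    pauses = []
    start = None
    for i, flag in enumerate(below):
        if flag and start is None:
            start = i                      # pause candidate begins
        elif not flag and start is not None:
            if (i - start) / fs >= min_duration:
                pauses.append((start / fs, i / fs))
            start = None
    if start is not None and (len(below) - start) / fs >= min_duration:
        pauses.append((start / fs, len(below) / fs))
    return pauses

# Example: wrist speed from 50 Hz position samples (T x 3 array of metres).
# speed = np.linalg.norm(np.diff(wrist_positions, axis=0), axis=1) * 50.0
```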
In primatology, a study investigating cognitive conflict behaviours in apes defined and measured hesitation behaviours of apes as pointing to two different choices simultaneously or altering of their choices [64]. Such behaviour, however, occurs as part of activities that involve deictic gestures, and is not necessarily transferable to communication in resource conflict contexts. In summary, while hesitation has been measured in terms of involuntary and 9  voluntary time delays in humans, and in terms of motions in some biological studies, it has not been well defined in a multi-agent resource conflict situation. Therefore, a more sophisticated understanding of human hesitation as kinesic gestures is necessary before implementing human-recognizable hesitation gestures on a robot in a HRST context.  2.3  Human-Robot Shared Task  In the psychology, HCI and HRI literature, the words joint activity [15], collaboration [26], teamwork [15], and shared cooperative activity [10] are often used interchangeably. These words refer to activities that involve two or more agents having joint intentions and who work together toward a common goal [15]. This thesis considers joint activities that involve collaborative agents (humans and robots) sharing the same physical environment and resources to complete a task. This thesis uses the term Human-Robot Shared-Task (HRST) to refer to this subset of collaborative activities. Human-human collaboration typically involves people with different intentions and capabilities. Without a means to effectively communicate with each other, the collaborating partners would neither be able to establish a common ground nor interweave subplans to achieve the shared goal [15]. In Bratman’s model of successful collaboration, mutual responsiveness, commitment to the joint activity and commitment to mutual support are necessary. None of these can be established without communication between the collaborating agents [10]. Likewise, in order for human-machine collaboration to be successful, communication mechanisms that allow the collaborating agents to interweave plans and actions and to establish mutual understanding are required [26]. Studies demonstrate that joint intentions of collaboration can be established via nonverbal communication. In an experiment by Reed and colleagues, two people worked as a haptically linked dyad to rotate a disk to a target location collaboratively [55]. They found that, even without verbal communication, people quickly negotiate each other’s role within the team using only haptic cues. This study also demonstrated that, in comparison to completing the task alone, there is a significant increase in performance when people worked together as a team. However,  10  when the study was repeated with human-robot dyads, human subjects did not take on a specific role within the collaborative task nor did the dyad yield an improved task performance [56]. The authors suggest that these negative results may be due to the lack of subtle haptic negotiations in the human-robot dyad condition. These studies not only demonstrate the power of nonverbal communication in humanhuman collaboration, but also point out the importance of designing and exploring communication and negotiation mechanisms to improve HRST systems. Human-robot collaboration studies also suggest that user perception and acceptance of a robotic partner increase when the robot behaves or appears more anthropomorphic. 
In a Wizard of Oz4 experiment involving a collaborative part retrieval task, Hinds and colleagues found that people exhibit more reliance and attribute more credit to their robotic partner when it appears more human-like [29]. Goetz et al. investigated the impact that a humanoid’s social cues have on human acceptance of the robot as a partner [25]. In their Wizard-of-Oz experiment, the robot’s demeanour (playful vs. serious) and the nature of the cooperative task (playful vs. serious) were varied. The results suggest that the subject’s compliance with the robot increases when the robot displays a demeanour that matches the nature of the task. While the  HRI  in [29] and [25] was verbal, a number of studies have demon-  strated the utility of using nonverbal gestures in conjunction with verbal communications in human-robot collaboration tasks [12, 30, 31, 61]. Holroyd and colleagues implemented a set of policies that help select a set of nonverbal gestures that should accompany the robot’s speech in order to effectively communicate with its human partner [30]. They demonstrated the effectiveness of their verbal/nonverbal management system in the collaborative solving of a tangram puzzle. The positive results from the study indicate that more natural management of robot gestures improves user perception of the robot and helps establish a sense of mutual understanding with the robot. Huang and Thomaz also employed nonverbal cues to supplement verbal communication with a robot [31]. They found that such verbal communication, together with supplemental nonverbal gestures, is an effective means to acknowledge establishment of joint attention between human and robot. 4 A popular method of conducting an experiment in HRI where human confederate(s) controls a robot behind the scene and unknownst to the participant [36].  11  This approach improved human understanding of the intended robot behaviour and human-robot task performance [31]. Breazeal and her colleagues conducted an experiment with an expressive 65-DOF robot, Leo, that used shoulder shrug gestures and facial expressions to convey its state of uncertainty to a human collaborator in a joint activity [12]. The results from this study provide strong evidence that combined use of nonverbal gestures and speech to display a robot’s behaviour state can be more effective in improving task performance than using speech alone as the only communication modality. While the findings in [30], [31] and [12] emphasize the power of nonverbal communication in human-robot collaboration, the nature of the collaborative tasks involved implied turn-taking rules between the human and robot that may not be present in many potential human-robot collaboration scenarios. The subject’s role in [12] was to supervise and instruct the robot to perform a manipulation task while the robot waited for the subject’s instruction before performing the task. These roles were reversed in [30]. If fixed rules exist on turn-taking or right of way and both humans and robots follow these rules perfectly, resource conflicts, such as reaching for the same object at the same time, would not occur. However, when such rules are not in place, or if at least one of the collaborating agents is not aware of, or does not comply with, the predefined rules, transparent communication of each agent’s intentions and behaviour states (dominant, submissive, collaborative, pesky) becomes even more essential for navigating the interaction. 
This thesis contributes to the HRI body of work by exploring the effects of nonverbal communication in collaborative scenarios without such predefined/implied hierarchy and turn-taking rules. Hence, the nature of the HRST designed for this thesis is distinguished from these previous studies in that it features a lack of predefined turn-taking rules.

In addition, in the case of humans and robots collaborating in noisy industrial environments, human-robot communication involving only nonverbal gestures can be especially important. However, unlike many of the high-DOF robots used in human-robot collaboration studies, most robots in industrial settings are not equipped to display facial gestures representing the robot's state. Often, it is also impractical for a robot to have a face [9]; moreover, in industry, it is necessary that the worker pay attention to the task at hand, i.e., the workpiece and potentially the robot's hand or gripper, rather than the robot's body or face (if present). Nonetheless, recent literature suggests that humans, when interacting with robots, naturally expect robots to follow social conventions even if they are non-facial and non-anthropomorphic [20]. This emphasizes the need to design natural HRI for appearance-constrained robots [9].

The following section describes some of the studies in HRI that demonstrate how different qualities or parameters of robot motion trajectories elicit different human responses or convey different behaviour states to human observers. These contributions, in addition to the psychology literature described in Section 2.1, suggest that motions of even non-facial, non-anthropomorphic robots can be designed to be communicative and expressive.

2.4 Trajectory Implications in Nonverbal Human-Robot Interaction

Many studies in nonverbal HRI have focused on generating human-like robot expression of internal states using full-bodied or head-torso humanoid robots. Typically, recorded human motions are mapped onto joint trajectories of humanoid robots [45, 46, 52]. While studies suggest that this approach is valid in generating human-like robot motions [11, 52], it is not a feasible approach for low-DOF non-humanoid robots that have a significantly different kinematic configuration from the human body.

However, research in HRI suggests that replicating joint trajectories is not essential to eliciting different human responses to, or perception of, a robot. Flash and Hogan [21] famously proposed a model of human reaching motions as a minimum-jerk trajectory of the hand. Kim and colleagues demonstrated, using a humanoid robot, that people ascribe different personalities to the robot when the trajectory parameters of the robot's gestures, including velocity and frequency, are varied [37]. Complementing the study of robot personality expressed in motion parameters, and also using a humanoid torso robot, Riek and colleagues studied subjects' attitudes towards, and responsiveness to, three different nonverbal robot gestures with varying smoothness and orientations [58]. The results of the study showed quicker human response to abrupt gestures and front-oriented gestures than to smooth or side-oriented gestures. Recent findings by Saerbeck and Bartneck used two different robotic platforms and echo the importance of robot motion quality in eliciting different human responses [60].
Using a facial robot (iCat, Philips Research, Eindhoven, the Netherlands) and a 2-DOF mobile robot (Roomba, iRobot, Massachusetts, USA), this study demonstrated that acceleration and curvature of robot motions have a significant impact on conveying different affect to human observers, whereas the type of robot used does not.

In particular, a study by Kulić and Croft is especially relevant to understanding the relationship between robot motions and observer response. In their study, human subjects watched an articulated 6-DOF robot (CRS A460, Burlington, ON, Canada) perform a series of pick-and-place and reach-retract motions. Keeping the end positions of the motions the same, two different trajectory planning strategies were used to control the robot [40]. Results of this study indicate that a human observer's affective response (as recorded by physiological sensors) to the robot changes significantly based on the type of trajectory used to control the robot's motion, even when the trajectories functionally obtain the same result.

The results of [37, 40, 58, 60] are consistent with findings from the psychology literature that show that perceived affect is significantly correlated with the kinematics of motion rather than with the shape/form of the moving object [53]. However, these studies focused on the expression of affect in various robot motions, rather than intention or state. In contrast, the recent study by Ende and colleagues focused on conveying communicative messages, rather than just affect, via nonverbal communication. Recordings of a humanoid's (Justin) and a 7-DOF manipulator's (SAM) human-like gestures were used in an online survey [19]. Results of this study show high levels of human identification of a robot's use of deictic gestures, such as pointing, and terminating gestures, conveying 'Stop' or 'No', for both types of robots. This result demonstrates that articulated robotic manipulator motions can effectively convey communicative messages as well.

In the context of HRST, this thesis explores a research question not yet answered by the substantial body of work in this domain: given that people will ascribe animacy and recognize affect and behaviour states from non-facial robots, can we leverage this phenomenon to communicate hesitant states of an articulated industrial robot arm?

Chapter 3

Study I: Mimicking Communicative Content Using End-effector Trajectories

(©2011 IEEE. The majority of this chapter has been modified/reproduced, with permission, from Moon, A., Parker, C. A. C., Croft, E. A., & Van der Loos, H. F. M. (2011). Did You See It Hesitate? - Empirically Grounded Design of Hesitation Trajectories for Collaborative Robots. 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 1994-1999). San Francisco, CA.)

This chapter considers whether the communicative content within human hesitation gestures can be represented in the motions of an articulated robot arm. Three human subjects participated in a pilot experiment, in which they exhibited hesitation and non-hesitation motions in response to the presence and absence of human-human (HH) resource conflicts. The experimenter then programmed a robot arm to replicate these motions in a human-robot (HR) resource conflict context. In an online survey, 121 participants provided their observations of the videos of human gestures collected from the human subject trials and the videos of the gestures replicated by a robot arm.

The hypothesis for this study was that humans will recognize hesitation gestures equally well in robots as in humans. Confirmation of this hypothesis will demonstrate that anthromimetic hesitation in robot gestures can be used as a viable communication mechanism in human-robot interactive domains. This study led to significant results that support our hypothesis.
C., Croft, E. A., & Van der Loos, H. F. M. (2011). Did You See It Hesitate? - Empirically Grounded Design of Hesitation Trajectories for Collaborative Robots. 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 1994-1999). San Francisco, CA.  16  significant results that support our hypothesis. The remainder of this section is organized as follows. The details of the experimental methodology are provided in Section 3.1. Section 3.2 presents the results of the surveys, followed by a discussion of their implications to the field of HRI and relationship to the remaining chapters of this thesis in Section 3.3 and Section 3.4.  3.1  Experimental Methodology  This section describes how hesitation gestures are generated in a human-human interactive domain (Section 3.1.1), and how these gestures are reproduced on a 6-DOF robotic arm (Section 3.1.2). Survey respondents watched muted video recordings of both the human and robot motions, and attempted to identify the human and robot motions that they perceived to contain hesitation gestures. Our survey methodology is described in Section 3.1.4.  3.1.1 Human Subject Pilot In this pilot experiment, the experimenter and participant engaged in a simple task in which conflicts over a shared resource between the participant and the experimenter naturally occurred. Figure 3.1 shows the experimental set-up. The experimenter and each participant wore noise canceling headphones, and for each session, sat on opposite sides of a table with a small rectangular target (a sponge) at the table centre. In each session, each time the participant and the experimenter heard a beep through their headphones, they reached for and touched the target and then returned their hands to the resting locations as fast as they could. Each person heard independently randomized sequences of beeps such that, by chance, both people would sometimes reach for the target at approximately the same time. One female and two male right-handed undergraduate engineering students participated in this pilot experiment. Each participant engaged in one session of the experiment. Each human-human (HH) session was video recorded and labeled HH-1, HH-2, and HH-3. The experimenter captured the participant’s arm movements using two inertial sensors (Xsens MTx, Enschede, Netherlands) at 50 Hz. Inertial sensors have been 17  Video Camera Sensor 2 Sensor 1 Headphones  Headphones Target  Resting Locations Participant  Experimenter  Figure 3.1: Study I experiment set-up for the human-human interactive pilot. The participant sits opposite the experimenter and wears two inertial sensors on his/her dominant arm. The participant’s resting location and the location of the target mark the two endpoints of the participant’s reach-and-retract motions. (©2011 IEEE) widely used and exploited to study human upper limb movements [65, 71, 72]. As illustrated in Figure 3.1 and Figure 3.2, the experimenter strapped these sensors on the participant’s dominant arm: one between the shoulder and the elbow, and the other between the elbow and the wrist. Prior to each session, the experimenter initialized the sensors to a reference inertial frame. To calculate the wrist trajectories via forward kinematics, the experimenter measured the lengths of the participant’s upper arm and forearm (lse and lew ). The shoulder marked the location of a global frame, and was approximated as a purely spherical joint with zero displacement. 
Calculation of the 3D Cartesian coordinates of the participant's wrist position with respect to the shoulder involved the gyroscope measurements from the two sensors and the arm lengths of the participant. Converting the gyroscope rate-of-turn measurements to rotation matrices yielded ${}_H R^o_1$ and ${}_H R^o_2$, the orientations of sensor frames $F_1$ and $F_2$ with respect to the global frame, $F_o$. (The prescript 'H' denotes that a variable pertains to the human subject(s) and is described in terms of the human's coordinate frame; similarly, the prescript 'R' denotes that a variable pertains to the robot's coordinate frame. For vectors, the superscript denotes the origin of the vector, whereas the subscript denotes the endpoint of the vector with respect to that origin.)

Figure 3.2: Illustration of a three-joint kinematic model approximating the human arm. The origin of the global frame is located on the right shoulder. The positive $X_o$-axis points towards the front of the person, and the $Y_o$-axis points towards the left shoulder. Variables $l_{se}$ and $l_{ew}$ represent the upper arm and forearm lengths. (©2011 IEEE)

The vector sum of the shoulder-elbow displacement and the elbow-wrist displacement provides the wrist position with respect to the shoulder, ${}_H\vec{p}^{\,o}_w$:

$$ {}_H\vec{p}^{\,o}_e = {}_H R^o_e \,[0 \;\; -l_{se} \;\; 0]^T \tag{3.1} $$
$$ {}_H\vec{p}^{\,e}_w = {}_H R^o_w \,[0 \;\; -l_{ew} \;\; 0]^T \tag{3.2} $$
$$ {}_H\vec{p}^{\,o}_w = {}_H\vec{p}^{\,o}_e + {}_H\vec{p}^{\,e}_w = {}_H x^o_w\,\vec{i} + {}_H y^o_w\,\vec{j} + {}_H z^o_w\,\vec{k} \tag{3.3} $$

3.1.2 Robotic Embodiment of Human Motion

An articulated robot arm (CRS A460, Burlington, ON, Canada) with an open controller (Quanser Q8™/Simulink™) embodied the human gestures in generating human-robot (HR) equivalents of the HH pilot sessions (see Figure 3.3 for a robot configuration diagram).

Figure 3.3: 6-DOF CRS A460 robot arm in the elbow-up configuration. In Study I, this robot replicated the wrist trajectories of the human subject's motion from the human-human interactive experiment. This robot was also used in Study II. Attached at the end of the robot is an unactuated hand with zero degrees of freedom. Variables $d$, $\theta$, and $\phi$ define the polar coordinate system of the robot. Technical specifications of the robot are outlined in Appendix A. (©2011 IEEE)

Robot Trajectory Generation

Since human and robot arms do not embody identical kinematics, the robot reproduced the human wrist trajectory with its wrist in an elbow-up configuration. The maximum reach of the robot used in these experiments (23.5 cm) is smaller than that of the participants (39.0 cm). Hence, the wrist trajectories computed from the participants' inertial sensor data were linearly scaled by 60% ($\beta = 0.6$) to fit the robot's range of motion. Appendix A presents the specifications and wrist motion range calculations used in this study. The following equation yields ${}_R x^o_w(t)$, ${}_R y^o_w(t)$, and ${}_R z^o_w(t)$, the human wrist position at time $t$ modified to fit within the robot's range of motion:

$$ {}_R\vec{p}^{\,o}_w(t) = \beta\left({}_H\vec{p}^{\,o}_w(t) - \min[{}_H\vec{p}^{\,o}_w(t)]\right) + \min[{}_R\vec{p}^{\,o}_w] \tag{3.4} $$

Here, ${}_H\vec{p}^{\,o}_w(t)$ is the calculated human wrist position in the $F_o$-frame at time $t$, and $\min[{}_R\vec{p}^{\,o}_w]$ represents the minimum reach position of the robot from $F_o$ to its wrist (see Figure 3.3 for the frame definition).
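For reference, the wrist-position computation in (3.1) to (3.3) and the workspace scaling in (3.4) can be sketched as follows. This is a minimal Python sketch with illustrative function names (it does not reproduce the scripts in the appendices), and it assumes the two sensor orientations have already been converted to rotation matrices expressed in the global frame $F_o$:

```python
import numpy as np

def wrist_position(R_upper, R_fore, l_se, l_ew):
    """Wrist position w.r.t. the shoulder frame Fo, following (3.1)-(3.3).

    R_upper, R_fore : 3x3 rotation matrices of the upper-arm and forearm
                      sensor frames expressed in Fo (from the IMU orientations).
    l_se, l_ew      : upper-arm and forearm lengths (cm).
    """
    p_oe = R_upper @ np.array([0.0, -l_se, 0.0])   # shoulder -> elbow, eq. (3.1)
    p_ew = R_fore  @ np.array([0.0, -l_ew, 0.0])   # elbow -> wrist,   eq. (3.2)
    return p_oe + p_ew                              # shoulder -> wrist, eq. (3.3)

def scale_to_robot(p_human, p_robot_min, beta=0.6):
    """Map an (N x 3) human wrist trajectory into the robot workspace, as in (3.4).

    The minimum in (3.4) is interpreted here per axis over the recorded session,
    so the scaled trajectory is offset to start at the robot's minimum reach.
    """
    return beta * (p_human - p_human.min(axis=0)) + p_robot_min
```

In this sketch, applying `wrist_position` to each time-stamped pair of sensor orientations yields the discrete Cartesian wrist trajectory that is subsequently scaled and interpolated for the robot.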
A sigmoid function interpolator applied to the resultant discrete 3D Cartesian trajectories provided a smooth high frequency reference trajectory for the robot (1 kHz). Applying a quintic spline smoothing to the position outputs of the forward kinematics, and taking derivatives of the splines yielded the maximum velocity and acceleration of the trajectories. This method has been advanced by Woltring as the most acceptable derivative estimation method for biomimetics applications [69]. The sigmoid interpolator employed these values to generate the reference trajectory. As shown in Figure 3.4, feeding the interpolated 3D Cartesian coordinates to an inverse kinematics routine finally generated the joint-space trajectories. As an intermediate step, the 3D Cartesian coordinates were converted into polar coordinates (equations (3.5) to (3.7)) for more intuitive visualisation of the robot’s position in its work space (see Figure 3.3 for the polar coordinate frame definition): √  θ ϕ  2  xr2 + y2r + z2r xr = √ 2 xr2 + y2r √ 2 xr2 + y2r −1 ) = cos ( d  d =  (3.5) (3.6) (3.7)  The following inverse kinematics calculations ensure that the robot traces the Cartesian trajectory with its wrist, while its elbow remains up and the wrist maintains a horizontal orientation with respect to the ground: Here, variable ase refers to the robot’s link length between joints 2 and 3, and aew refers to the link length between joints 3 and 5. 21  .  q, q Xref 3D 10Hz Cartesian Data  Continuous Sigmoid Interpolator  Xref 1kHz  Inverse Kinematics  qref  PID Control  vref  6-DOF Robot  Figure 3.4: Control diagram showing sigmoid interpolation of human wrist motions, and the generation of the robot’s wrist motion via a conventional PID controller. (©2011 IEEE) A joint-space PID algorithm controlled the robot kinematics. To improve the fidelity of the wrist trajectory motion, given the limitations in the robot’s maximum velocity and acceleration, the commanded trajectories were slowed by five times for video recording. These hardware limits are outlined in Table A.1 in Appendix A. When recording with the robot, an actor demonstrated the corresponding human trajectories at a rate also five times slower than normal to match the robot’s speed. Subsequently, video recordings of the combined human and robot motions in the HR trials were sped up by five times to eliminate speed discrepancies between the HH and HR sessions. An unactuated hand (sponge-filled glove) was affixed to the robot’s wrist. This prop made the context of the task clear to the observers of the HR videos, and ensured the safety of the actor.  3.1.3 Session Video Capture The survey contained three HH and three HR videos – one HH and one HR videos for the three pilot sessions. Both HH and HR videos show only the dominant hand and arm of the participating agents (human or robot) in the workspace. After cropping extraneous recordings at the beginning and end of the sessions (and muting all recorded sounds), the generated videos ran for about 2 minutes each. Due to an interruption that occurred during video recording, HH-1 only contained half of the recorded Session 1. The surveys used the complete recorded videos for both Sessions 2 and 3. Each full length video contained about sixty reach-and-retract motions by each participant. The survey structure grouped an average of four consecutive reach-and-retract 22  motions by a participant as a segment of the video. 
Session 1 was divided into eight segments (A to H), Session 2 into 14 segments (A to N), and Session 3 into 15 segments (A to O). The segment labels appeared in the bottom right-hand corner of the video as shown in Figure 3.5).  Participant  Experimenter  Human-Human(HH) Interaction Video  Experimenter  Robot  Human-Robot(HR) Interaction Video  Figure 3.5: Screen captures of human-human (HH) vs. human-robot (HR) interaction videos. In the HR interaction video, the robot replicated the motions of the participant in the HH interaction video. (©2011 IEEE)  3.1.4 Survey Design Collecting data to test the hypothesis involved launching six different online surveys, one survey per video. All six surveys consisted of a short lead-in paragraph instructing the respondents to watch the video with special attention to the agent (human or robot) in focus, followed by a question (“Did the person on the left hesitate?” for HH, and “Did the robot on the right hesitate?” for HR videos) and finally one of the six videos. In all surveys, the respondents had the option of choosing ‘No’, ‘Probably Not’, ‘Probably Yes’, and ‘Yes’ to all segments of the video shown. Figure 3.6 shows a screen capture from one of the online surveys. Appendix C shows screen captures of the remaining surveys and their respective consent forms. Recruitment of survey respondents involved a variety of social media tools (Twitter, Facebook, the first author’s website and blog) and distribution of adver23  tisements to university students. Survey respondents received no compensation. In total, 121 people participated in the six online surveys. Table 3.1 shows the breakdown of the number of survey respondents.  Figure 3.6: An example screen capture of the online surveys employed in Study I. This particular screen capture is from the online survey of HH1.  24  Table 3.1: Number of online respondents per survey Session 1 nHH−1 21 nHR−1 21  Session 2 nHH−2 20 nHR−2 24  Session 3 nHH−3 17 nHR−3 18  3.1.5 Data Analysis Statistical analysis of the survey results involved conducting a repeated-measures analysis of variance (ANOVA) and independent t-tests on the numerically coded levels of hesitation scores: 0-‘No’, 1-‘Probably Not’, 2-‘Probably Yes’, and 3-‘Yes’. Consequently, a higher mean indicated a greater probability of a video segment containing hesitation gesture(s) that is/are visually apparent to observers. The significance level for all inferential statistics were set to α = 0.05. Obtaining a statistical significance from ANOVA of a survey result indicates that at least one of the video segments is perceived as containing hesitation significantly more or less than the other segments of the same video. Since identifying video segments that are perceived to contain hesitation gestures in both versions (HH and HR) of a session is of importance in testing the hypothesis, the analysis also involved pairwise comparisons with Bonferroni correction between the mean scores of segments within each survey. This allowed for empirical identification of segments of a video that obtain high mean hesitation scores (above 2-‘Probably Yes’) and exhibit significantly different mean scores from low mean segments (below 1-‘Probably No’). Investigation of the quality of robot-embodied motion involved conducting independent t-tests between HH and HR versions of all video segments. A nonsignificant result in the t-test of a segment would indicate that the HH and HR versions of the segment are perceived similarly.  
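A minimal sketch of this analysis is given below, assuming the coded responses are stored in long-format tables with one row per respondent-segment pair; the column names are illustrative and the sphericity corrections and Bonferroni-corrected pairwise comparisons reported in Section 3.2 are not included here:

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

# Numeric coding of the four response options.
coding = {'No': 0, 'Probably Not': 1, 'Probably Yes': 2, 'Yes': 3}

def analyse_session(df_hh, df_hr):
    """df_* columns: 'respondent', 'segment', 'response' (one survey each)."""
    for df in (df_hh, df_hr):
        df['score'] = df['response'].map(coding)

    # Repeated-measures ANOVA: does perceived hesitation differ across segments?
    aov = AnovaRM(df_hh, depvar='score', subject='respondent',
                  within=['segment']).fit()
    print(aov.anova_table)

    # Independent t-test per segment: is the HH version scored differently
    # from its HR counterpart?
    for seg in sorted(df_hh['segment'].unique()):
        t, p = stats.ttest_ind(df_hh.loc[df_hh['segment'] == seg, 'score'],
                               df_hr.loc[df_hr['segment'] == seg, 'score'])
        print(f'Segment {seg}: t = {t:.2f}, p = {p:.3f}')
```

Treating the ordinal scores as interval data for ANOVA and t-tests follows the approach described above; the design choice is that a higher mean score directly indicates stronger perceived hesitation for a segment.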
3.2  Online Survey Results  The results of the Analysis of Variance (ANOVA) on all six surveys show statistical significance (see Table 3.2). Therefore, for each of the surveys, the respondents were able to observe significant presence or absence of hesitation gestures in at  25  least one of the segments. According to the results of Mauchly’s test, the scores on surveys HH-1, HR-2, HH-3, and HR-3 violate the sphericity assumption. Use of either Greenhouse-Geisser or Huynh-Feldt approaches accounts for these violations. The ANOVA results presented in Table 3.2 summarize the corrected results. Based on this analysis, Section 3.2.1 presents the video segments that are identified as containing hesitations. Section 3.2.2 discusses the level of perception consistency in the HH and HR videos. Table 3.2: Repeated-measures one-way ANOVA results for all six surveys Survey HH-1 HR-1 HH-2 HR-2 HH-3 HR-3  F F(5.81, 116.28) = 21.52 F(7, 133) = 13.56 F(13, 208) = 9.20 F(5.61, 123.42) = 5.48 F(12.95, 220.18) = 6.48 F(5.46, 87.44) = 7.43  p <.05 <.05 <.05 <.05 <.05 <.05  3.2.1 Identification of Segments Containing Hesitation Gestures As shown in Figure 3.7, the segments in HH-1 with mean hesitation scores above 2 (‘Probably Yes’) are segments F and G. Segments with mean values below 1 (‘Probably No’) are D and H. Pairwise comparison with Bonferroni correction indicates that the mean scores of segments F and G are significantly different from that of D and H; this demonstrates that segments F and G contain human hesitation gestures that are recognized by human observers. In HR-1, segments F and G are also the only segments with mean scores above 2. The scores of both F and G show significant differences in means from the lowest-mean segments, below a score of 1 (segments D, E, and H). In HH-2, (see Figure 3.8) segments F, J, K, and L show mean values above 2. These values are significantly different from that of segments B, E, G, and M, all of which scored below a mean of 1. In HR-2, however, only segments F and K received mean scores of above 2. They are significantly different from segments scoring below 1, which were B, D, and M. In HH-3, segments I, J, and N show mean scores above 2. However, only I 26  Mean Hesitation Perception Score  3.00  HH HR  2.00  1.00  0.00 A  B  C  D  E  F  G  H  Video Segment Error bars: 95% CI n(HH-1) = 21, n(HR-1) = 21  Figure 3.7: Session 1 hesitation perception scores summary showing the mean scores and 95% CI for all segments. Analyses show that segments F and G contains hesitation gestures in both human and robot motions with statistical significance. (©2011 IEEE) and N show significant differences from the segments having mean scores below 1 (segments A, B, C, and O). As is apparent from Figure 3.9, HR-3 shows relatively low mean scores in general compared to that of HH-3. Only segment N scores above 2, and all other segments show no significant differences from each other. In HR-3, more than half of the segments score below 1. All but segment N score below 1.5. This indicates the possibility that qualitative differences may exist between HH-3 and HR-3 that are not present in the recordings of Sessions 1 and 2. We discuss this point in Section 3.3.  27  Mean Hesitation Perception Score  3.00  HH HR  2.00  1.00  0.00 A  B  C  D  E  F  G  H  I  J  K  L  M  N  Video Segment Error bars: 95% CI n(HH-2) = 20, n(HR-2) = 24  Figure 3.8: Session 2 hesitation perception scores summary showing the mean scores and 95% CI for all segments. 
Analyses show that segments F and K contain hesitation gestures in both human and robot embodied motion, whereas segments J and L contain the gestures in human motion only. (©2011 IEEE)  3.2.2 Perception Consistency between Human Gestures and Robot Gestures Investigating the consistencies in perception between scores of HH and HR in all three sessions involved conducting independent t-tests on each pair (HH and HR) of mean values for all segments. The results show highly consistent levels of hesitation in all segments of Session 1; none of the segments show significant differences in scores between HH-1 and HR-1. This provides strong evidence that the robot embodiment of hesitation gestures in this session is equally able to communicate the subtle state of uncer28  Mean Hesitation Perception Score  3.00  HH HR  2.00  1.00  0.00 A  B  C  D  E  F  G  H  I  J  K  L  M  N  O  Video Segment Error bars: 95% CI n(HH-3) = 17, n(HR-3)=18  Figure 3.9: Session 3 hesitation perception scores summary showing the mean scores and 95% CI for all segments. Analyses show that segment N contain hesitation gestures in both human and robot embodied motion, whereas segment I contain the gestures in human motion only. This particular session shows low level of score consistency between HH and HR compare to Sessions 1 and 2. tainty to human observers as the human produced hesitation gestures. Less consistency in hesitation scores exists between HH-2 and HR-2. Of the four segments that significantly contain hesitation gestures in HH-2, two (segments J and L) show significantly lower mean scores in HR-2. These are the only two segments that show significant differences between HH-2 and HR-2. The mean scores of Session 3 show the least amount of consistency. As Figure 3.9 illustrates, the mean scores of HR-3 are lower than that of HH-3 in general. Results of independent t-tests between the means of HH-3 and HR-3 reflect this  29  observation. One third of recorded Session 3 segments show significant difference from HH to HR, indicating high inconsistencies between the scores of HH-3 and HR-3. However, HH-3 and HR-3 mean hesitation scores of segment N, both of which are above 2, are not significantly different from each other.  3.3  Discussion  The results of the analyses provide strong evidence that hesitation gestures embodied in a robot arm can convey the same nonverbal communicative messages as human gestures. The survey participants’ scoring of video segments for hesitation is robust against the presence of extraneous motions, such as natural jitters in the wrist and collisions of the agents’ hands. Multiple instances of collision are present in video segments of Sessions 1 and 3. The abovementioned analyses show that these segments are not identified as significantly containing hesitation gestures. In comparison, motions recorded for Session 2 have an observable level of natural jitter of the participant’s wrist (HH-2) between reaching motions; as a result, robot embodiment of this extraneous motion was apparently not perceived as a hesitation gesture by the survey respondents. If information such as finger movements, wrist angle, and stiffness of the arm or the hand are important features in one’s recognition of hesitation gestures, one could expect to see significantly lower mean scores for all HR segments relative to HH segments. 
However, this is not the case: the recordings of Session 1 do not show any significant differences in means, and robotic embodiment even score higher in some segments (A, D, and H) than human motions, although not significantly. This is also the case for segments of Session 2, except for two segments that significantly contain hesitation in HH-2 but not in HR-2. However, the survey data show lower mean values in all segments of HR-3 compared to HH-3 with the exception of segment N. Segment N show no significant differences in the two mean values and contains hesitation gestures with significance according to the analysis. The fact that only Session 3 shows such lack of consistencies in the mean scores brings forth the need for further investigation. Future work might allow us to determine qualitative and quantitative differences of  30  motions in Session 3 from those in Sessions 1 and 2, and the key features of motion trajectories that facilitate robust communication of anthromimetic hesitation gestures.  3.3.1 Limitations There are noteworthy discrepancies between robot-embodied motions and the original recorded human motions. The robotic arm has only 6-DOF, compared to a human arm’s 7-DOF, and this study employed only four of the six robot joints to follow human wrist trajectories. This inevitably generated a simplified and less dexterous embodiment of human motion. The robot’s kinematic configuration (elbow-up configuration) is also significantly different from the kinematics of a human arm, resulting in significantly different joint angles to achieve the same wrist trajectories. A few observable differences also exist between the recording of the HH and HR videos. Although the dimensions of the target object are scaled by the same size factor as the reach distances of the robot, the size of the experimenter’s hand could not be scaled. Therefore, the relative sizes of the hands with respect to the target objects are different in HH and HR. The video camera angle was also slightly different, creating observable visual differences in the distances between the two hands, especially when the hands are in the same vertical plane and appear to be touching each other even when they are not in reality. The location of the experimenter’s hand in the video is also different. In the human-human interactions, the experimenter’s hand is located on the right side of the screen, where as her hand appears on the left in human-robot interaction. Since recognition of hesitation gestures should not be affected by the location in which the motions appear, the experiment was recorded without changing the location of the non-mobile robotic platform available. This difference is illustrated in Figure 3.5. Hesitation gestures were robustly recognized in both human and robot motions despite these discrepancies.  31  3.4  Summary  This chapter described the investigation of whether hesitation gestures exhibited by a robot can be recognized by human observers as being similar to the gestures exhibited by human arms. The results of this study demonstrate that anthromimetic hesitation gestures by an articulated robot can be robustly recognized, even when the humans’ wrist trajectories are the only replicated components of the gestures. This is a strong indication that such simplified replications of human wrist trajectories are sufficient to generate robust, visually apparent anthromimetic hesitation gestures. 
A few segments of motions are recognized as hesitant in a human arm but are not successfully recognized in its robotic embodiment. The next stage of the investigation is to ascertain the fundamental characteristics in the highly correlated segments. This step, presented in Chapter 4, allows the generation of dynamic trajectories a robot can use to exhibit the hesitation gestures in a variety of scenarios.  32  Chapter 4  Designing Communicative Robot Hesitations: Acceleration-based Hesitation Profile In Chapter 3, the results of Study I indicated that observers of a 6-DOF manipulator mimicking wrist trajectories of human hesitation gestures perceived the robot to be hesitating. This empirical result suggests that, despite kinematic and dynamic differences, a robotic manipulator can display the communicative features of hesitation gestures. In order to implement human recognizable hesitation gestures on a robot in real-time Human-Robot Shared-Task (HRST) contexts, key communicative features of human hesitation motions must be extracted and converted into a generalizable trajectory design specification. Therefore, in this chapter1 , key features from recorded human motions are extracted and, based on these features, an end-effector hesitation trajectory specification is proposed. Section 4.1 describes the process of extracting characteristic features from human hesitation trajectories, and outlines the key trajectory differences observed be1 ©2011  IEEE. Parts of this chapter has been modified/reproduced, with permission, from Moon, A., Parker, C. A. C., Croft, E. A., & Van der Loos, H. F. M. (2011). Did You See It Hesitate? Empirically Grounded Design of Hesitation Trajectories for Collaborative Robots. 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 1994-1999). San Francisco, CA.  33  tween hesitation gestures and successful reach-retract motions. Section 4.2 presents the proposed trajectory design specification, referred to as the Acceleration-based Hesitation Profile (AHP). Two implementation methods for the AHP are presented in this section. Section 4.3 presents the strengths and limitations of the  AHP ,  and  Section 4.4 provides a summary of this chapter. In the following chapter, Chapter 5, the efficacy of the AHP is tested in an online-based study, Study II.  4.1  Method  The method for extracting characteristic features from human wrist trajectories involves three key steps: a) pre-processing of the trajectory data, b) understanding the differences between hesitations and other motions via qualitative and quantitative observations and, c) based on this understanding, capturing the observed differences as a trajectory specification in a form that facilitates implementation in a robot controller. Figure 4.1 illustrates this process. Section 4.1.1 outlines the pre-processing techniques employed to filter, segment, and simplify the collected trajectories. Section 4.1.2 presents qualitatively observed differences between hesitations and other types of motions and outlines a typology of hesitation developed from the observation. Section 4.1.3 describes quantitative differences between the motion types. It also provides the rationale for characterizing the gesture trajectories in acceleration space.  4.1.1 Pre-processing – Filtering, Segmentation, and Principal Component Analysis In Study I, the inertial sensor data collected from the pilot experiment were converted into 3D Cartesian position time-series data. 
Along with the position data, the sensors also provided time-stamped linear acceleration trajectories of the human wrist motions in Cartesian space. In order to compare hesitation motions to successful reach-retract motions, these data were filtered post-hoc with a 4th order Butterworth filter with a 6Hz cut-off frequency and zero phase delay, using, respectively, the MATLAB™ functions butter and filtfilt. This approach is conventionally used in human arm motion studies. For example, Berman et al. employed the same filtering technique with a cut-off frequency of 5.5Hz [7], Flash and Hogan used 5.2Hz [21], and Bernhardt et al. used 6Hz [8]. The MATLAB™ script for the filter algorithm is provided in Appendix D, Section B.1.1.

Figure 4.1: Illustration of the trajectory characterization process: collect human motion data (Study I, Chapter 3), consisting of 3D position and acceleration recordings of human wrist trajectories and video recordings of human motion; pre-process the data (filter, segment, and simplify with principal component analysis); observe qualitative differences (typology of hesitation and non-hesitation motions) and quantitative differences (position, velocity, acceleration, and jerk); characterize the trajectory features; and test the characterized features (Study II, Chapter 5).

Segmentation

The filtered human-trajectory time-series data was divided into individual motion segments. The start and end of a segment coincided with the start-of-reach and end-of-retract motions respectively. The segmentation algorithm used Xo-axis magnitudes of acceleration, the characteristic Xo-axis acceleration extrema in each motion, and a set of threshold values. As defined in Figure 3.2, the Xo-axis points towards the front of the person. Figure 4.2 illustrates the results of the segmentation algorithm.

Figure 4.2: Segmentation of trajectories using the acceleration-based method described in Section 4.1.1 (position, velocity, and scaled acceleration of Subject 1 in the Xo-, Yo-, and gravity-subtracted Zo-axes). Three motion segments are depicted. The red dashed lines indicate the beginning, middle, and the end of each motion segment identified from the segmentation algorithm.

The algorithm begins by finding the first instance of maximum magnitude of acceleration in the Xo-axis above a threshold value (set at 1 m/s² via iterative testing). It then backtracks in time from this point to find the closest local Xo-axis acceleration minimum occurring prior to the maximum. This minimum coincides with the starting point of a reaching motion and, therefore, marks the start of a motion segment. From the minimum, the algorithm moves forward in time to find two additional minima, with the last minimum indicating the end of motion. Post-processing of the output from this algorithm was required for hesitation trajectories, since they tend to have additional extrema. The pseudo code and MATLAB implementation of this algorithm are provided in Appendix D, Section B.1.2.

Principal Components Simplification

Motion paths from Study I show movement primarily in the sagittal (X-Z) plane (see Figure 4.3), with relatively small medio-lateral (Yo-axis) components.
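A minimal sketch of the filtering and segmentation steps is given below, using the SciPy counterparts of the MATLAB butter and filtfilt calls named above; the segmentation function is an illustrative simplification of the thresholding scheme just described, not the Appendix D implementation:

```python
import numpy as np
from scipy.signal import butter, filtfilt, argrelmin

def lowpass_zero_phase(signal, fs, cutoff=6.0, order=4):
    """4th-order Butterworth filter applied forward and backward (zero phase delay)."""
    b, a = butter(order, cutoff / (fs / 2.0))   # normalized cut-off frequency
    return filtfilt(b, a, signal, axis=0)

def segment_reach_retract(acc_x, fs, launch_threshold=100.0):
    """Rough segmentation of one reach-retract motion from Xo-axis acceleration (cm/s^2).

    Finds the first sample exceeding the launch threshold (1 m/s^2 by default),
    backtracks to the preceding local minimum (start of reach), then takes the
    next two local minima, the last of which marks the end of the motion.
    """
    peak = np.argmax(acc_x > launch_threshold)          # first sample above threshold
    minima = argrelmin(acc_x, order=int(0.1 * fs))[0]   # indices of local minima
    start = minima[minima < peak][-1]                    # closest minimum before the peak
    end = minima[minima > start][1]                      # second minimum after the start
    return start, end
```

As noted above, the filtered motion paths produced by this pre-processing lie mostly in the sagittal plane, with only small medio-lateral components.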
This is true even though no spatial constraints were imposed on the subjects during the experiment. To simplify the characterization process, the recorded 3D Cartesian trajectories were projected onto 2D planes using Principal Component Analysis (PCA) to extract the key orthogonal components that describe each dataset. When applied to individual motion segments, this yields the orientation of the two principal axes of motion with respect to the original axes. Then, the 3D motion trajectory was projected onto the plane constructed with these two principal axes. This projection was done using MATLAB’s princomp command. The plane shown in Figure 4.3 illustrates an example output of the PCA. Descriptive statistics of the sum-of-squared error (SSE) due to projection for each subject’s data are presented in Appendix B, Table B.2.  4.1.2 Qualitative Observations and Typology of Hesitation and Non-Hesitation Motions In order to understand the differences between hesitation and non-hesitation motions, video recordings of all three participants’ motions from Study I were coded for qualitative analysis. Based on this analysis, a typology of human reach-retract motions was developed. Figure 4.4 illustrates the typology. Two types of motions were observed from non-hesitation video segments: successful reach-retract (S-type) motions and collisions. In S-type motions, participants did not encounter any resource conflict and were successful in touching and returning from the target. In collision-type motions, participants reached for the target and had physical contact with the experimenter’s hand while doing so.  37  Principal Component Plot: Subject 1 Reach and Retract Acceleration (SSE: 4.0e+005) Bird Eye View  Side View  2000  1000  1000  500  Z  A (cm/s2)  1500  500  1000  0 500  0  −500  0  1000  −500 1000 500  0 A Y (cm/s2)  0 −1000  A Y (cm/s2)  −1000  −1000  A X (cm/s2)  −500 −1000 A X (cm/s2)  Figure 4.3: A successful reach-retract human motion shown in a side view and a top view with its principal plane. Red data points lie below the plane, and green ones lie above. Two types of hesitation motions were identified from the video segments containing hesitation. In both types of hesitations, the participant’s hand launched towards the target, but halted in midair as the experimenter’s hand moved towards the same target. The motion of the participant’s hand after halting differentiated the two types of hesitations. In one type, the participant’s hand retracted back to the starting position, abandoning motion towards the target. This type of hesitation is herein referred to as a retract-type (R-type) hesitation. In the other type, the participant’s hand, after halting, hovered in place until the experimenter retracted back from the target and then resumed reaching for the target. This type of hesitation is herein referred to as a pause-type (P-type) hesitation. The number of trajectories collected for each motion type are summarized in Figure 4.4. Due to the small sample size of P-type hesitations, it is difficult to find features from the trajectories that are representative of this type of hesitation. Hence, the remainder of the characterization process focuses on R-type hesitations only.  38  Successful Reach-Retract (S-type) Motions (134) Non-Hesitations Collisions (9) Human Reach-Retract Motions Retract Type (R-type) Hesitations (8) Hesitations Pause Type (P-type) Hesitations (4)  Figure 4.4: Graphical overview of typology of hesitation and non-hesitation motions. 
The numbers in parenthesis indicate the number of motion segments collected for the particular motion type.  4.1.3 Quantitative Observations and Characterization Approach The small sample size and the short durations of the gestures provided poor frequency content resolution. Thus, the trajectory features analysis was done in the time domain only. As shown in Figure 4.5, there are large variations in position trajectories of hesitation motions compared to that of S-type motions. The start time of the retraction phase of R-type hesitations ranges between about 35% to 65% of the total reaching motion time. This is true even when comparing within subjects. Due to the lack of consistent features found in position profiles, trajectory characteristics were examined examined in higher order kinematic profiles. Considering that hesitation gestures are often described as ‘jerky’ motions, Rtype and S-type motions were studied in the jerk profiles. The jerk profiles are produced by numerically differentiating acceleration profiles collected from the inertial sensor. Similar to position profiles, in the jerk profiles, R-type motions demonstrate much larger variations than S-type motions. As shown in Figure 4.6, large differences are also observed even among S-type motions of the same subject. Hence, in the jerk space, it is difficult to discern what unique trajectory patterns exist in R-type motions. Trajectory differences between R-type and S-type motions were found to be most prominent in the acceleration profiles, specifically in terms of the differences in relative acceleration extrema magnitudes and their time values. As shown in Figure 4.7, a maximum forward acceleration is observed shortly after the start of motion, at time t1 , during the launch phase of all R-type motions. Following this  39  Subject1 Xo-Axis Wrist Position 50  S-type motion R-type motion  45  Xo-Axis Position (cm)  40  35  30  25  20  15  10 0  25%  50%  75%  100%  Time Normalized  Figure 4.5: A few examples of Butterworth-filtered Xo -axis wrist motions from Subject 1 in Study I. All trajectories are time-normalized to match the slowest (longest) motion segment. launch acceleration, labeled a1 , R-type motions reach a maximum deceleration, a2 , with magnitude slightly larger than that of the launch acceleration. This deceleration occurs at time t2 that coincides with braking/halting of the hand. The ratio of a2 to a1 (C1 ) represents the abruptness of the halting behaviour in a hesitation motion and is referred to as the halting ratio. The abruptness of the motion is also dependent on how long it takes for the hand to reach the braking deceleration, a2 , from the launch acceleration. This can be represented as a ratio of durations between a1 to a2 to t1 (B1 ). A local maximum acceleration, a3 , follows at time, t3 . This maximum occurs near the start of returning motion, and is much smaller than the launch acceleration. The ratio of a3 to a1 (C2 ) is referred to as the yielding ratio. The complementing ratio of the duration between a2 and a3 to t1 (B2 ) represents how quickly or slowly the halting behaviour is led to the return phase of the motion. 
Typically, an additional local maximum, a4 , is also observed after a3 40  Subject 1 Principal Axis Jerk Profile  6000  R-type motion S-type motion  4000  Jerk (cm/s3)  2000 0 -2000 -4000 -6000 -8000 0  0.2  0.4  0.2  0.4  0.2  0.4  6000  0.6  0.8  1.0  1.2  0.6  0.8  1.0  1.2  0.6  0.8  1.0  1.2  Time (s) Subject 2 Principal Axis Jerk Profile  4000  Jerk (cm/s3)  2000 0 -2000 -4000 -6000 -8000 0  6000  Time (s) Subject 3 Principal Axis Jerk Profile  Jerk (cm/s3 )  4000 2000 0 -2000 -4000 -6000 -8000 0  Time (s)  Figure 4.6: Jerk trajectory in Xo -axis. Interestingly, Subject 3’s motions distinctly show two sub-groups of S-type motions. 41  at the end of the returning phase of the motion. This returning acceleration trails off until the end of the motion, t f . The values of these key accelerations extracted from the recorded human motions are presented in Appendix B, Section B.4. In contrast to R-type hesitations, S-type motions have a braking deceleration, a2 , of a magnitude similar to the launch acceleration. The second maximum of S-type motions, a3 , occur at the end of the returning phase of the motion; since the return of the hand in S-type motions happens after the subject has successfully reached the target object, the time value of this maximum, t3 , takes place much later than t3 of R-type motions. S-type motions typically do not have any additional maximum, a4 , after a3 , and trail off to zero until the end of motion. For comparison, Figure 4.2 shows an overlay of position, velocity, and acceleration for three S-type motions. Figure 4.7 shows the location of the key acceleration extrema for several R-type motions and an example S-type motion. The halting (C1 ) and returning (C2 ) ratios, and the ratios of durations between the acceleration extrema (B1 and B2 ) can be represented with respect to the launch acceleration: a2 = C1 a1  (4.1)  a3 = C2 a1  (4.2)  t2 − t1 = B1t1  (4.3)  t3 − t2 = B2t1  (4.4)  An Analysis of Variance (ANOVA) was conducted to ascertain whether the relative magnitudes of and the durations between the acceleration extrema are indeed significantly different between R-type and S-type motions. As outlined in Table 4.1, despite the small sample size, the mean values of the yielding ratio in R-type motions are significantly smaller than that of S-type motions, and B2 for R-type motions are also significantly smaller than for S-type motions (see Table 4.2). No significant interaction effect is found between the ratios and motion types (F(1, 142) = 0.001, p = .98). The same analysis on the halting ratio, however, yields inconsistent results. Non-significant ANOVA results are obtained for subjects 1 and 2, indicating that R-  42  R-Type Motions: Xo-axis Acceleration 3000  R-Type Motion 1  (t1,a1)  R-Type Motion 2 R-Type Motion 3  2000  R-Type Motion 4  (t1, a1)  2  Acceleration (cm/s )  1000  S-Type Motion  (t3, a3)  (t3, a3)  0  (t4, a4)  -1000  (t2, a2) -2000  -3000  (t2, a2) -4000  0  20%  40% 60% Time Normalized  80%  100%  Figure 4.7: Acceleration profiles of example R-type motions and an S-type motion in the primary (Xo ) axis. Variables a1 , a2 , a3 ,t1 ,t2 , and t3 represent key acceleration extrema their time values. R-type motions show common acceleration profiles distinct from S-type motions. (©2011 IEEE) type and S-type motions show similar acceleration profiles from t0 to t1 . Provided that the subjects did not plan to hesitate prior to launching the hand, this result is not surprising. 
However, Subject 3’s R-type motions demonstrated a significantly lower value of halting ratio than that of S-type motions. This inconsistency in the results necessitated further investigation to study a number of key differences between the trajectories of Subject 3’s motions and that of the remaining subjects. As shown in the jerk profiles (see Figure 4.6), Subject 3’s motions can be classified into two distinctly different S-type trajectories – one having much greater positive and negative jerk extrema than the other – both with higher levels of repeatability than subjects 1 and 2. The subject’s acceleration trajectories also demonstrate much larger halting and yielding ratios than the other two subjects. Significant inter-subject discrepancies are found from a 43  repeated-measures  ANOVA .  There is a significant interaction effect between ra-  tios and subjects (F(2, 142) = 11.70, p < .001), with S-type motions of Subject 3 demonstrating a significantly larger mean halting ratio than the remaining two subjects (p < .001 for the pairwise comparisons between subjects 1 and 3, and between subjects 2 and 3). Since empirically identified motion trajectories from more subjects were not collected, there is insufficient information to conclude whether this subject’s R-type motions should be treated as outliers. Nonetheless, given that this subject’s motions demonstrated larger human perception discrepancies in Study I, Subject 3’s hesitation trajectories are excluded from further analysis.  44  Table 4.1: The mean values and ANOVA results of the halting ratio (C1 ) and yielding ratio (C2 ). The values are calculated for each subject, then with the subjects’ data combined. A repeated-measures ANOVA with motion types and subjects as factors demonstrated significant interaction effect between ratios and subjects (F(2, 142) = 11.70, p < .001). No significant interaction exists between the ratios and motion types (F(1, 142) = 0.001, p = .98). Significant pairwise differences with Stype motions are identified via Bonferroni post-hoc analysis, and indicated with the following suffix: t p < .01,∗ p < .05,∗∗ p < .01,∗∗∗ p < .001 Motion  n  S-type 26 R-type 4 Ratio*Motion S-type 52 R-type 2 Ratio*Motion S-type 56 R-type 2 Ratio*Motion S-type 78 R-type 6 Ratio*Motion S-type 134 R-type 8 Ratio*Motion  C1  C2  Subject 1 M: -1.40, SD: 0.31 M:0.78, SD:0.16 M: -1.45, SD: 0.04 M: 0.26, SD: 0.04*** F(1, 28) = 0.98, p = 0.33 F(1, 28) = 58.00, p < 0.001 Subject 2 M: -1.43, SD: 0.22 M: 1.09, SD: 0.24 M: -1.37, SD: 0.01 M: 0.17, SD: 0.33*** F(2, 54) = 0.08, p = 0.92 F(2, 54) = 5.73, p < 0.05 Subject 3 M: -1.80, SD: 0.17 M: 0.71, SD: 0.14 M: -1.26, SD: 0.21** M: 0.35, SD: 0.08** F(2, 56) = 38.58, p < 0.001 F(2, 56) = 14.23, p < 0.001 Subject 1 and Subject 2 M: -1.42, SD: 0.25 M: 0.99, SD: 0.26 M: -1.40, SD: 0.12 M: 0.24, SD: 0.07*** F(2, 84) = 0.84, p = 0.44 F(2, 84) = 21.66, p < 0.001 All Three Subjects M: -1.58, SD: 0.29 M: 0.87, SD: 0.26 M: -1.35, SD: 0.15*** M: 0.28, SD: 0.08*** F(2, 143) = 5.33, p < 0.01 F(2, 143) = 24.33, p < 0.001  45  Table 4.2: ANOVA results on B1 and B2 ratios. Significant pairs are identified via post-hoc analysis. 
Measures showing significant ANOVA results are indicated with the following suffix: t p < .01,∗ p < .05,∗∗ p < .01,∗∗∗ p < .001 Motion  n  S-type 26 R-type 4 Ratio*Motion S-type 52 R-type 2 Ratio*Motion S-type 56 R-type 2 Ratio*Motion S-type 78 R-type 6 Ratio*Motion S-type 134 R-type 8 Ratio*Motion  4.2  B1  B2 Subject 1 M: 1.40, SD: 0.28 M: 2.04, SD:0.55 M: 0.99, SD: 0.25*** M: 1.05, SD: 0.22*** F(1, 28) = 13.18, p < 0.01 F(1, 28) = 14.49, p < 0.001 Subject 2 M: 0.85, SD: 0.30 M: 1.64, SD: 0.41 M: 0.58, SD: 0.74 M: 1.42, SD: 0.38 F(2, 54) = 2.28, p = 0.11 F(2, 54) = 3.31, p < 0.05 Subject 3 M: 1.08, SD: 0.22 M: 1.44, SD: 0.24 M: 1.62, SD: 0.35*** M: 1.50, SD: 0.53 F(2, 56) = 21.14, p < 0.001 F(2, 56) = 1.99, p = 0.15 Subject 1 and Subject 2 M: 1.03, SD: 0.39 M: 1.78, SD: 0.49 M: 0.89, SD: 0.29 M: 1.14, SD: 0.26 F(2, 84) = 1.52, p = 0.22 F(2, 84) = 4.35, p < 0.05 All Three Subjects M: 1.05, SD: 0.33 M: 1.64, SD: 0.44 M: 0.99, SD: 0.33 M: 1.14, SD: 0.22*** F(2, 5.74) = 0.14, p = 0.87 F(2, 5.66) = 4.303, p = 0.073  Acceleration-based Hesitation Gestures  The ANOVA results support the possibility that the proportions of the extrema and their relative location in time may be key elements for designing hesitation trajectories for robots. Hence, the mean value of the halting ratio (C1 ), yielding ratio  46  (C2 ), B1 , and B2 are extracted from the R-type hesitation acceleration trajectories: C1 = −1.40  (4.5)  C2 = 0.24  (4.6)  B1 = 0.89  (4.7)  B2 = 1.14  (4.8)  By specifying the values of a1 and t1 and smoothly connecting the acceleration extrema that satisfy (4.5) to (4.8), an acceleration profile similar to human R-type hesitations can be generated. The profile produced from this method is herein referred to as Acceleration-based Hesitation Profile (AHP). As a response mechanism, an AHP can be triggered after the robot has already started its motion toward a target position. Section 4.2.1 describes a method for generating an  AHP -based  position trajectory. Section 4.2.2 outlines how the method from Section 4.2.1 can be integrated into a robotic system as a real-time conflict response mechanism. Using the methods outlined in this section, an  AHP  can supplement existing  pick-and-place and reach-retract motions typical of robot motions. In Chapter 5, Study II uses the method described in Section 4.2.1 to pre-generate  AHP -based  motions for a robot. In Chapter 6, AHP is implemented on a real-time HRST system in Study III.  4.2.1  AHP -based  Trajectory Generation  To generate an acceleration profile consistent with  AHP ,  the method described in  this section fits four cubic splines through the five key points of the acceleration profile. The first spline, x¨1 (t), fits the start of the motion (zero acceleration) to a1 , the second, x¨2 (t), fits a1 to a2 , the third, x¨3 (t), fits a2 to a3 , and the fourth, x¨4 (t), connects a3 to zero acceleration at the end of the motion while ensuring proper return of the end-effector to the starting location. Using this approach, the initial and final values of acceleration and jerk can be specified for each spline. Since the splines start and end at the critical points of AHP ,  initial and final values of jerk for all four splines are zero. 
Cubic Hermite  splines in the acceleration domain with zero tangents (jerk) can be generated as  47  follows: x( ¨ τ ) = (2τ 3 − 3τ 2 + 1)ai + (−2τ 3 + 3τ 2 )a f = 2τ 3 (ai − a f ) − 3τ 2 (ai − a f ) + ai  (4.9)  Here, ai and a f represent the initial and final accelerations of the spline respectively, and the spline parameter, τ , represents time, normalized over the total desired travel time, t f . Substituting the halting and yielding ratios of  AHP  into (4.9)  yields the first three splines expressed in terms of a1 : x¨1 (τ1 ) = −2τ13 a1 + 3τ12 a1 + 0 x¨2 (τ2 ) =  2τ23 a1 (1 +C1 ) − 3τ22 a1 (1 +C1 ) + a1  x¨3 (τ3 ) = 2τ33 a1 (−C1 −C2 ) − 3τ32 a1 (−C1 −C2 ) −C1 a1  (4.10) (4.11) (4.12)  Using a1 and the relationship between the durations between acceleration extrema outlined in (4.3) and (4.4), one can determine the start and end times for each spline and generate an  AHP -based  trajectory that travels the desired distance. The  acceleration splines in terms of non-normalised time values can be expressed as follows: t3 t2 x¨1 (t) = −2 3 a1 + 3 2 a1 + 0 t1 t1  (4.13)  t3 t2 (4.14) a (1 +C ) − 3 a1 (1 +C1 ) + a1 1 1 (t2 − t1 )3 (t2 − t1 )2 t3 t2 x¨3 (t) = 2 a (−C −C ) − 3 a1 (−C1 −C2 ) −C1 a1(4.15) 1 1 2 (t3 − t2 )3 (t3 − t2 )2 x¨2 (t) = 2  This series of smoothly connected cubic splines can be integrated twice to produce a set of quintic splines in position space. Integrating (4.13), (4.14) and (4.15) once, and assuming zero velocity at the onset of the motion, provides quartic velocity splines. Integrating them once more yields position splines of the AHP-based  48  motion: x1 (t) = −  a1t 5 a1t 4 + 2 +0 4t1 10t13  (4.16)  a1t 5 a1 t 4 (1 +C ) − (1 +C1 ) 1 10(t2 − t1 )3 4(t2 − t1 )2 a1t 2 + + x˙1 f t + x1 f 2 a1t 5 a1t 4 x3 (t) = (−C −C ) − (−C1 −C2 ) 1 2 10(t3 − t2 )3 4(t3 − t2 )2 a1C1t 2 + + x˙2 f t + x2 f 2 x2 (t) =  (4.17)  (4.18)  Here, x˙1 f , x˙2 f , x1 f , and x2 f represent final values of x˙1 (t), x˙2 (t), x1 (t), and x2 (t), respectively. The last spline, x4 (t) is generated after the first three splines have been calculated. This is to ensure that the x¨3 f , x˙3 f , and x3 f are used as initial conditions, and x¨4 f = x˙4 f = 0, x4 f = x0 as final conditions of x4 (t) for a smooth returning motion to x0 . To meet all six boundary conditions, a quintic Hermite spline is generated in position space as follows: x4 = (1 − 10τ43 + 15τ44 − 6τ45 )x3 f + (τ4 − 6τ43 + 8τ44 − 3τ45 )x˙3 f 1 3 3 1 + ( τ42 − τ43 + τ44 − τ45 )x¨3 f + (10τ43 − 15τ44 + 6τ45 )x0 (4.19) 2 2 2 2 Consistent with the previous nomenclature, the spline parameter, τ4 , represents time normalized by the total duration of x4 (t). Equations (4.16) to (4.19) represent an  AHP -based  trajectory that is continuous in position, velocity, acceleration,  and jerk. A MATLAB implementation of this AHP-based trajectory generation approach is outlined in Appendix D, Section D.1.  4.2.2 Real-time Implementation In this section, the experimental task from Study I is used as an example scenario to demonstrate how on a real-time  HRST  AHP -based  HRST  trajectory designs can be implemented  system. In Study I, the robot’s task was to perform a series 49  of reach-retract motions while ‘interacting’ with the experimenter. By generating two quintic Hermite splines (one for reach and another for retract), the task of producing human-like reach-retract motions can be automated to replace the pre-generated time-series position data used in Study I. 
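As a concrete reference for the real-time discussion that follows, the offline AHP construction of Section 4.2.1 can be sketched numerically. This is a minimal sketch with illustrative names: it reproduces the acceleration shape of (4.10) to (4.12) from the mean ratios (4.5) to (4.8) and integrates it numerically, whereas the text above derives closed-form quintic position splines (4.16) to (4.19); the final return-to-start segment (4.19) is omitted here.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Mean R-type ratios extracted in Section 4.2, equations (4.5)-(4.8).
C1, C2, B1, B2 = -1.40, 0.24, 0.89, 1.14

def hermite_blend(a_i, a_f, n):
    """Cubic Hermite segment (4.9) with zero end jerk: a_i -> a_f over n samples."""
    tau = np.linspace(0.0, 1.0, n)
    return a_i + (a_f - a_i) * (3 * tau**2 - 2 * tau**3)

def ahp_profile(a1, t1, fs=1000.0):
    """Acceleration, velocity, and position of the launch-halt-yield portion of an
    AHP-based motion, sampled at fs (the return segment (4.19) is not included)."""
    n1, n2, n3 = int(t1 * fs), int(B1 * t1 * fs), int(B2 * t1 * fs)
    acc = np.concatenate([
        hermite_blend(0.0, a1, n1),          # launch:  0  -> a1
        hermite_blend(a1, C1 * a1, n2),      # halt:    a1 -> a2 = C1*a1
        hermite_blend(C1 * a1, C2 * a1, n3), # yield:   a2 -> a3 = C2*a1
    ])
    t = np.arange(acc.size) / fs
    vel = cumulative_trapezoid(acc, t, initial=0.0)
    pos = cumulative_trapezoid(vel, t, initial=0.0)
    return t, acc, vel, pos
```

Given a launch acceleration and its time value, this sketch yields a reference profile with the same 1 kHz sampling used for the robot controller; the closed-form splines in (4.16) to (4.19) serve the same purpose without numerical integration error.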
A general quintic Hermite equation can be described as follows: x(t) = H0 xi + H1 x˙i + H2 x¨i + H3 x¨f + H4 x˙f + H5 x f H0 = 1 − 10τ 3 + 15τ 4 − 6τ 5  (4.21)  H1 = τ − 6τ 3 + 8τ 4 − 3τ 5  (4.22)  H2 = 0.5τ − 1.5τ + 1.5τ − 0.5τ 2  3  4  H3 = 0.5τ − τ + 0.5τ 3  4  (4.20)  5  5  (4.23) (4.24)  H4 = −4τ 3 + 7τ 4 − 3τ 5  (4.25)  H5 = 10τ 3 − 15τ 4 + 6τ 5  (4.26)  Here, subscripts i and robot’s home position  f denote the initial o o (R xhome ) at rest (R x˙home  and final positions. Going from the o = R x¨home = 0) to the target position  o ), the following quintic spline yields trajectories of human-like reaching mo(R xtarg  tions: o R xreach (τ )  o o o o = H0 R xhome + H3 R x¨targ + H4 R x˙targ + H5 R xtarg  o R xreach (τ )  o o = (1 − 10τ 3 + 15τ 4 − 6τ 5 ) R xhome + (+0.5τ 3 − τ 4 + 0.5τ 5 ) R x¨targ  (4.27)  o o +(−4τ 3 + 7τ 4 − 3τ 5 ) R x˙targ + (10τ 3 − 15τ 4 + 6τ 5 ) R xtarg (4.28) o o The trajectory for retracting from R xtarg to R xhome can be expressed as follows: o R xretract (τ )  o o o o = H0 R xtarg + H1 R x˙targ + H2 R x¨targ + H5 R xhome  o R xretract (τ )  o o = (1 − 10τ 3 + 15τ 4 − 6τ 5 ) R xtarg + (τ − 6τ 3 + 8τ 4 − 3τ 5 ) R x˙targ  (4.29)  o +(0.5τ 2 − 1.5τ 3 + 1.5τ 4 − 0.5τ 5 ) R x¨home o +(10τ 3 − 15τ 4 + 6τ 5 ) R xhome  (4.30)  By employing the method outlined in Section 4.2.1, one can supplement this reach-retract trajectory generation system with an 50  AHP -based  conflict response  mechanism. When a robot starts to move using a quintic-based reaching trajectory, its launch acceleration, a1 , occurs near the beginning of the robot’s motion. Since the target location and the desired speed of a reaching motion is known before the robot starts o (t1 ) can be calculated a priori. First, the is motion, the value of a1 , t1 , and R xreach  time values of acceleration extrema can be found by calculating the third derivative ... of (4.28), R x oreach (τ ): ...o R x reach (τ )  o o = (−36 + 192τ − 180τ 2 ) R x˙reach (0) + (−9 + 36τ − 30τ 2 ) R x¨reach (0) o o +(3 − 24τ + 30τ 2 ) R x¨reach (τ f ) + (−24 + 168τ − 180τ 2 ) R x˙reach (τ f )  +(60 − 360τ + 360τ 2 )  (4.31)  Subsequently, Equation 4.31 can be re-organized as a second order polynomial: ...o R x reach (τ )  = Aτ 2 + Bτ +C  (4.32)  o o o A = −360 R xreach (0) − 180 R x˙reach (0) − 30 R x¨reach (0) o o o +30 R x¨reach (t f ) − 180 R x˙reach (t f ) + 360 R xreach (t f ) (4.33) o o o B = 360 R xreach (0) + 192 R x˙reach (0) + 36 R x¨reach (0) o o o −24 R x¨reach (t f ) + 168 R x˙reach (t f ) − 360 R xreach (t f ) (4.34) o o o C = −60 R xreach (0) − 36 R x˙reach (0) − 9 R x¨reach (0) o o o +3 R x¨reach (t f ) − 24 R x˙reach (t f ) + 60 R xreach (t f )  (4.35)  ...o Substituting R x reach (τ ) = 0 into (4.32) and applying the quadratic formula (τ = √ −B± B2 −4AC ) 2A  yields the normalized time values of the acceleration extrema. The  minimum positive solution is τ1 = t1 /t f . Then, the value of τ1 can be substituted into the original quintic reaching trao jectory, (4.28), to determine the position of the robot at t1 , R xreach (t1 ) = x1 f . Subo stituting this value into the second derivative of R xreach (τ ) (4.36) yields the value  of a1 . o R x¨reach (τ )  o o = (3τ − 12τ 2 + 10τ 3 ) R x¨reach (τ f ) + (−24τ + 84τ 2 − 60τ 3 ) R x˙reach (τ f ) o +(60τ − 180τ 2 + 120τ 3 ) R xreach (τ f )  51  (4.36)  Using this approach, one can determine parameters a1 and t1 for a hesitation trajectory using the same initial and final conditions used to generate the quintico based reaching trajectory. 
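The calculation of a1 and t1 just described, differentiating the quintic reach (4.28) three times, solving the resulting quadratic (4.32) for its smallest positive root, and evaluating the second derivative there, can be sketched as follows. This is a minimal sketch with illustrative names; it assumes the robot starts at rest and keeps the boundary velocity and acceleration in physical units, scaling them into the normalized-time Hermite basis.

```python
import numpy as np
from numpy.polynomial import Polynomial

def launch_parameters(x0, xf, vf, af, t_f):
    """Launch acceleration a1 and its time t1 for the quintic reach (4.28).

    x0         : start position (robot at rest: zero velocity and acceleration)
    xf, vf, af : target position, velocity, and acceleration
    t_f        : total duration of the reaching motion
    """
    # Quintic Hermite basis in normalized time tau = t / t_f, as in (4.20)-(4.26);
    # boundary derivatives are scaled by t_f and t_f**2 into the tau domain.
    H0 = Polynomial([1, 0, 0, -10, 15, -6])
    H3 = Polynomial([0, 0, 0, 0.5, -1, 0.5])
    H4 = Polynomial([0, 0, 0, -4, 7, -3])
    H5 = Polynomial([0, 0, 0, 10, -15, 6])
    x = H0 * x0 + H3 * (af * t_f**2) + H4 * (vf * t_f) + H5 * xf

    jerk = x.deriv(3)                       # quadratic; zeros are acceleration extrema
    roots = [r.real for r in jerk.roots()
             if abs(r.imag) < 1e-9 and 0 < r.real < 1]
    tau1 = min(roots)                       # minimum positive solution, tau1 = t1 / t_f
    a1 = x.deriv(2)(tau1) / t_f**2          # back to physical acceleration units
    return a1, tau1 * t_f
```

With a1 and t1 in hand, the coefficients of the remaining AHP splines follow as described next.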
Once both a1 and R xtarg are known, the coefficients for  splines x2 (t), x3 (t), and x4 (t) for the hesitation trajectory can be calculated using (4.17) to (4.19). If a resource conflict is detected before t1 , then the real-time trajectory controller for the robot can be directed to follow the splines x2 (t), x3 (t), and x4 (t) by o (t) to x2 (t) at t1 . This allows the robot switching its reference trajectory from R xreach  to make a smooth transition from its quintic reaching trajectory to an  AHP -based  trajectory without requiring a complex high speed trajectory controller to transition the motions, such as the one described in [39]. The final deliverable of the  AHP  method is an open-source package written in  C++ and Python, and currently available online for ROS-based systems. The code and other implementation details are outlined in Appendix D, Section D.2.  4.3  Discussion  This chapter presented a robot end-effector trajectory design specification,  AHP ,  which derives robot trajectories from human hesitation motions. Using this approach, only the kinematic output of human hesitation behaviours are considered in designing robot hesitation trajectories. The  AHP  describes hesitation motions as a proportional relationship between  an end-effector’s launch acceleration to the abruptness of its halting and yielding behaviour. Hence, this model of hesitation implicitly specifies magnitudes of jerk during the halting and yielding phases of the motion. The process of extracting the  AHP  from the collection of trajectories was lim-  ited by the number of sample trajectories available. Since only two subjects’ R-type motion trajectories are used to generate the key ratios, the AHP is only representative of a small subset of hesitation gestures. However, the main aim of this characterization process is to extract trajectory features that can be implemented on a robot to generate human-recognizable hesitation motions. Hence, even though the AHP  does not capture trajectory features common to all hesitation gestures, it is  sufficient as a trajectory specification for generating one type of hesitation gesture  52  and can be implemented for any future collections of similar hesitation motions.  4.3.1 Limitations A key limitation of AHP is in the real-time implementation of the designed trajectory. Using the method introduced in Section 4.2.2, the decision to hesitate has to be made before t1 , such that, at t1 , the reference trajectory for the robot can switch o (t1 ) to the start of the second spline of the from R xreach  AHP , x2 (0).  However, it is  realistic to expect a collision to become imminent when t1 has passed. In such a o case, the robot would continue to follow R xreach (t1 ) and undesirably cause a col-  lision. To address this safety issue, the real-time  HRST  Study III (Chapter 6) uses a real-time implementation of  experiment presented in AHP  in conjunction with  an abrupt collision avoidance mechanism. It is possible, however, to extend the allowable period of hesitation decisiono making from t1 to (t2 − δ ). Since the R x¨reach (t) from t1 to t2 share the same ac-  celeration a1 at t1 upon which they both start to decelerate, it is possible to use an o interpolation function to make a smooth transition from R x¨reach (t) to x2 (t) at some  δ seconds before t2 is reached. However, the acceptable lower bound for the value of δ < (t2 − t1 ) is unknown. Hence, this technique requires further investigation and testing. 
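Taken together, the switching rule and the proposed extension of the decision window can be summarized in a small dispatch sketch; the names are illustrative rather than taken from the ROS package in Appendix D, and the extended window has not been validated.

```python
def select_response(t_conflict, t1, t2, delta, ahp_splines, blend_traj, avoidance_traj):
    """Choose the robot's response when a resource conflict is detected at t_conflict.

    t1, t2         : times of the launch and braking acceleration extrema
    delta          : safety margin before t2; its acceptable lower bound is unknown
    ahp_splines    : hesitation splines x2-x4, joined smoothly at t1
    blend_traj     : interpolated transition onto x2(t) for decisions made after t1
    avoidance_traj : abrupt collision-avoidance response (fallback, as in Study III)
    """
    if t_conflict < t1:
        return ('hesitate', ahp_splines)     # switch the reference to x2(t) at t1
    if t_conflict < t2 - delta:
        return ('hesitate', blend_traj)      # extended window, pending validation
    return ('avoid', avoidance_traj)         # too late for a smooth hesitation
```

The middle branch is only as safe as the choice of delta, whose acceptable lower bound has not been established.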
A novel online trajectory generation algorithm recently proposed by Kr¨oger may also help address this problem [39]. As mentioned in previous sections, the P-type hesitation gestures have not been characterized in AHP. This limits the way in which a robot can hesitatingly respond to its observer, at least based on the available dataset. While R-type motions demonstrate an immediate yielding of the resource in conflict, P-type motions could be used to communicate a robot’s persistent ‘intent’ to access the resource as soon as it becomes available. Collection and analysis of a larger set of human P-type hesitation gestures is needed to expand the hesitation trajectory design specification.  4.4  Summary  This chapter described the process of extracting key features from human hesitation trajectories collected in Study I, and presented these features as a trajectory design 53  specification. Qualitative observations of human motions yielded a typology of human motions, in which two different types of hesitations were identified. Of the two, there were more recorded trajectories of retract type (R-type) motions available for this investigation than there were the pause type (P-type). Hence, R-type motions were used to develop the hesitation trajectory specification. The main differences between R-type motions and successful reach-retract (Stype) motions were observed in terms of the relative magnitudes of and durations between acceleration extrema with respect to the launch acceleration. R-type motions typically demonstrate a slightly smaller halting ratio and a significantly smaller yielding ratio than those of S-type motions. The  AHP  captures these ratio  differences as a hesitation trajectory design specification. This chapter described how  AHP -based  motions can be generated offline as well as during a real-time  HRST .  Although there was some empirical evidence that the halting and yielding ratios of R-type hesitations are different from those of S-type motions, the actual efficacy of AHP-based trajectories in providing a communicative function was not tested in Study I. In particular, experimental work is needed to determine whether human observers of  AHP -based  end-effector trajectories would perceive the robot to be  hesitating, and what range of launch acceleration values would yield human-like hesitation motions. Study II presented in the next chapter addresses this need by implementing  AHP -based  motions for an online Human-Robot Interaction (HRI)  survey.  54  Chapter 5  Study II: Evaluating Extracted Communicative Content from Hesitations The previous chapter described the characteristic features of human hesitation gesture trajectories. These features were modeled and presented as Acceleration-based Hesitation Profile (AHP). However, given the small number of samples used in generating the  AHP ,  it is necessary to test whether untrained observers working with  the robot will perceive  AHP -based  robot trajectories as hesitations. In particular,  although it is unlikely that the full spectrum of the parameter values used to produce AHP-based trajectories will be perceived as being hesitant, it is unknown what range of launch accelerations and their associated temporal parameters can be used to generate human-recognizable robot hesitation motions. Hence, to test the efficacy of  AHP ,  this chapter presents a study that empiri-  cally compares human perception of AHP-based motions with three other types of robot motions. 
These motions are: robotic collision avoidance motions, successful (complete) reach and retract motions, and collisions. For convenience, herein robotic collision avoidance motions are referred to as robotic avoidance motions and successful reach and retract motions are referred to as successful motions. The study presented in this chapter, Study II, consists of an online experimental survey using video recordings of the experimenter and a robot engaged in a series 55  of reach-retract tasks toward a shared target object. The survey questions are designed to measure the perceived anthromimicry and hesitation of the robot motions seen in the video. In order to test effectiveness of AHP within a range of parameter values, this study focuses on testing motions generated using three different levels of end-effector (hand) launch acceleration, R x¨1o . These values were chosen based on recorded human motions as discussed in Chapter 3. The results of the online survey are analysed to test the following three hypotheses: H2.1. Robot end-effector motions generated using AHP convey hesitation to untrained observers, while typical robotic avoidance behaviours do not. H2.2. Robot end-effector motions generated using AHP are perceived to be more humanlike than typical robotic avoidance behaviours. H2.3. Robot trajectories generated via AHP are robust to changes in the initial acceleration parameters with regard to their communicative properties to untrained observers. The remainder of this chapter is organized as follows. Section 5.1 outlines details of the human-robot interaction task used in this study, video recording of the interaction, and details of the online survey. Section 5.2 presents the results of the survey. Sections 5.3 and 5.4 discuss and summarize these results.  5.1  Experimental Methodology  This study is comprised of a four-by-three within-subjects experiment that employed four types of robot motions and three levels of launch accelerations. The experiment employed the same 6-DOF robot introduced in Chapter 3. The experimenter created the Cartesian trajectories for 12 robot motions using the method described in Section 5.1.1, below. The robot followed these reference trajectories to generate the motions. At the same time, the experimenter performed a coordinated reaching motion to provide context for the robot’s motions. The experimenter’s motions were also based on the recorded motions described in Chapter 3. The robot and experimenter motions were video recorded following the method outlined in  56  Section 5.1.2. Using the online survey instrument described in Section 5.1.3, respondents watched and provided their perception feedback on the video recorded robot motions. Collected data were analysed according to the statistical methods described in Section 5.1.4.  5.1.1 Trajectory Generation To simplify the trajectory generation process, all 12 motions were restricted to two-dimensional (Xo Zo -plane) trajectories. The frame definition is consistent with Study I (see Figure 3.3). The reference trajectories in each axis were independently generated and, hence, are discussed separately in this section. The motion was displayed to viewers in a two-dimensional video format (parallel to the Xo Zo -plane). The loss of the third dimension was not expected to be noticeable since only a relatively small amount of medio-lateral motion (in Yo ) was were observed in the data simplification process described in Section 4.1.1. 
As outlined in Chapter 4, only two of the four parameters (${}^R\ddot{x}^o_1$, ${}^Rx^o_{targ}$, $t_1$, and $t_f$) are needed to specify an AHP. For practical reasons, the location of the target, ${}^Rx^o_{targ}$, was set at the maximum reach distance of the robot. Since it is hypothesized that the trajectory profile, and not the overall time to motion completion ($t_f$, and indirectly $t_1$), is the key factor containing communicative content, the launch acceleration parameter, ${}^R\ddot{x}^o_1$, was chosen as the key control variable in this study.

In order to produce high fidelity motion on the robotic platform, within the kinematic limitations outlined in Appendix A, the robot followed the reference trajectories at a rate five times slower than the desired speed of motion. Similar to the approach described in Section 3.1.2, video recordings of the motions were then sped up five times. The slow reference trajectories were generated as a set of quintic splines and sampled at 10 Hz. This sampling rate results in a high fidelity frame rate of the motions (50 Hz) upon speeding up the recorded videos. This is above the standard rate of displaying visual information (24 to 30 fps). The frame rate of the final videos was downgraded to 30 fps.

This study used the same control scheme employed in Study I to control the robot. See Section 3.1.2 and Figure 3.4 for more detail.

Principal Xo-axis Trajectories

The same quintic reference trajectories were used to create both successful and collision motions. These trajectories consisted of two Hermite quintic splines that yield human-like minimum-jerk motion [21]: one for the reach phase and another for the retract phase of the full motion. Details of these trajectories are presented in Section 4.2.2.

To generate robotic avoidance behaviours, the peak positions of the two quintic trajectories used for successful motions were manually modified such that the robot stops at the same distance away from the target as it would for an analogous AHP-based trajectory, and then retracts.

The AHP-based motions were generated via the four-quintic-spline generation approach described in Section 4.2.1. The location of the target, ${}^Rx^o_{targ}$, together with the approximated minimum, median, and maximum hand accelerations, ${}^H\ddot{x}^o_1$, obtained from Chapter 3, provided the parameters for the three AHP-based trajectories. These values are ${}^H\ddot{x}^o_1$ = {9.5, 16.5, and 23.5} m/s², respectively. Figure 5.1 shows the generated reference trajectories of the four types of robot motions.

Figure 5.1: Reference trajectories generated for Study II. The same trajectories were used to generate both the successful and collision conditions.
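As an aside, the five-times slow-down and subsequent video speed-up can be captured in a small scheduling sketch. This is illustrative only (the function and variable names are invented here); it stretches a normalized-time reference trajectory so the robot can execute it at one fifth of the desired speed, samples setpoints at 10 Hz, and shows why the later 5x video speed-up yields an effective 50 Hz frame rate.

```python
import numpy as np

SLOWDOWN = 5          # the robot executes the motion five times slower than desired
COMMAND_RATE_HZ = 10  # sampling rate of the slowed reference trajectory

def sample_slowed_reference(traj, tf_desired):
    """Sample a normalized-time trajectory traj(tau), tau in [0, 1], over a
    duration stretched by SLOWDOWN. The sampled path is unchanged; only the
    clock is slower, so speeding the recording back up restores the timing."""
    tf_slow = SLOWDOWN * tf_desired
    t = np.arange(0.0, tf_slow + 1e-9, 1.0 / COMMAND_RATE_HZ)
    return t, np.array([traj(u) for u in t / tf_slow])

# Effective frame rate once the recorded video is sped up again:
effective_fps = COMMAND_RATE_HZ * SLOWDOWN  # 50 fps, later downgraded to 30 fps
```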
For successful motions and collisions, the time at which the robot wrist reaches its maximum height was set as 1.75 t1 , approximately matching the time at which AHP-based motions reach their maximum height. This was mirrored in retracting phase of the successful motions and collisions. Analogous to the Xo -axis trajectory generation, the Zo -axis trajectories for the robotic avoidance motions were generated by manually modifying a successful motion trajectory to hold the robot’s wrist at the same location before retracting. The stopping positions of collisions and the successful motions were the same o . Stopping posiand were set to be in physical contact with the target object, R xtarg  tions of the robotic avoidance and AHP-based motions were kept at the same height, approximately 2 to 4 cm above the upper surface of the experimenter’s hand.  5.1.2 Video Capture In all 12 videos, an experimenter stood facing the robot with an object located on a table between them. Similar to the HRI scenario used for Study I, the experimenter enacted a series of reach-retract motions toward the target object as though sharing the object and triggering different behaviours of the robot. To show the humanrobot interaction context of the robot gestures produced, a human hand rested, reached for, and retracted from the target before and/or after the robot made its gesture. All videos showed at least one human hand motion. All of the human mo-  59  tions were successful in hitting the target object, and care was taken to produce consistent reaching speed/acceleration motion for all videos. Each unlabeled video contained only one of the 12 robot motions. Counterbalancing was done to avoid ordering effects when viewing the videos by organizing the videos in seven pseudorandom orders. Thus, seven different versions of the same online survey were produced. Online respondents were presented to only one of the seven versions of the survey. These pseudo-random orders were chosen to adhere to the following: at least one version of the survey presents one of the four types of motions first; at least one shows one of the three other types of motion (successful, robotic avoidance, collision) just before the first recording of and at least one shows one of successful,  AHP -based  AHP -based,  motion is presented;  or collsion type of motion  just before the first recording of robotic avoidance motion.  5.1.3 Survey Design The participants answered the following four survey questions for each video regarding their perception of the robot motions: Q1 Did the robot successfully hit the target in the middle of the table? (1.Not successful - 5. Successful) Q2 Please rate your impression of the robot’s motion on the following scale: (1.Not hesitant - 5.Hesitant) Q3 Please rate your impression of the robot’s motion on the following scale: (1.Machinelike - 5.Humanlike) Q4 Please rate your impression of the robot’s motion on the following scale: (1.Smooth - 5. Jerky) Question 2 was aimed to test hypothesis H2.1 (conveying hesitation), while Question 3 was aimed to test H2.2 (human-like motion). Question 3 tests human perception of anthropomorphism from the robot motions and is adopted from the Godspeed questionnaire [4]. Questions 1 and 4 are distractors chosen to mitigate possible priming effect on participants’ responses to questions 2 and 3. Figure 5.2  60  shows a screenshot of one of the 12 pages of the survey shown to the participants. Appendix C presents the survey and its human consent form in more detail. 
The participants were able to play the video as many times as they wished before moving on to the next video. Participant recruitment involved social media tools including Facebook, Twitter, websites and blogs. Participants were not compensated for their participation. The experiment was approved by the University of British Columbia Behavioural Research Ethics Board.  5.1.4 Data Analysis Analyses of the online survey results included a repeated-measures  ANOVA  and  a post-hoc Bonferroni correction on hesitation (Q2) and anthropomorphism (Q3) scores. A significance level of α = 0.05 was used for all inferential statistics. To test H2.1 – that  AHP -based  motions demonstrate significantly higher hesi-  tation scores compared to the scores of other types of motions – the  ANOVA  and  the post-hoc analyses of the hesitation score were conducted with motion types as a factor. Significant findings from these analyses will support H2.1. Likewise, analogous analyses were conducted on the anthropomorphism scores. Significant findings from these results will provide empirical support for H2.2 that AHP-based motions are perceived to be more anthropomorphic than robotic avoidance motions. Considering the three levels of launch accelerations as a factor, a repeatedmeasures  ANOVA  on both the hesitation and anthropomorphism scores of  AHP -  based motions provides empirical testing of H2.3. Lack of significant differences in the two scores will support H2.3 that  AHP -based  motions are perceived to be  hesitant and anthropomorphic regardless of the launch acceleration values used, as long as these values fall within those found in the natural human motion.  61  Figure 5.2: Screenshot from one of the twelve survey pages shown to online participants. Each of the videos embedded in the survey showed the experimenter and the robot reaching for the shared object in the centre of the workspace.  62  5.2  Results  A total of 58 respondents participated in the survey. Missing responses to questions were allowed in proceeding through the survey. Table 5.1 presents the ANOVA results along with a summary of Mauchly’s tests of sphericity. All sphericity violations were corrected using the Greenhouse-Geisser approach. This section discusses results of hesitation and anthropomorphism scores (Q2 and Q3) only, as they are pertinent to testing the hypotheses for Study II. Analyses of perceived jerk and success scores (Q1 and Q4), therefore, are presented in Appendix C.  5.2.1 H2.1: AHP-based Robot Motions are Perceived as Hesitant The ANOVA of hesitation scores (Q2) across the motion types yields a significant result (p < .0001, see Table 5.1). Post-hoc analyses show that human perceptions of hesitations from AHP-based motions are significantly higher than those of robotic avoidance motions (p < .02), and provides strong empirical support for hypothesis H2.1. Post-hoc analyses also provides empirical evidence that AHP-based motions convey hesitation more than successful motions (p < .001) and collisions (p < .001). Figure 5.3 (a) summarizes these results.  5.2.2 H2.2: AHP-based Robot Motions are More Human-like than Robotic Avoidance Motions ANOVA results on the anthropomorphism scores (Q3) are also significant (p < .0001), indicating that at least one of the motion types is perceived as significantly more anthropomorphic than others. 
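The analysis described in Section 5.1.4 can be sketched as follows, assuming the survey responses are exported to a long-format table; the file and column names are invented for this illustration, and sphericity checks, Greenhouse-Geisser corrections, and Bonferroni-corrected post-hoc comparisons are left as separate steps.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long-format responses: one row per (respondent, video); names are illustrative.
df = pd.read_csv("study2_responses.csv")  # columns: subject, motion_type, acceleration, hesitation

# Average over the three acceleration levels so each subject contributes one
# observation per motion type, then fit the repeated-measures ANOVA on the
# hesitation score (Q2) with motion type as the within-subjects factor.
cell_means = df.groupby(["subject", "motion_type"], as_index=False)["hesitation"].mean()
res = AnovaRM(cell_means, depvar="hesitation", subject="subject",
              within=["motion_type"]).fit()
print(res)  # F, degrees of freedom, and p-value for the motion-type effect
```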
Post-hoc analyses indicate that anthropomorphism scores of AHP-based motions, successful motions, and collisions all show an above-neutral mean score and are not significantly different from each other. However, scores of all three motion types are significantly higher than those of the robotic avoidance motions (all with p < .001). This supports hypothesis H2.2 that motions generated using AHP are perceived to be more anthropomorphic than typical robotic avoidance motions. Figure 5.3 (b) graphically summarizes these results.  63  Table 5.1: Two-way repeated-measures ANOVA results comparing perception of hesitation and anthropomorphism across motion types and accelerations. Note the lack of significant differences found in hesitation and anthropomorphism scores across accelerations. Measures showing significant ANOVA results are indicated with the following suffix: ∗∗∗ p < .001 Measure Hesitation∗∗∗ Anthropomorphism∗∗∗  Hesitation Anthropomorphism  ANOVA Motion Type F(2.49, 104.48) = 132.83, p < .0001 F(2.32, 97.54) = 12.45, p < .0001 Acceleration Level F(1.75, 89.07) = 1.58, p = .21 F(1.70, 86.44) = 1.05, p = .34  Mauchly’s Test W (5) = .73, p < .05, ε = .83 W (5) = .63, p < .01, ε = .77 W (2) = .86, p < .05, ε = .87 W (2) = .82, p < .01, ε = .85  5.2.3 H2.3: Non-Expert Observations of AHP-based Motions are Robust to Changes in Acceleration Parameters The online survey responses also support H2.3 that, within the natural range of human motion recorded from the study described in Chapter 3, variations in launch accelerations used to generate AHP do not significantly affect the perception of hesitation or anthropomorphism in these motions. As shown in Table 5.1, hesitation and anthropomorphism scores of  AHP -based  motions, when compared across the  three levels of launch accelerations, show lack of significance (p = .21 and .34 respectively). This indicates that hesitation and anthropomorphism scores of AHPbased motions are not significantly affected by the launch acceleration values used to generate the motions as long as the values are within the range of natural human motion (9.5 to 23.5 m/s2 ). A significant interaction between motion types and acceleration was observed in the anthropomorphism score, F(4.68, 196.65) = 4.03, p < .005, but the effect size was small. The partial eta-squared1 score was only 0.08, indicating that the 1 Partial eta-squared, η 2 , describes the proportion of variance in data that can be attributed to the factor in focus.  64  Figure 5.3: a) Overview of hesitation scores from the five-point Likert scale question demonstrating significantly high hesitation scores for AHPbased hesitation motions; b) Overview of anthropomorphism scores from the five-point Likert scale question demonstrating that AHP-based hesitation motions are perceived to be more anthropomorphic than robotic avoidance motions. interaction effect only accounted for 8% of the overall variance in anthropomorphism. No significant interaction was observed with hesitation scores. Acceleration, by itself, demonstrated negligible effect on both hesitation and anthropomorphism scores, 0.03 and 0.05 partial eta-squared, respectively. That of the motion types, however, was much larger, 0.76 and 0.56 partial eta-squared for hesitation and anthropomorphism, respectively. This implies that types of motions have a larger effect on people’s perception of hesitation and anthropomorphism of a robot’s motion than launch acceleration values used to generate the motions.  
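For reference, the partial eta-squared values reported above follow the standard definition in terms of the ANOVA sums of squares (this is the textbook formula, not one quoted from the thesis):

$$\eta_p^2 = \frac{SS_{\text{effect}}}{SS_{\text{effect}} + SS_{\text{error}}}$$

where $SS_{\text{effect}}$ and $SS_{\text{error}}$ are the effect and error sums of squares for the term in question from the repeated-measures ANOVA.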
65  5.3  Discussion  The results of this study provide strong evidence that  AHP -based  motions are per-  ceived to convey hesitation significantly more than the other types of motions tested. However, it is important to note that robotic avoidance motions are also perceived to convey hesitation. As shown in Figure 5.3, the mean hesitation scores of robotic avoidance motions are above the neutral score, although still significantly below that of AHP -based  AHP -based  motions. This emphasizes the fact that the use of  motions represents one of many approaches to generating motions that  people perceive as hesitant. However, the anthropomorphism scores show a clear perception difference between robotic avoidance and the other type of motions. Robotic avoidance motions received below-neutral mean scores that are significantly lower than the other motion types. This, plus the high anthropomorphism score that  AHP -based  motions  received, demonstrate that motions designed using the AHP trajectory specification produce highly human-like gestures comparable to minimum-jerk reaching trajectories.  5.3.1 Limitations It is important to note that the same channel used to recruit subjects in Study I was also used in this study. As the survey instrument did not collect nor filter online respondents using IP addresses or other identifying information, it is possible that some respondents from the online survey of Study I may also have participated in Study II. Although there are structural differences between the two studies’ online surveys, responses from such participants are likely to be biased. In addition, this study only tested AHP-based motions generated using humanlike range of the launch acceleration parameter (9.5 to 23.5 m/s2 ). Whether the findings from this study will hold for launch acceleration parameter values outside the tested range remains untested. Hence, provided that many industrial robots do not operate at such high levels of acceleration, further testing in a lower acceleration range needs to be conducted.  66  5.4  Summary  Previous chapters demonstrated that naturally occurring human hesitation gestures could be empirically identified, and common features of these gesture trajectories were modeled with AHP. The study presented in this chapter investigated whether AHP  can serve as an effective trajectory design specification for generating human-  like hesitation gestures on a robotic manipulator. Results from this study provide statistically strong evidence, with over 95% likelihood, that non-expert observers perceive  AHP -based  robot motions as con-  veying hesitation significantly more than robotic avoidance motions.  AHP -based  motions were also perceived to be equally human-like as minimum-jerk reachretract motions of a robot, whereas robotic avoidance motions were not. This finding was true regardless of the three different parameter values used to generate the AHP -based  motions.  Based on these findings, the following questions remain: Can non-experts recognize  AHP -based  behaviour of a robot as a hesitation while interacting with a  robot in situ? If so, what are the implications of implementing human recognizable hesitation gestures onto a robot as a resource conflict response mechanism in a human-robot collaboration context? These important questions are explored in an in-person HRI study discussed in the next chapter.  
Chapter 6

Study III - Evaluating the Impact of Communicative Content

The two studies presented in Chapter 3 and Chapter 5 provide strong empirical evidence that humans perceive hesitation in robot motions that a) mimic human wrist motions, and b) are generated using an Acceleration-based Hesitation Profile (AHP). However, these studies did not explore the effect of such anthromimetic hesitation behaviours in a Human-Robot Shared-Task (HRST) context. Considering the larger goal of improving human-robot collaboration, this chapter investigates whether: a) human-robot team performance will be better when the robot uses AHP-based hesitation gestures during collision avoidance as opposed to typical, abrupt collision avoidance motions, and b) human teammates in human-robot teams will have more positive feelings toward a robot teammate when the robot uses AHP-based hesitation gestures than when the robot uses abrupt collision avoidance motions.

In the previous studies, the survey respondents' primary task was to watch video recordings of the robot motions and report on their observations. The study presented in this chapter, Study III, explores the two questions listed above while investigating whether non-expert recognition of AHP-based motions will hold in an in-person, real HRST context. In order to produce high fidelity motions at an anthropometric range of speeds, this experiment used a 7-DOF robot (WAM™, Barrett Technologies, Cambridge, MA, USA) capable of high-acceleration motions (peak of 20 m/s²)¹ instead of the 6-DOF CRS A460 robot employed in Studies I and II.

Similar to the experimental task employed in Study I (Chapter 3), a human subject reached for a shared resource. However, rather than observing robot motions from video recordings of human-robot interaction, the subjects directly interacted with the robot. When, by chance, the two agents reached for the shared resource at the same time, the robot responded in one of the following three ways: (i) it ignored the presence of the resource conflict and continued reaching for the resource (Blind Condition), (ii) it hesitated using an AHP-based trajectory (Hesitation Condition), or (iii) it triggered an immediate stop (Robotic Avoidance Condition). These conditions are analogous to the motion types investigated in the online survey of Study II (Chapter 5).

To investigate the utility of implementing hesitation gestures as a resource conflict response mechanism, this chapter considers the following hypotheses:

H3.1. Robot hesitation motions, designed using AHP, are identified as hesitations when the motions are observed in situ.

H3.2. Non-expert human users perceive a robot more positively when the robot responds with AHP-based hesitation gestures compared to when it does not.

H3.3. A human-robot team yields better performance in a collaborative task when the robot uses AHP-based hesitation gestures than when it does not.

The following sections of this chapter describe the details of the in situ experiment (Section 6.1), outline the results (Section 6.2), and discuss the implications of the findings and conclude (Section 6.3 and Section 6.4).

¹ This value is provided by the manufacturer as the peak end-effector acceleration with a 1 kg load on the robot's end-effector. The robot has a maximum end-effector velocity of 3 m/s. Other technical specifications of the robot are available in [2].
Figure 6.1: Overview of the Study III experiment process. Only five subjects participated in the additional gesture identification experiment.

6.1  Method

An interactive HRST was devised for the three-by-two (condition x encounter) within-subjects experiment. This section describes the details of the experiment in Section 6.1.1, outlines the measurement instruments used in this study in Section 6.1.2, and presents the overall robotic system devised for the experiment in Section 6.1.3. Section 6.1.4 describes the data analysis method employed in this study.

In total, 33 subjects (female: 13, male: 20) were recruited by posting a call for volunteers across the University of British Columbia campus and on the author's lab website. The advertising materials are presented in Appendix C. The age of the participants ranged from 20 to 52 (M: 26.83, SD: 7.24), and they were mostly unfamiliar with robots in general (M: 1.42, SD: .58, from a five-point Likert scale measure, 1 = not familiar at all, 5 = very familiar). By chance, all of the subjects were right-handed. The experiment took place in a lab environment where the experimental area was surrounded by curtains to mitigate the effects of extraneous visual cues.

6.1.1 Experimental Task and Procedure

There were three phases to the experiment: the pre-experiment, the main experiment, and the post-experiment. Figure 6.1 shows an overview of the experimental procedure. This section outlines each of the phases in detail.
Experimental Setup.  The subject sat opposite the robot facing the workspace  setup as shown in Figure 6.2. The subject wore a ring attached to a cable potentiometer (SP1-50, Celesco, Chatsworth, CA, USA) on his/her dominant hand at all times while carrying out the experimental task. Data from the cable potentiometer was used to monitor the approximate extension of the subject’s hand during the trials and to identify the subject’s task state. At the start of each trial, the marbles bin located in the centre of the workspace contained twenty marbles (ten clear and ten blue). A shapes bin was placed in front 71  of the subject representing a parts bin in an assembly line, assigned to the human worker. The bin contained small foam items in various shapes (heart, circle, triangle, rectangle), colours (blue, red, purple, yellow, pink), and sizes (large, medium, small). Two pairs bins located on either side of the shapes bin were designated to contain finished products, i.e., pairs comprising a marble and a shape.  Figure 6.2: Overview of experimental setup for Study III. Subjects sat across from the robot. The subject’s task was to pick up marbles from the marbles bin one at a time, “assemble” it with a shape from the shapes bin according to the example marble-shape pairs. The robot’s task was to inspect the marbles bin. The robot is shown in its initial, ready-toreach position.  Human Task. Once cued by the experimenter, the subject’s task was to pick up each marble, one at a time, from the marbles bin using the instrumented dominant hand, pair the marble with a shape object according to the examples shown, and place the pair into the correct pairs bin. The subject used the non-dominant hand to pick up the foam shapes. The foam shape could be of any colour and size as long as the shape matched that of the respective example pair. To mitigate possible training effects, the examples changed at the beginning of every trial in random order. The following rules were explained to the subject: any instance of collision 72  with the robot and of pairing mistakes made during the task resulted in a penalty score for the team. Pairing mistakes were not to be corrected. Robot Task. The robot’s task was to move back and forth between its initial position and the marbles bin to “inspect” the bin fifteen times, thereby sharing the marbles bin with the subject. The robot was programmed to monitor the subject’s reaching speed and match this speed in its own motions. The subjects were aware of this before the beginning of the experiment. The details of this relationship between human motions and robot motions is described in more detail in the following section. When a resource conflict occurred, the robot responded in one of the following three ways depending on the condition assigned to the trial (see below). • Blind Condition: Regardless of the occurrence of human-robot resource conflict, the robot continued to reach for the shared resource. This resulted in collisions or near collision situations between the two agents. • Hesitation Condition: Upon occurrence of human-robot resource conflict, the robot followed an AHP-based trajectory to exhibit a human-like hesitation gesture and then returned to its initial position. Immediately after returning, it attempted to reach for the shared resource again. • Robotic Avoidance Condition: Upon occurrence of a human-robot resource conflict, the robot abruptly stopped and then retracted back to its initial position. 
Similar to the Hesitation Condition, on return, the robot immediately re-attempted to access the resource. Figure 6.3 presents a flow diagram of the robot’s behaviours for each of the conditions. Figure 6.4 illustrates the flow of interactions for the three conditions. All subjects encountered each of the three conditions once within the first three trials (first encounter) and once within the last three trials (second encounter) of the experiment. The order of the conditions was randomized. At the end of each trial, the subject was conducted away from the experiment area and asked to fill out a questionnaire. 73  Blind Condition  Hesitation Condition  Robotic Avoidance Condition  Calculate AHP parameters and spline coefficients Follow quintic trajectory to reach  Follow quintic trajectory to reach  Follow quintic trajectory to reach  Resource conflict detected?  Resource conflict detected?  Y  N N  t ≥ t1 ?  N  Y Resource conflict detected? N N  Trigger robotic avoidance stop  Switch to AHP-based trajectory  Y  Motion finished? Y  Trigger robotic avoidance stop  Motion finished? Y  Follow quintic trajectory to return Dwell  Follow quintic trajectory to return  Follow quintic trajectory to return  Dwell  Dwell  Figure 6.3: Overview of the robot’s behaviours in the three conditions. Human Task States.  The potentiometer readings were used to recognize occur-  rence of resource conflicts. This was accomplished by identifying four subject states based on the potentiometer readings: dwelling, reaching, reloading, and retracting. The distance of the subject’s hand from the starting location of the hand, |H⃗d a |, was measured using the potentiometer (see Figure 6.2 for frame definition). h  Then a set of conditions, summarized in Table 6.1, were used to identify these four states. A resource conflict was considered to have occurred when the robot had started its motion and the human was in either the reaching or reloading state. The amount of time the subject spent in motions other than reaching, reloading, or retracting was considered dwell time and mostly consisted of sorting through the shapes bin or placing finished pairs into the appropriate pairs bins. The subject’s dwell time and speed of reach were reflected in the robot’s motions. The robot’s dwell times in between its reaches were 80% of the last human dwell time recorded. This imbalance in dwell times between the two agents helped create resource conflicts, while allowing the robot to attempt its inspections more 74  Figure 6.4: Time series plots of trials with the Blind, Hesitation, and Robotic Avoidance Conditions. The plot of the Blind Condition shows collisions at approximately 5 and 9 seconds. The plot of the Hesitation Condition shows the robot’s AHP-based hesitation response at 3 and 10 seconds. Triggering of avoidance behaviours are observed at 2 and 7 seconds in the Robotic Avoidance Condition plot. 75  Table 6.1: Conditions for identifying the four states of task-related human a a motion. Variables H ddwell and H dreload are constant thresholds set at 22 cm and 39 cm, respectively. S prev is the previous state of the human. State (Snew ) Dwelling Reaching Reloading Retracting  Condition a ⃗ ) If (|H dha | ≤ H ddwell a a a ⃗ If (H ddwell < |H dh | < H dreload ) & (S prev = Dwelling) a ≤ |H⃗dha |) If (H dreload a a ) & (S prev = Reload) < |H⃗dha | < H dreload If (H ddwell  frequently. 
An exact match of speed between the person and the robot’s motions would have resulted in high speed and high acceleration motions that were likely to be threatening to the subjects, not to mention causing mechanical stress on the robot actuators. Therefore, the robot traveled at a slower but proportional rate to that of human’s: Rtreach = 4 H treach , where Rtreach is the amount of time the robot is commanded to travel from its initial position to the marbles bin, and H treach is the duration the subject took to travel from the dwelling state to the reloading state. Post-Experiment After all six trials were completed, the experimenter conducted a post-experiment interview and a follow-up gesture identification experiment designed to test H3.1, discussed below. Post-experiment Interview. The experimenter asked each subject three interview questions. In the first question, the subjects were asked which of the six trials they liked the most. In the second question, the experimenter asked whether the subjects felt any discomfort or nervousness working with the robot. The subject’s qualitatively feedback from these two questions were used to confirm the quantitative findings from the questionnaire. After the subjects answered these two questions, the experimenter explained to the subjects the three different resource conflict response behaviours of the robot. The experimenter outlined the Blind Condition as the one in which the robot did not respond to the subject’s motions at all, whereas the robot did respond to avoid col-  76  lisions in the Robotic Avoidance and Hesitation Conditions. The Robotic Avoidance Condition was described as the one in which the robot stopped abruptly. The Hesitation Condition was described as the one in which the robot hesitated. Afterwards, the experimenter asked the third interview question: whether they noticed the difference between the two conditions, and if so, which trial they think was the Hesitation Condition. Gesture Identification Experiment.  After the post-experiment interview, five of  the subjects who answered affirmatively to the last interview question participated in a short additional experiment. This experiment was designed to test whether the context for observing robot motions (that is, watching a video recording of a robot and an actor, versus observing a robot whilst interacting with it) affects the accuracy of identifying AHP-based motions as hesitation gestures. These subjects were explained that the purpose of this additional experiment was to verify whether the motions they perceived as ‘hesitations’ during the main experiment were indeed motions programmed to convey hesitations. The subjects were not given any additional description of the motion differences between the Robotic Avoidance and the Hesitation Conditions. The robot was programmed to continuously attempt to reach for the marble bowl with one second rests between attempts. The subjects were asked to intentionally interrupt the robot’s reaches by reaching for the marble bin (i.e., to trigger the robot’s collision avoidance) and then to verbally label which of the two robot behaviours (hesitation or robotic avoidance) the robot exhibited in its collision avoidance behaviour.  6.1.2 Measuring Human Perception and Task Performance This section describes the instruments used to measure human perception of the robot and and the human-robot team performance from the main experiment phase of this study. 
A questionnaire was used to measure five elements of the subject’s perception of the robot, and three elements of the subject’s perception of the human-robot teamwork. The total of eight perception measurements provided a rich set of human perception data to test H3.2 (a human-robot team will perform better when the robot uses AHP-based hesitation gestures than when it does not).  77  Independent of the questionnaire, the experimenter collected five task-performancerelated measures. Human Perception of Robot Measures Five key measures of human perception of the robot were measured using the Godspeed survey [4]. This survey instrument is not the only standard questionnaire available for evaluating various aspects of  HRI ,  but it has been widely accepted  and used within the field. It includes the following elements of human perception important for this study: animacy, anthropomorphism, likeability, perceived intelligence and perceived safety. Human-Robot Team Perception Measures Unlike human perception of robot measures, standardized questionnaires for human perception of human-robot teamwork have yet to be developed in HRI. Hence, the questions used in this study are borrowed from well-documented and widely used instruments in the neighbouring field of Human-Computer Interaction (HCI). In a human-computer interaction study, Burgoon et al. used the Desert Survival Problem 2 to study the impact the different elements of HCI have on the human participant’s perception of the computer as a teammate and the team’s overall performance [13, 14]. A positive scenario is defined as one in which each team member has positive perception of the other, and one that results in a positive output. Three key team measures were identified: interactive measures, social judgment measures and task outcome measures. Questionnaires designed by Burgoon et al. and others comprise previously tested instruments in psychology and HCI. 2 The Desert Survival Problem is one of the most widely used methods of measuring humanhuman and human-computer teamwork, and was proposed by Lafferty and Eady in 1974 [41]. In this scenario-based game, typically consisting of two agents, participants are given background information about being stranded in a desert with a limited number of items they can take with them in their journey of desert survival. The participants independently rank a given list of items in their order of importance in surviving in the desert. Upon initial ranking of the items, the agents discuss each of the survival items as a team. Afterwards, the agent(s) have the option to changing the ranking of the items. Questionnaires typically follow the experimental game and measures each of the teammates’ influence on each other (measure of how many items were ranked differently post-discussion), how positive the influence is on the team’s performance (measure of how many correct answers were obtained), and also how positively a teammate perceives another member of the team.  78  Of the three team measures mentioned above, the social judgment measure is mainly borrowed from a study by Moon (no relation to the author) and Nass [48]. This measure includes elements such as credibility, dominance, usefulness and attractiveness as a partner. The two experiments presented in [48] also used the Desert Survival Problem and investigated whether people’s responses to computer personalities are similar to their responses to analogous human personalities. 
Considering the survey questions from [13] and [48] that measure interactiveness and social judgment of a partner, only HCI questions applicable to HRI were retained. These questions include measures of dominance, usefulness, and emotional satisfaction. The questionnaire used for this study is presented in Appendix C. Task Performance Measures Analogous to task outcome measures used in Desert Survival Problem experiments, this study employed five performance measures to test the impact AHP-based motions had on the human-robot team. The five performance measures are: • Human performance: The time between the experimenter’s ‘Go’ signal and the time at which the subject completed the task. • Robot performance: The time between the start of the trial and the completion of the robot’s last retracting motion. The experimenter’s ‘Go’ signal coincided with the start of the trial. • Team performance: The larger of the human and robot task completion time. • Mistakes: Each misplaced marble or shape was counted as one mistake, as assessed by the experimenter at the end of each trial. • Collisions: The number of collisions as counted by the experimenter when reviewing the video recording of the experiment.  6.1.3 System Design and Implementation This section describes how the robot’s end-effector motions were generated and managed for the experiment. The experimental setup utilized the 7-DOF WAM 79  Figure 6.5: The software architecture to interface higher level decision making and control algorithm in ROS to lower level real-time control of the WAM arm using the BtClient environment. Client nodes 1 through n represent the various ROS nodes that are used to make higher level control decisions for the robot, as well as interface with the cable potentiometer via an Arduino. running BtClient (Barrett Technologies, Cambridge, MA, USA) as the low-level controller, and Robot Operating System (ROS) (Willow Garage, Menlo Park, CA, USA) as the high-level controller. To interface the high-level controller with the low-level controller, an open source software package, WAMinterface, was used. Figure 6.5 shows the overall controller architecture for the setup. This section is organized in the following order: details of the low-level controller, the high-level controller and its management of gestures, the interfacing algorithms, and the algorithms used for monitoring the human’s states. Low-Level Controller The real-time trajectory controller ran on BtClient. Communication between an external PC with the robot was enabled via the CANbus system outlined in Figure 6.6. Controlling of the robot via segments of quintic splines is enabled by Quintic  80  Figure 6.6: Modified from Figure 34 of Barrett’s WAM user documentation (WAM UserManual AH-00.pdf). The experimental setup uses an external PC with a Xenomai real-time development platform to access the CANbus. Traj Preparation and Quintic Traj Generator functions residing in the BtClient system and customised for this study (see Figure 6.5 for an overview of the system). Quintic Traj Preparation prepares the BtClient system for controlling the end-effector of the robot in Cartesian space via quintic splines. Quintic Traj Generators generates quintic spline trajectories in real-time and servos the robot through the spline. Utilizing the AHP-based trajectory implementation method introduced in Chap81  ter 4, two quintic trajectory generators were programmed into the Quintic Traj Generator. 
The first generator receives endpoint conditions (position, velocity, acceleration) of a desired primary axis (Yb ) trajectory as input, and servos the robot’s end-effector through a quintic spline that adheres to the boundary conditions (see Figure 6.2 for frame definition). In this study, this function is used to generate reaching and retracting motions of the robot. The second generator receives coefficients of the desired quintic spline as input, also in the Yb -axis, and servos the robot’s end-effector through the spline. This second generator allows calculated AHP spline coefficients to be used in servoing the robot. Both the quintic trajectory generators use the following parabolic trajectory to control the Zb -axis motions as a function of R ybw (t): b R zw (t)  = −2(R ybw (t) + R ybo f f set )2 + 0.4(R ybw (t) + R ybo f f set ) + R zbo f f set  (6.1)  High-Level Controller and Generation of Motion Trajectories The high-level algorithms implemented in  ROS  managed the triggering and com-  manding of the robot’s motions. Figure 6.7 provides a schematic overview of the ROS -based  algorithms.  The gesture launcher node managed triggering of the robot’s motions and the robot’s dwelling behaviour. gesture engine received commands from gesture launcher and managed triggering of robotic avoidance or AHP-based motions. Once gesture engine was given the command to start its reaching motion, the human’s task state was monitored to detect occurrence of resource conflicts. If the human remained in the dwelling or retracting state while the robot was reaching, the robot continued its motion. Upon successfully completing its reach, the robot waited for one second so as to ‘inspect’ the marble bin, and retracted back to its starting position. If the human entered the reload or reaching state while the robot was reaching, the system considered this to be an instance of resource conflict. The conflict was handled according to the session condition. As illustrated in Figure 6.3, in the Blind Condition, the robot was programmed to ignore the resource conflict. 82  Figure 6.7: The software system architecture implemented for the HRST experiment. The WAMServer node interfaces btClient control algorithms that operate outside of ROS to directly control the robot. Further detail of the interface and btClient algorithms are outlined in Figure 6.5. In the Robotic Avoidance Condition, a trajectory command was sent to the robot via WAMinterface to stop at a point 0.1 cm past the current position of the robot on its current path and then retract using a quintic trajectory. The 0.1 cm distance accommodated the motion occurring during the inherent communication delay in the system followed by an abrupt stopping motion. A built-in linear trapezoidal trajectory controller in BtClient was used to generate this motion. This produced abrupt stopping motions without triggering the robot’s torque limits that would otherwise result in a disruptive low-level shutdown by robot’s safety systems. In the Hesitation Condition, gesture engine used calculate param node to compute the launch acceleration, a1 , and its temporal location, t1 . Quintic coefficients for AHP splines were calculated at the start of the reaching motion via get s2 s3 coefs node. Both calculate param and get s2 s3 coefs nodes used the implementation method described in Section 4.2.2. Detailed descriptions of these nodes and the calculation algorithms are presented in Appendix D, Section D.2. 
If a resource conflict was detected prior to reaching the launch accel83  eration (t < t1 ), then the robot continued its motion until the launch acceleration was reached. At that point, gesture engine switched its reference trajectory to the first of the three remaining piecewise quintic splines (x2 (t) in Chapter 4) of AHP. If the conflict was detected after the launch acceleration (t ≥ t1 ), the robot resorted to the abrupt stopping behaviour designed for the Robotic Avoidance Condition. High-Low Level Controller Interfacing Algorithm Commands and data from ROS requiring actions from BtClient were called using functions in the WAMClientROSFunctions node, which provides access to a list of client functions in ROS that trigger WAM-related function calls. The respective server module for these client calls is also ROS-based, and is called WAMServerROS. The WAMServerROS node interfaced the ROS-based function calls as a client to its respective server commands in the Socket Server Commands module. This module interfaces the ROS-based client/server system to the BtClient system. Any data, including function calls, passed to the socket, were then decoded into individual input variables. Socket-WAM interface deciphered the data using Socket Commands. The decoded and deciphered datacommand set was then repackaged into a form understood by BtClient using WAM Interface node. This node directly communicated with BtClient’s control thread to control the robot. The relationship between these nodes is presented in Figure 6.5. Monitoring Human States The cable potentiometer interfaced with the system using an Arduino platform. The potentiometer measurements provided an approximate extension of the subject’s hand from the edge of the table forward. It was broadcast in  ROS  using an open-  source rosserial package, and is shown as gate interface in Figure 6.7. A separate node, sensor launch, received these measurements, and identified and recorded durations of the four states of the subject’s motion (dwelling, reaching, reloading, and retracting). The decision maker node used these identified human states to infer occurrence of resource conflicts and make decisions  84  on whether to trigger the appropriate collision avoidance motions for the Hesitation and Robotic Avoidance Conditions. The gesture launcher node used the recorded durations of the subject’s dwelling and reaching states to determine respective dwell times (80% of human dwell time) and duration of reach for the robot (Rtreach = 4x H treach ).  6.1.4 Data Analysis With the three conditions of interaction (Blind, Hesitation, and Robotic Avoidance) and two encounters (First, Second) as factors, a two-way repeated-measures Analysis of Variance (ANOVA) was conducted. For questionnaire responses, Cronbach’s alpha values were calculated in order to ensure that the collected data are internally reliable. For performance measures, Cronbach’s alpha calculations are not necessary, since each performance measure consists of only one element. Once significant results are found from the  ANOVA ,  post-hoc analyses were conducted  to identify which conditions or encounters received significantly higher or lower scores. This method provided empirical testing of hypotheses H3.2 and H3.3. In addition to the quantitative analyses, qualitative responses collected from post-experiment interviews were analyzed to find support for the quantitative findings. Interview notes were coded by two individuals. 
Inter-rater reliability values were calculated via Cohen’s Kappa. Upon confirming a high level of reliability, percentages of the different categories of responses were calculated. This reflected the landscape of the subject’s qualitative feedback, and was compared with the quantitative findings. For the follow-up experiment involving the five subjects, the number of false positives and false negatives were divided by the total number of  AHP -based  and  robotic avoidance motions triggered by the subjects. This provided a quantitative measure of accuracy from human recognition of AHP-based motions.  6.2  Results  This section presents the results from the data analyses outlined in the previous section. The results are presented in the order of the hypotheses. Section 6.2.1 discusses relevant results for testing H3.1, Section 6.2.2 discusses H3.2, and Sec85  tion 6.2.3 discusses H3.3. Of the 33 subjects recruited, data from only 24 subjects (female: 12, male: 12) are analysed and reported here due to to technical problems, subject failure to follow instructions, and/or insufficient occurrence of resource conflicts. All but two subjects had never interacted with the WAM robot before. These two subjects indicated that they had seen the robot at an exhibit or during a lab tour.  6.2.1 H3.1: Can Humans Recognize AHP-based Motions as Hesitations in Situ? The question of whether humans recognize hesitations from AHP-based motions in situ was addressed by the last question in the post-experiment interview and by the results of the gesture identification experiment. Inter-rater reliability for the interview question yielded Cohen’s Kappa of 0.75 (p < .001). This is considered a substantial level of consistency. For the last question of our post-experiment interview, the majority of subjects (79%) said that they noticed the difference between the two stopping motions. The five subjects who participated in the gesture identification experiment each triggered at least five instances of stopping behaviours of the robot. In total, 52 stopping behaviours were triggered and identified in situ; 15 of these were  AHP -  based motions, and 37 were robotic avoidance behaviours. Six robotic avoidance motions were falsely identified as hesitations, and one AHP-based motion was identified as a robotic avoidance motion. This yields a total of 87% accuracy in the subjects’ in situ identification of hesitation gestures from AHP-based motions. This supports H3.1 that the AHP-based trajectories are perceived as hesitations in situ.  6.2.2 H3.2: Do Humans Perceive Hesitations More Positively? This section presents the empirical findings of the eight perception measures collected from the questionnaire. These questions provide a broad test of the hypothesis that when a robot responds with an  AHP -based  hesitation gesture upon  encountering a resource conflict, then humans perceive the robot more positively than when it does not. The qualitative feedback obtained from the post-experiment interview with all subjects supports the quantitative findings: both are presented  86  below. Human Perception Measures Of the eight human perception measures collected, all but the perceived intelligence measure yielded an internal reliability score above 0.70. Hence, the perceived intelligence measure is excluded from the discussion. The number of items used to collect the perception measures was reduced or modified from the original items in Moon and Nass [48] and the Godspeed survey [4]. 
Nonetheless, internal reliability scores from the Study III questionnaire responses show values similar to those of the original measures (see Table 6.2).

Summarised in Table 6.3 are the repeated-measures ANOVA results for the seven internally reliable measures. Across the three conditions, all but the usefulness and emotional satisfaction measures show statistically significant score differences (α = .05). Table 6.4 reports the mean and standard error of the measures and their significant differences across the conditions. Figure 6.8 presents a graphical overview of the scores that show significant differences. Significant differences in scores between the first and second encounters are only found in the perceived safety and animacy measures. A summary of the measures' means, standard errors, and the presence of significant pairwise differences is presented in Table 6.5. Significance level adjustments for all post-hoc analyses were made with the Bonferroni correction. No significant interaction is found between the factors Condition and Encounter. Only the significant findings are presented below, and the complete statistical results are reported in Appendix E.

Dominance. Post-hoc comparisons indicate that the subjects perceive the Blind Condition as significantly more dominant than the Hesitation and Robotic Avoidance Conditions. No significant difference is found between the dominance scores of the Hesitation and Robotic Avoidance Conditions (p = 1.00). Figure 6.8 (a) plots the dominance scores.

Perceived Safety. Scores from the questionnaire suggest a trend that the Hesitation and Robotic Avoidance Conditions are perceived as safer than the Blind Condition (p = 0.10 and p = 0.07, respectively); however, this is not significant at α = .05. No apparent difference is found between the perceived safety of the Hesitation and Robotic Avoidance Conditions (p = 1.00). However, there was a significant increase in perceived safety from the first encounter to the second (p = 0.02). Figure 6.8 (b) graphically summarises these results.

Table 6.2: Internal reliabilities of the eight self-reported measures are presented here. Only the measures with a Cronbach's alpha greater than or equal to 0.70 are analyzed and reported. All but perceived intelligence meet this requirement. Cronbach's alpha values for dominance, usefulness, and emotional satisfaction from Moon and Nass' work were 0.89, 0.80 and 0.86, respectively [48]. Multiple alpha values are reported in Bartneck et al.'s work from cited studies and their range is as follows: anthropomorphism (0.86 to 0.93), animacy (0.70 to 0.76), likeability (0.84 to 0.92), and perceived safety (0.91) [4].

Measure (Cronbach's alpha): Items
Dominance (0.88): Aggressive, Assertive, Competitive, Dominant, Forceful, Independent
Usefulness (0.79): Efficient, Helpful, Reliable, Useful
Emotional Satisfaction (0.84): How much did you like this robot?, How much did you like working with this robot?, Boring (reverse scale), Enjoyable, Engaging
Perceived Safety (0.91): Anxious, Agitated
Likeability (0.87): Like, Kind, Pleasant, Friendly
Animacy (0.78): Apathetic, Artificial, Mechanical, Stagnant
Anthropomorphism (0.81): Artificial, Fake, Machinelike, Moving Elegantly
Perceived Intelligence (0.54): Incompetent (reverse scale), Intelligent

Table 6.3: Two-way repeated-measures ANOVA results are presented for all seven perception measures.
Four measures (dominance, emotional satisfaction, animacy, and anthropomorphism) violate the sphericity assumption, and their ANOVA results have been corrected via the Greenhouse-Geisser approach. Their respective ε-values are reported here. Measures showing significant ANOVA results are indicated with the following suffixes: t p < .10, * p < .05, *** p < .001.

Condition (ANOVA; Mauchly's Test):
Dominance***: F(1.42, 31.16) = 40.92, p < .0001; W(2) = .56, p < .01, ε = .71
Usefulness: F(2, 44) = .37, p = .69; W(2) = .96, p = .69
Emotional Satisfaction t: F(1.48, 32.52) = 2.68, p = 0.10; W(2) = .65, p = .01, ε = .74
Perceived Safety*: F(2, 44) = 4.03, p = .02; W(2) = .98, p = .80
Likeability***: F(2, 44) = 18.36, p < .0001; W(2) = .76, p = .06
Animacy*: F(1.44, 31.66) = 4.96, p = .02; W(2) = .61, p < .01, ε = .72
Anthropomorphism*: F(1.50, 32.88) = 4.92, p = .02; W(2) = .66, p = .01, ε = .75

Encounter (ANOVA):
Dominance: F(1, 22) = .01, p = .94
Usefulness: F(1, 22) = 2.17, p = .16
Emotional Satisfaction: F(1, 22) = 1.89, p = .18
Perceived Safety*: F(1, 22) = 6.46, p = .02
Likeability t: F(1, 22) = 3.64, p = .07
Animacy*: F(1, 22) = 5.63, p = .03
Anthropomorphism t: F(1, 22) = 3.86, p = .06

Likeability. Significant differences were found in likeability scores across the conditions (p < .001); both the Hesitation and Robotic Avoidance Conditions are significantly more liked than the Blind Condition (p < .001 and p < .01 respectively). Of the likeability scores, the Hesitation Condition has the highest score, although the Hesitation and Robotic Avoidance Conditions show no significant score difference (p = .50). The scores also tend to increase from the first to the second encounter, although this trend is without significance (p = .07). Figure 6.8 (c) presents these results.

Table 6.4: The mean and standard error, in parentheses, of the human perception and task performance measures are presented according to Condition. Scores from the Hesitation and Robotic Avoidance Conditions that have significant differences from that of the Blind Condition are marked according to their significance level as follows: t p < .10, * p < .05, ** p < .01, *** p < .001.

Human Perception (Blind; Hesitation; Robotic Avoidance):
Dominance: 3.31 (.17); 2.09*** (.12); 2.03*** (.10)
Usefulness: 3.06 (.13); 3.16 (.16); 3.11 (.12)
Emotional Satisfaction: 3.32 (.14); 3.60 (.15); 3.59 (.14)
Perceived Safety: 3.70 (.20); 4.05 t (.17); 4.09 t (.17)
Likeability: 2.81 (.14); 3.73*** (.13); 3.56** (.11)
Animacy: 2.73 (.14); 3.29* (.13); 3.16 (.14)
Anthropomorphism: 2.70 (.13); 3.18* (.13); 3.03 (.13)

Task Performance (Blind; Hesitation; Robotic Avoidance):
Team Performance: 99.40 (2.96); 137.99*** (3.73); 135.39*** (4.27)
Robot Performance: 90.20 (1.77); 136.55*** (3.96); 133.48*** (4.53)
Human Performance: 92.98 (3.31); 97.73 (3.94); 92.82 (2.97)

Animacy. As shown in Figure 6.8 (d), the Hesitation Condition is perceived significantly more animate than the Blind Condition (p < .05). The Robotic Avoidance, on the other hand, is not perceived significantly more animate than the Blind Condition (p = .17). No significant difference exists between perceived animacy of the Hesitation and the Robotic Avoidance Conditions (p = .85). The animacy measure also shows a significant increase from the first to the second encounter (p < .05).

Anthropomorphism. The robot is perceived more anthropomorphic in the Hesitation Condition than in the Blind Condition (p < .05).
However, the robot is not perceived more anthropomorphic in the Robotic Avoidance Condition compared to the Blind Condition (p = .28). Anthropomorphism scores of the Hesitation and Robotic Avoidance Conditions do not show a significant difference (p = .49). This measure also seems to increase from the first to the second encounter, although this is not significant (p = .06). A graphical summary of the score is shown in Figure 6.8 (e).

Table 6.5: The mean and standard error, in parentheses, of the human perception and task performance measures are divided by the first and second encounters and presented here. Scores that show significant differences between the two encounters are marked as follows: * p < .05, ** p < .01.

Human Perception (First Encounter; Second Encounter):
Dominance: 2.48 (.11); 2.47 (.11)
Usefulness: 3.00 (.13); 3.22 (.15)
Emotional Satisfaction: 3.44 (.13); 3.57 (.13)
Perceived Safety*: 3.78 (.16); 4.12 (.17)
Likeability: 3.26 (.10); 3.48 (.11)
Animacy*: 2.92 (.08); 3.20 (.12)
Anthropomorphism: 2.83 (.10); 3.10 (.13)

Task Performance (First Encounter; Second Encounter):
Team Performance**: 128.42 (3.76); 120.10 (2.98)
Robot Performance**: 124.81 (3.78); 115.36 (2.67)
Human Performance: 94.96 (2.77); 94.05 (3.44)

Figure 6.8: Overview of a) dominance, b) perceived safety, c) likeability, d) animacy, and e) anthropomorphism scores collected from five-point Likert scale questions.

Interview Question: Which Trial Did You Like the Best?

Two individuals coded the answers to the first interview question with a high level of inter-rater reliability (Cohen's Kappa of 0.88, p < .001). Seven subjects chose more than one trial, and their choices are weighted accordingly in our analysis. The Hesitation Condition was preferred the most (42%), followed by the Robotic Avoidance Condition (37%) and the Blind Condition (21%). This is consistent with the quantitative finding from the likeability measure. Surprisingly, the subjects who chose a trial with the Blind Condition expressed that they prefer the aggressiveness of the robot. The subjects who chose trials with the Robotic Avoidance and/or Hesitation Conditions preferred the lower dominance level of the robot. Two interesting comments were made by subjects who chose trials with the Hesitation Condition. One subject expressed that she liked the human-like features of the robot, while the other expressed general preference toward the robot's AHP-based motions. One subject (subject 17) commented "I guess there was this hesitation happening. So I really felt like there was feedback happening here. So it was conscious of not hitting me, and at the same time, try to do its task.", and another (subject 28) said "I liked the first one as well, when it hesitated. It seemed... it kind of reminded of someone who is really, really shy, or like a kid who is totally ready to do his job but then stopping." These comments were made before the experimental conditions were explained to the subjects.

Interview Question: Did You Feel Uncomfortable or Nervous?

Two coders processed the subjects' responses to the second interview question with a substantial level of inter-rater reliability (Cohen's Kappa of 0.75, p < .001). Over half of the subjects (58%) answered yes. The majority of these subjects (57% of all 'yes' responses) attributed this to the collision(s) with the robot.
Others (36% of all ‘yes’ responses) indicated that they disliked instances where the robot seemed inefficient, such as taking too long to “inspect” the marbles bin, or finishing its task later than the subject. The subjects also expressed that, although they were surprised when a collision happened for the first time, the collision itself was not painful. Some even found the collision(s) rather fun and entertaining.  6.2.3 H3.3: Does Hesitation Elicit Improved Performance? This section discusses the robot, human, and team performance measures collected from this study in order to investigate whether a human-robot team performs better when a robot uses  AHP -based  hesitation gestures than when it does not. This  section discusses the three completion times as the main factors to test H3.3. The counts of collisions and mistakes are discussed as supplementary, yet important, factors to consider in weighing the performance of a team.  93  Human, Robot, and Team Task Completion Time Across the three conditions of interaction, significant results are found in the team and robot performance (see Table 6.6 for ANOVA results, and Table 6.4 for pairwise comparisons). For both team and robot performance measures, the Blind Condition shows significantly better performances than the Robotic Avoidance and Hesitation Conditions. The Robotic Avoidance and Hesitation Conditions do not show significantly better or worse performance from each other. The team and robot performance measures also show significant differences between the first and second encounters. As shown in Table 6.5, both the robot and team have significantly faster task completion times in the second encounters. This may be explained by the decreased number of AHP-based motions and robotic avoidance motions triggered in the second encounter. In the first encounter, a total of 102  AHP -based  motions and 182 robotic avoidance motions were triggered,  whereas 90 AHP-based motions and 147 robotic avoidance motions were triggered in the second encounter. Hence, the larger number of stopping motions triggered in the first encounter explains the longer task completion times for the robot and, subsequently, the team in the first encounter. On the other hand, the human’s performance does not suggest significant differences across conditions and encounters. This indicates that human task performance is not significantly affected by the different conflict response behaviours of the robot, nor are there significant training effects throughout the trials. This raises the question of how the team and the robot’s performance scores can significantly decrease from the first to the second encounters while the human performance remains the same. Since triggering of either of the stopping motions was solely dependent on the behaviour of the human subjects, one can postulate that the subjects learned to behave in ways that reduced the number of near-collision situations with the robot while not affecting the performance of their own task. These results fail to support our hypothesis H3.3 that  AHP -based  hesitations  in a human-robot collaboration result in improved task performance. However, it also suggests that the communicative feature of the AHP-based conflict response strategy does not hinder performance of the robot and the team.  94  Table 6.6: Two-way repeated-measures ANOVA results for the three task performance measures are presented here. 
Measures with a significant ANOVA result are highlighted as follows: ** p < .01, *** p < .001.

Condition (ANOVA; Mauchly's Test):
Team***: F(2, 46) = 74.88, p < .0001; W(2) = .91, p = .34
Robot***: F(2, 46) = 103.46, p < .0001; W(2) = .95, p = .59
Human: F(2, 46) = 1.56, p = .22; W(2) = .97, p = .74

Encounter (ANOVA):
Team**: F(1, 23) = 8.51, p < .01
Robot**: F(1, 23) = 11.64, p < .01
Human: F(1, 23) = .15, p = .67

Collisions and Mistakes

Since the robot was programmed to avoid collisions in both the Hesitation and Robotic Avoidance Conditions, no collisions occurred in those conditions. As many as six collisions occurred in trials with the Blind Condition. The difference in the number of collisions between the Blind Condition and the two no-collision conditions is statistically significant (χ2(8, N = 144) = 75.79, p < .001). Section E.3.2 of Appendix E outlines the details of the statistical analysis. Obviously, no statistical significance is found between the number of collisions in the Robotic Avoidance Condition and that of the Hesitation Condition. Nonetheless, the author believes that the occurrence of collisions should be considered in conjunction with the task completion times when considering the overall desirability of human-robot collaboration performance. The distribution of collisions is presented in Table 6.7.

Mistakes are also a quantitative negative measure of team performance. In a real assembly line scenario, making a mistake can be costly in terms of both the resources and the time required to correct the mistake. There is not enough statistical power to report a significant difference across the three conditions (χ2(6, N = 144) = 3.29, p = .77). Nonetheless, in the raw data, the highest number of mistakes was found in the Blind Condition, and the fewest mistakes in the Hesitation Condition (see Table 6.8). Section E.3.1 of Appendix E presents the details of the non-parametric test conducted on the mistakes measure.

Table 6.7: The distribution of the number of collisions that occurred. Each cell indicates the number of subjects who collided with the robot. No collisions occurred in either the Hesitation or the Robotic Avoidance Condition.

Number of Collisions per Trial (0; 1; 2; 3; 4; 5; 6):
Blind: 18; 16; 10; 3; 0; 0; 1
Hesitation: 48; 0; 0; 0; 0; 0; 0
Robotic Avoidance: 48; 0; 0; 0; 0; 0; 0

Table 6.8: The number of mistakes made is summarised by condition. Each cell contains the number of subjects who made mistakes. The total number of mistakes reported in the bottom row is the sum of the mistakes made by all subjects.

(Blind; Hesitation; Robotic Avoidance):
1 Mistake: 4; 3; 4
2 Mistakes: 1; 0; 1
3 Mistakes: 1; 0; 0
Total Number of Mistakes: 9; 3; 6

6.3 Discussion

The findings from Study III, together with those of Study II, demonstrate that non-expert human subjects are able to identify the proposed AHP-based trajectories as human-like hesitations. Compared to Study II, accurate kinematic control of the robot in Study III was difficult to achieve due to the robot's native real-time control architecture. Nevertheless, human subjects recognized the AHP-based motions as hesitations, and accurately distinguished them from abrupt stopping behaviours. The fact that subjects were able to identify nuances from the brief motions of a robotic manipulator alludes to the possible usefulness of anthromimetic gestures in human-robot collaboration. However, the impact of AHP-based hesitations in human-robot collaboration requires further investigation.
The fact that only the Hesitation, and not the Robotic Avoidance, Condition was perceived as significantly more anthropomorphic and animate than the Blind Condition suggests that AHP-based hesitations are perceived in a more positive light. Nonetheless, there is a lack of significance in the human perception and performance data between the two non-aggressive conditions.

The Blind Condition was the least preferred of the three experimental conditions, and the robot in the Blind Condition was perceived to be significantly more dominant and less likeable than in both the Hesitation and Robotic Avoidance Conditions. This was true even though collisions with the robot did not physically harm the subjects. Hence, despite the lack of human perception differences between the Hesitation and Robotic Avoidance Conditions, the fact that the Blind Condition yielded significantly lower human perception scores emphasizes the importance of responding to, rather than ignoring, a human-robot resource conflict. The findings from this study, therefore, indicate that a robot should not ignore human-robot resource conflicts even when the imminent collisions are not expected to physically harm the human user.

Given the evidence that subjects were able to recognize the AHP-based motions as hesitations in this study, it remains unknown why the perception and performance measures were not significantly different between the Hesitation and Robotic Avoidance Conditions. It is worth considering a number of plausible explanations. First, none of the three conditions induced physical harm to the subjects, and all subjects were aware of the possible collisions with the robot. While it is unethical to test conditions in which subjects are physically or psychologically harmed, the subjects may have quickly internalised the lack of real danger, thereby contributing to the non-significance of the results obtained. This is evidenced by the qualitative finding that some of the subjects found the collisions entertaining. This underlines the challenge of creating experiments that reflect potential real-life human-robot collaboration scenarios. The author believes that creating a perception of possible danger while not undermining the safety of subjects would yield results more reflective of reality.

Second, by the nature of the task, the subject's task performance was not directly affected by the performance of the robot; subjects did not need the robot to succeed at its "inspection" task in order to continue their own task. Hence, nothing stopped the subjects from ignoring the robot and performing their tasks, which, in turn, penalised the performance of the robot in both the Hesitation and Robotic Avoidance Conditions. Furthermore, the subjects were aware that they were being timed. This may have motivated the subjects to finish their tasks as quickly as possible, regardless of the robot's behaviour. Thus, it should not come as a surprise that human performance remained unaffected across conditions. Indeed, the Blind Condition generated the best team and robot task completion times. However, this performance came with larger counts of mistakes and collisions. Considering industrial applications in which correcting for mistakes or the occurrence of collisions can seriously affect the completion times of a task and the quality of the finished product, it is important to consider these secondary performance measures.
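One way to weigh these secondary measures together with completion time is to fold them into a single penalty-weighted team score, for example as sketched below. This is a purely illustrative MATLAB sketch: the function name and, more importantly, the penalty weights are hypothetical and are not taken from the study.

function score = penalisedTeamScore(teamTime, nMistakes, nCollisions, mistakePenalty, collisionPenalty)
% Illustrative penalty-weighted team score (lower is better).
%   teamTime         - team task completion time, in seconds
%   nMistakes        - number of mistakes made during the trial
%   nCollisions      - number of human-robot collisions during the trial
%   mistakePenalty   - hypothetical time penalty per mistake, in seconds
%   collisionPenalty - hypothetical time penalty per collision, in seconds
score = teamTime + mistakePenalty * nMistakes + collisionPenalty * nCollisions;
end

Whether the Hesitation Condition would come out ahead of the Blind Condition under such a score depends entirely on the chosen penalty weights.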
In this study, a separate performance score that encompasses the team's task completion time, mistakes, and collisions was not calculated, since the amount of penalty applied to each of the performance measures can affect the final score. However, with a fair means to calculate such penalty scores, it is possible to conjecture that the Hesitation Condition would have shown the highest performance of the three conditions.

6.3.1 Limitations

One of the key limitations of this study lies in the implementation of the Hesitation Condition. Since the decision whether to trigger an AHP-based motion needs to be made before t1 is reached, all other occurrences of resource conflicts after t1 must be dealt with using robotic avoidance motions in order to avoid collisions. Hence, some of the subjects encountered both AHP-based and robotic avoidance motions in the Hesitation Condition. This may have contributed to the lack of significance in the human perception measures as well as the performance measures.

Due to the multiple layers of interfacing required for the system setup, the system did not truly operate in real-time. The ROS-based algorithms were run in a non-real-time environment, whereas the BtClient system operated in real-time. Hence, the ROS-based algorithms sometimes caused delays in sending commands to the real-time system and may have affected human perception of the robot motions.

In addition, due to the limited number of subjects recruited for this study, the number of mistakes made did not yield significant results. In retrospect, the team performance and human performance scores may have demonstrated significant differences across the conditions if the subjects had been asked to correct their mistakes during the experiment, resulting in a natural performance penalty with increased mistakes. The experimental task can also be improved to yield more realistic performance and perception results. If the task were redesigned such that the robot's access to the shared resource affects the performance of the human's task, then a more collaborative human-robot social dynamic could be established.

6.4 Summary

This chapter presented strong evidence that trajectories generated using the AHP-based approach are perceived as more anthropomorphic and convey hesitation more effectively than robotic avoidance motions. Findings from Studies II and III support this to be true whether the motion is observed via a video recording or in situ while the subject is engaged in a collaborative task with the robot. The qualitative and quantitative data from this study indicate that a robot utilizing hesitation gestures is preferred over a robot that does not respond to human motions at all, but is not significantly more liked than a robot exhibiting abrupt stopping behaviours. The robot is considered less dominant and more animate when it hesitates or abruptly stops than when it does not respond to subjects at all. The results show a more positive overall perception of the robot when it responds with hesitation than when it ignores humans, but this perception is not significantly more positive than for robotic avoidance responses. Results of this study also show that, while the use of a "blind" robot that does not respond to human motions yields faster team completion times for a collaborative task than a robot that hesitates or abruptly stops, this may come at the cost of increased collisions and mistakes.
Accounting for the number of collisions, mistakes, and the fact that the completion time of the human's task was unaffected by the addition of AHP-based hesitation gestures, the author remains optimistic that a human-robot collaboration system with hesitation gestures will produce a positive overall increase in task performance.

Chapter 7

Conclusion

This thesis started with the question of what a robot should do when it faces a resource conflict with a human user. It was proposed that a robot could negotiate through this context-dependent problem with the human user if the robot is equipped with natural human-robot communication tools. In an attempt to build a framework that allows human-robot teams to resolve resource conflicts, this thesis focuses on developing a robot's ability to communicate its behaviour states to the user. In particular, this thesis answers the following questions, which are addressed individually in the sections below: a) can an articulated industrial robot arm communicate hesitation? (Section 7.1); b) can an empirically grounded acceleration profile of human hesitation trajectories be used to generate hesitation motions for a robot? (Section 7.2); c) what is the impact of a robot's hesitation response to resource conflicts in a Human-Robot Shared-Task (HRST) (Section 7.3)?

In the three studies presented in this work, anthromimetic hesitation gestures are proposed, designed and experimentally tested as novel and communicative robot responses to answer these questions. Study I and the subsequent analyses in Chapter 4 contribute to a better understanding of human hesitations manifested as kinesic gestures. This new knowledge about the trajectory features of hesitation gestures was used to design hesitation gestures for a robot. Two different studies, Studies II and III, in Chapters 5 and 6 respectively, contribute to the field of nonverbal Human-Robot Interaction (HRI) by demonstrating that humans recognize and differentiate the designed hesitant robot motions. Section 7.4 discusses limitations of the work and outlines recommendations for future work.

7.1 Can an Articulated Industrial Robot Arm Communicate Hesitation?

Study I aimed to answer the question of whether an articulated industrial robot arm can communicate hesitation. In this study, human-human interaction was used to capture wrist trajectories of human hesitation motions. A robot mimicked the human motions with its end-effector to create human-robot interaction analogous to the recorded human-human interaction. Human perception of hesitation from the robot's motions was collected via online surveys. The results of the surveys demonstrate that robotic manipulator end-effector motions can convey hesitation to human observers. These results empirically support the idea that the communicative content of human hesitations can be simplified to 3D Cartesian position trajectories of a person's wrist.

7.2 Can an Empirically Grounded Acceleration Profile of Human Hesitations be Used to Generate Robot Hesitations?

The results of Study I inspired the following questions: what are the characteristic features of the trajectories that convey hesitation to human observers, and can these characteristic features be used to design human-like hesitation gestures for a robot? The qualitative and quantitative analyses described in Chapter 4 aimed to answer the former question.
The qualitative analysis of collected human motions from Study I resolved two different types of hesitations in the presence of a shared resource conflict. R-type hesitations were typified by hand retraction back to the home position. P-type hesitations were typified by the hand hovering or pausing before continuing towards the target once the shared resource became free of conflict. R-type hesitations were quantitatively compared against successful reachretract (S-type) motions. The results of this analysis indicated that R-type motions  101  can be differentiated from S-type motions in the time domain by their acceleration extrema. Based on the quantitative differences in R-type and S-type motions, a hesitation trajectory design specification was developed. This specification – Acceleration-based Hesitation Profile (AHP) – describes a hesitation trajectory in terms of a) how abruptly the robot should halt in relation to how quickly it launched towards the target object, and b) how smoothly the robot should yield and return to its initial position. Study II, presented in Chapter 5, was designed to answer the question of whether the AHP can be used to generate human-like hesitation gestures for a robot. In the study, online participants watched three different  AHP -based  motions along with  other robot trajectories to test the efficacy of AHP. The results from this study suggest that AHP can be used to generate human-recognizable hesitation motions, and demonstrate that communicative content of hesitation gestures can be captured in 2D Cartesian trajectories. Only the motions in the principal axis need to follow an  AHP ,  and the secondary axis can supplement the principal axis to generate a  human-like path of reach towards the target. In Study III, the AHP was implemented in a real-time human-robot interaction system to further answer this question. In the study, the AHP was used to generate hesitation gestures on a robot in response to spontaneously occurring human-robot resource conflicts. The results from the study demonstrate that humans perceive hesitation from AHP-based motions while interacting with the robot and recognize these motions to be different from motions of an abrupt robotic collision avoidance mechanism. Since the  AHP  only specifies robot motions in one dimension, the 6- and 7-  DOF robots used in Studies II and III, respectively, did not use all their DOFs in generating the AHP-based trajectories. Nonetheless, human subjects were able to recognize the robots’  AHP -based  hesitation motions. Based on this strong empiri-  cal evidence, even lower-DOF robots may be able to exhibit human-recognizable hesitations using AHP.  102  7.3  What is the Impact of a Robot’s Hesitation Response to Resource Conflicts in a Human-Robot Shared-Task?  With the positive findings from Studies I and II, Study III was aimed to answer the subsequent question of whether the anthromimetic hesitation response to resource conflicts positively impacts human-robot collaboration. The subjects were asked to participate in a HRST in which the robot either did not respond to resource conflicts, respond to the conflict using  AHP -based  motions, or respond to the conflict using  typical robotic collision avoidance motions. Questionnaires and interview results from Study III found support that a robot is more positively perceived by human users when it responds to conflicts than when it does not. 
This finding was true even though the subjects knew they would not be physically harmed by the robot's lack of response to conflicts. This finding suggests that a robot should always respond to resource conflicts, rather than ignore them, even if the robot is designed to be safe for human-robot collisions.

In addition, the results from Study III provide support for the hypothesis that a robot is more positively perceived by human users when it responds to conflicts with AHP-based motions than when it does not respond at all. However, human perception of the robot and the task performance measures are neither improved nor hindered by AHP-based robot responses with respect to robotic avoidance motions. The anthromimetic conflict response mechanism did not yield any improvements in task completion time when compared with robotic avoidance responses. Nonetheless, counts of the secondary performance measures, including the number of mistakes made and collisions that occurred during the task, suggest that AHP-based robot responses might have yielded improvements in performance had a different human-robot collaboration task been tested.

Although numerous studies have already demonstrated that robots using nonverbal gestures have a positive impact on human-robot teamwork, these studies have been limited to collaborative tasks that include clear turn-taking rules or hierarchical roles for human and robot. Study III contributes to the body of work in nonverbal HRI by exploring nonverbal human-robot communication within a team context that lacks predefined turn-taking rules and an assumed hierarchy.

7.4 Recommendations and Future Work

In light of the findings from this thesis, some key questions remain: Do the different types of hesitations by a robot carry different meanings to its human users? What would be the impact of a robot using one type of hesitation instead of another? When should a robot hesitate or not hesitate? Can we map socially acceptable yielding behaviours of a robot as hesitation trajectory parameters, thereby embedding low-level behaviour-based ethics onto a robot? More importantly, do hesitation behaviours of a robot influence the human user's decision to yield to the robot? If so, then does negotiated resolution of human-robot resource conflicts result in better management of shared resources than when the robot always yields to humans? With the empirically validated hesitation trajectory design devised in this thesis, these important questions can be investigated to improve human-robot collaboration.

With regard to direct follow-up on the studies completed and the robot trajectories proposed herein, it should be noted that the CRS A460 robot used in both Studies I and II followed the reference trajectories five times slower than natural human speed during video recording in order to generate high fidelity motion. This sheds some light on the limitations of Study II, which tested the efficacy of the AHP within human-like launch acceleration parameter values. The human-like range of launch acceleration demands a magnitude of deceleration even larger than that of the launch acceleration to follow the AHP. Like the 6-DOF robot, many industrial robots are not capable of generating high acceleration motions that match human speeds. Hence, prior to implementing the AHP on a slower robot, further testing is necessary to verify the efficacy of the AHP at a lower range of launch accelerations.
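This scaling requirement can be stated compactly under one simplifying assumption. If the AHP prescribes the peak halting deceleration as some fixed multiple k > 1 of the launch acceleration (the halt-to-launch ratio extracted from the human data in Chapter 4; the symbol k is used here only as a stand-in), then a robot whose end-effector acceleration saturates at a_max can faithfully reproduce AHP-based hesitations only for launch accelerations satisfying

    a_launch ≤ a_max / k.

Because this ratio is an empirical quantity derived from the recorded human motions, the admissible launch-acceleration range, and hence whether the resulting gestures are still perceived as hesitations, must be re-verified for each platform.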
Similar to human-human interactions, a system that enables human-robot nonverbal negotiation and resolution of resource conflicts requires a robot to both express its behaviour-states as well as understand what is expressed by humans. This thesis work only addressed robot expression of hesitation to its human observers. The author posits that, with improved technologies to robustly understand human expression of intentions and internal states in real-time, a robot would be able to use hesitation gestures to resolve resource conflicts with its human partners. Although significant perception and performance differences are not observed between AHP-  104  based robot responses and robotic avoidance motions of Study III, greater differences in these teamwork measures may be observed when bidirectional humanrobot nonverbal communication mechanisms are established.  105  Bibliography [1] M. Argyle. The Psychology of Interpersonal Behaviour. Penguin, 5th edition, 1994. ISBN 0140172742. URL http://www.amazon.co.uk/ Psychology-Interpersonal-Behaviour-Penguin/dp/0140172742. → pages 2, 7 [2] Barrett Technology Inc. Datasheet - WAM (WAM-02.2011). Technical report, Barrett Technology Inc., Cambridge, Massachusetts, 2011. → pages 69 [3] C. Bartneck, T. Kanda, O. Mubin, and A. Al Mahmud. Does the Design of a Robot Influence Its Animacy and Perceived Intelligence? International Journal of Social Robotics, 1(2):195–204, Feb. 2009. ISSN 1875-4791. doi:10.1007/s12369-009-0013-7. URL http://www.springerlink.com/index/10.1007/s12369-009-0013-7. → pages 9 [4] C. Bartneck, D. Kuli´c, E. Croft, and S. Zoghbi. Measurement Instruments for the Anthropomorphism, Animacy, Likeability, Perceived Intelligence, and Perceived Safety of Robots. International Journal of Social Robotics, 1 (1):71–81, Nov. 2009. ISSN 1875-4791. doi:10.1007/s12369-008-0001-3. URL http://www.springerlink.com/content/d422u846113572qn/http: //www.springerlink.com/index/d422u846113572qn.pdf. → pages 60, 78, 87, 88 [5] C. Becchio, L. Sartori, M. Bulgheroni, and U. Castiello. Both your intention and mine are reflected in the kinematics of my reach-to-grasp movement. Cognition, 106(2):894–912, Mar. 2008. ISSN 0010-0277. doi:10.1016/j.cognition.2007.05.004. URL http://www.ncbi.nlm.nih.gov/pubmed/17585893. → pages 2, 7 [6] C. Becchio, L. Sartori, and U. Castiello. Toward You: The Social Side of Actions. Current Directions in Psychological Science, 19(3):183–188, June 2010. ISSN 0963-7214. doi:10.1177/0963721410370131. URL 106  http://cdp.sagepub.com/lookup/doi/10.1177/0963721410370131. → pages  2, 7 [7] S. Berman, D. G. Liebermann, and T. Flash. Application of motor algebra to the analysis of human arm movements. Robotica, 26(4):435–451, 2008. ISSN 0263-5747. URL http://portal.acm.org/citation.cfm?id=1394718. → pages 34 [8] J. Bernhardt, P. J. Bate, and T. A. Matyas. Accuracy of observational kinematic assessment of upper-limb movements. Physical Therapy, 78(3): 259–70, Mar. 1998. ISSN 0031-9023. URL http://ptjournal.apta.org/content/78/3/259.abstract. → pages 35 [9] C. L. Bethel and R. R. Murphy. Affective expression in appearance constrained robots. In Proceeding of the 1st ACM SIGCHI/SIGART conference on Human-robot interaction - HRI ’06, page 327, New York, New York, USA, 2006. ACM Press. ISBN 1595932941. doi:10.1145/1121241.1121299. URL http://portal.acm.org/citation.cfm?id=1121299http: //portal.acm.org/citation.cfm?doid=1121241.1121299. → pages 12, 13  [10] M. Bratman. Shared cooperative activity. 
The Philosophical Review, 101(2): 327–341, 1992. URL http://www.jstor.org/stable/10.2307/2185537. → pages 10 [11] C. Breazeal and B. Scassellati. Robots that imitate humans. Trends in Cognitive Sciences, 6(11):481–487, Nov. 2002. ISSN 1879-307X. URL http://www.ncbi.nlm.nih.gov/pubmed/12457900. → pages 13 [12] C. Breazeal, C. Kidd, A. Thomaz, G. Hoffman, and M. Berlin. Effects of Nonverbal Communication on Efficiency and Robustness in Human-Robot Teamwork. In IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 383–388. Ieee, 2005. ISBN 0-7803-8912-3. doi:10.1109/IROS.2005.1545011. URL http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=1545011. → pages 11, 12 [13] J. K. Burgoon, J. a. Bonito, A. Ramirez, N. E. Dunbar, K. Kam, and J. Fischer. Testing the Interactivity Principle: Effects of Mediation, Propinquity, and Verbal and Nonverbal Modalities in Interpersonal Interaction. Journal of Communication, 52(3):657–677, Sept. 2002. ISSN 0021-9916. doi:10.1111/j.1460-2466.2002.tb02567.x. URL 107  http://doi.wiley.com/10.1111/j.1460-2466.2002.tb02567.x. → pages 3, 78,  79 [14] J. K. J. Burgoon, J. J. A. Bonito, B. Bengtsson, A. Ramirez, N. E. Dunbar, and N. Miczo. Testing the interactivity model: Communication processes, partner assessments, and the quality of collaborative work. Journal of Management Information Systems, 16(3):33–56, 2000. ISSN 07421222. URL http://portal.acm.org/citation.cfm?id=1195839. → pages 78 [15] P. R. Cohen and H. J. Levesque. Teamwork. Noˆus, 25(4):487–512, 1991. → pages 10 [16] CRS Robotics Cooperation. A465 Robot Arm User Guide. Technical report, CRS Robotics Cooperation, Burlington, ON, Canada, 2000. → pages 117, 118 [17] W. H. Dittrich and S. E. Lea. Visual perception of intentional motion. Perception, 23(3):253–68, Jan. 1994. ISSN 0301-0066. URL http://www.ncbi.nlm.nih.gov/pubmed/7971105. → pages 7 [18] L. W. Doob. Hesitation: impulsivity and reflection. Greenwood Press, Westport, CT, 1990. ISBN 0313274460. URL http://books.google.com/books?id=Q7B9AAAAMAAJ&pgis=1http: //www.questia.com/read/27488222. → pages 8  [19] T. Ende, S. Haddadin, S. Parusel, T. Wusthoff, M. Hassenzahl, and A. Albu-Schaffer. A human-centered approach to robot gesture based communication within collaborative working processes. In 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 3367–3374. IEEE, Sept. 2011. ISBN 978-1-61284-456-5. doi:10.1109/IROS.2011.6094592. URL http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6094592. → pages 14 [20] T. Fincannon, L. Barnes, R. Murphy, and D. Riddle. Evidence of the need for social intelligence in rescue robots. In 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), volume 2, pages 1089–1095. IEEE, 2004. ISBN 0-7803-8463-6. doi:10.1109/IROS.2004.1389542. URL http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=1389542.  → pages 13  108  [21] T. Flash and N. Hogan. The Coordination of Arm Movements: Mathematical Model. Journal of Neuroscience, 5(7):1688–1703, 1985. URL http://www.jneurosci.org/cgi/content/abstract/5/7/1688. → pages 13, 35, 58 [22] R. Fox and C. McDaniel. The perception of biological motion by human infants. Science, 218(4571):486–487, Oct. 1982. ISSN 0036-8075. doi:10.1126/science.7123249. URL http://www.sciencemag.org/content/218/4571/486.abstract. → pages 7 [23] H. Fukuda and K. Ueda. Interaction with a Moving Object Affects Ones Perception of Its Animacy. Int J Soc Robotics, 2(2):187–193, Mar. 2010. ISSN 1875-4791. 
doi:10.1007/s12369-010-0045-z. URL http://www.springerlink.com/index/10.1007/s12369-010-0045-z. → pages 7 [24] D. B. Givens. The Nonverbal Dictionary of Gestures, Signs & Body Language Cues. Center for Nonverbal Studies Press, Spokane, Washington, 2002. URL http://www.mikolaj.info/edu/Body Language - List of Signs n Gestures.pdf. → pages 8 [25] J. Goetz, S. Kiesler, and A. Powers. Matching robot appearance and behavior to tasks to improve human-robot cooperation. In The 12th IEEE International Workshop on Robot and Human Interactive Communication, 2003. Proceedings. ROMAN 2003., pages 55–60. IEEE, 2003. ISBN 0-7803-8136-X. doi:10.1109/ROMAN.2003.1251796. URL http://ieeexplore.ieee.org/xpls/abs all.jsp?arnumber=1251796http: //ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=1251796. →  pages 11 [26] B. J. Grosz. Collaborative Systems. AI Magazine, 17(2):67–85, 1996. → pages 10 [27] V. B. Gupta. History, Definition and Classification of Autism Spectrum Disorders. In V. B. Gupta, editor, Autistic Spectrum Disorders in Children, chapter 1, pages 85–123. Marcel Dekker Inc.,, New York, 2004. ISBN 0824750616. URL http://books.google.com/books?hl=en&lr=&id=tOZqDydjMMIC&pgis=1. → pages 7 [28] F. Heider and M. Simmel. An Experimental Study of Apparent Behavior. The American Journal of Psychology, 57(2):243 – 259, 1944. URL http://www.citeulike.org/user/justaubrey/article/1107150. → pages 7 109  [29] P. Hinds, T. Roberts, and H. Jones. Whose Job Is It Anyway? A Study of Human-Robot Interaction in a Collaborative Task. Human-Computer Interaction, 19(1):151–181, June 2004. ISSN 0737-0024. doi:10.1207/s15327051hci1901\&2\ 7. URL http://www.informaworld.com/openurl?genre=article&doi=10.1207/ s15327051hci1901&2 7&magic=crossref|| D404A21C5BB053405B1A640AFFD44AE3. → pages 11  [30] A. Holroyd, C. Rich, C. L. Sidner, and B. Ponsler. Generating connection events for human-robot collaboration. In 2011 RO-MAN, pages 241–246. IEEE, July 2011. ISBN 978-1-4577-1571-6. doi:10.1109/ROMAN.2011.6005245. URL http://ieeexplore.ieee.org/xpl/freeabs all.jsp?arnumber=6005245. → pages 11, 12 [31] C.-M. Huang and A. L. Thomaz. Effects of responding to, initiating and ensuring joint attention in human-robot interaction. In 2011 RO-MAN, pages 65–71. IEEE, July 2011. ISBN 978-1-4577-1571-6. doi:10.1109/ROMAN.2011.6005230. URL http://ieeexplore.ieee.org/xpl/freeabs all.jsp?arnumber=6005230. → pages 11, 12 [32] J. ILLES. Neurolinguistic features of spontaneous language production dissociate three forms of neurodegenerative disease: Alzheimer’s, Huntington’s, and Parkinson’s*1. Brain and Language, 37(4):628–642, Nov. 1989. ISSN 0093934X. doi:10.1016/0093-934X(89)90116-8. URL http://linkinghub.elsevier.com/retrieve/pii/0093-934X(89)90116-8. → pages 8 [33] G. Johansson. Visual perception of biological motion and a model for its analysis. Perception & Psychophysics, 14(2):201–211, June 1973. ISSN 0031-5117. doi:10.3758/BF03212378. URL http://www.springerlink.com/index/10.3758/BF03212378. → pages 7 [34] W. Ju and L. Takayama. Approachability: How People Interpret Automatic Door Movement as Gesture. Int J Design, 3(2), Aug. 2009. URL citeulike-article-id:6390837http: //www.ijdesign.org/ojs/index.php/IJDesign/article/view/574/244. → pages 7  [35] T. Kazuaki, O. Motoyuki, and O. Natsuki. The hesitation of a robot: A delay in its motion increases learning efficiency and impresses humans as teachable. 
In 2010 5th ACM/IEEE International Conference on 110  Human-Robot Interaction (HRI), volume 8821007, pages 189–190, Osaka, Japan, Mar. 2010. IEEE. ISBN 978-1-4244-4892-0. doi:10.1109/HRI.2010.5453200. URL http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5453200. → pages 9 [36] J. F. Kelley. An iterative design methodology for user-friendly natural language office information applications. ACM Transactions on Information Systems, 2(1):26–41, Jan. 1984. ISSN 10468188. doi:10.1145/357417.357420. URL http://dl.acm.org/citation.cfm?id=357417.357420. → pages 11 [37] H. Kim, S. S. S. Kwak, and M. Kim. Personality design of sociable robots by control of gesture design factors. In RO-MAN 2008, pages 494–499, Munich, Aug. 2008. Ieee. ISBN 978-1-4244-2212-8. doi:10.1109/ROMAN.2008.4600715. URL http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=4600715.  → pages 13, 14 [38] S. T. Klapp, P. A. Kelly, and A. Netick. Hesitations in continuous tracking induced by a concurrent discrete task. Human Factors, 29(3):327–337, 1987. → pages 8 [39] T. Kroger. Online Trajectory Generation: Straight-Line Trajectories. IEEE Transactions on Robotics, 27(5):1010–1016, Oct. 2011. ISSN 1552-3098. doi:10.1109/TRO.2011.2158021. URL http://ieeexplore.ieee.org/xpl/freeabs all.jsp?arnumber=5887431. → pages 52, 53 [40] D. Kulic and E. Croft. Physiological and subjective responses to articulated robot motion. Robotica, 25(01):13, Aug. 2006. ISSN 0263-5747. doi:10.1017/S0263574706002955. URL http://www.journals.cambridge.org/abstract S0263574706002955. → pages 3, 14 [41] J. C. Lafferty and P. M. Eady. The desert survival problem. Experimental Learning Methods., Plymouth , MI, 1974. URL http://www.citeulike.org/user/mortimer/article/2214983. → pages 78 [42] D. Leathers. Successful Nonverbal Communication: Principles and Applications (3rd Edition). Allyn & Bacon, 1997. ISBN 0205262309. URL http://www.amazon.com/  111  Successful-Nonverbal-Communication-Principles-Applications/dp/ 0205262309. → pages 4  [43] A. Lindsey, J. Greene, R. Parker, and M. Sassi. Effects of advance message formulation on message encoding: Evidence of cognitively based hesitation in the production of multiple-goal messages. Communication Quarterly, 43 (3):320–331, 1995. ISSN 0146-3373. doi:10.1080/01463379509369979. URL http://scholar.google.com/scholar?hl=en&btnG=Search&q=intitle: Effects+of+advance+message+formulation+on+message+encoding: +Evidence+of+cognitively+based+hesitation+in+the+production+of+ multiple-goal+messages#0. → pages 8  [44] V. Manera, C. Becchio, A. Cavallo, L. Sartori, and U. Castiello. Cooperation or competition? Discriminating between social intentions by observing prehensile movements. Experimental brain research. Experimentelle Hirnforschung. Exp´erimentation c´er´ebrale, 211(3-4):547–56, June 2011. ISSN 1432-1106. doi:10.1007/s00221-011-2649-4. URL http://www.ncbi.nlm.nih.gov/pubmed/21465414. → pages 7 [45] M. Mataric. Getting humanoids to move and imitate. In IROS 2000, volume 15, pages 18–24, July 2000. doi:10.1109/5254.867908. URL http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=867908.  → pages 13 [46] D. Matsui, T. Minato, K. MacDorman, and H. Ishiguro. Generating Natural Motion in an Android by Mapping Human Motion. In IROS 2005, pages 1089–1096, 2005. ISBN 0-7803-8912-3. doi:10.1109/IROS.2005.1545125. URL http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=1545125. → pages 13 [47] S. Merlo and P. A. Barbosa. Hesitation phenomena: a dynamical perspective. 
Cognitive Processing, 11(3):251–61, Aug. 2010. ISSN 1612-4790. doi:10.1007/s10339-009-0348-x. URL http://www.ncbi.nlm.nih.gov/pubmed/19916035. → pages 8 [48] Y. MOON and C. NASS. How ”Real” Are Computer Personalities?: Psychological Responses to Personality Types in Human-Computer Interaction. Communication Research, 23(6):651–674, Dec. 1996. ISSN 0093-6502. doi:10.1177/009365096023006002. URL http://crx.sagepub.com/cgi/doi/10.1177/009365096023006002. → pages 79, 87, 88 112  [49] Y. Ogai and T. Ikegami. Microslip as a Simulated Artificial Mind. Adaptive Behavior, 16(2/3):129–147, Apr. 2008. ISSN 1059-7123. doi:10.1177/1059712308089182. URL http://adb.sagepub.com/content/16/2-3/129.abstracthttp: //adb.sagepub.com/content/16/2-3/129.full.pdfhttp: //adb.sagepub.com/cgi/doi/10.1177/1059712308089182. → pages 8  [50] Oxford Online Dictionary. “moral”, 2012. URL http: //oxforddictionaries.com/definition/moral?region=us&q=morals#moral 5. →  pages 1 [51] P. Philippot, R. S. Feldman, and E. J. Coats, editors. The Social Context of Nonverbal Behavior (Studies in Emotion and Social Interaction). Cambridge University Press, 1999. ISBN 0521583713. URL http://www.amazon.com/ Context-Nonverbal-Behavior-Studies-Interaction/dp/0521583713. → pages 7 [52] N. Pollard, J. Hodgins, M. Riley, and C. Atkeson. Adapting human motion for the control of a humanoid robot. In ICRA 2002, pages 1390–1397, Washington, 2002. ISBN 0-7803-7272-7. doi:10.1109/ROBOT.2002.1014737. URL http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=1014737. → pages 13 [53] F. Pollick. Perceiving affect from arm movement. Cognition, 82(2): B51–B61, Dec. 2001. ISSN 00100277. doi:10.1016/S0010-0277(01)00147-0. URL http://dx.doi.org/10.1016/S0010-0277(01)00147-0. → pages 14 [54] F. E. Pollick. The Features People Use to Recognize Human Movement Style. Lecture Notes in Computer Science: Gesture-Based Communication in Human-Computer Interaction, 2915:467–468, 2004. doi:10.1007/b95740. URL http://www.springerlink.com/content/qnbtu7b25t0kguha/. → pages 7 [55] K. Reed, M. Peshkin, M. J. Hartmann, M. Grabowecky, J. Patton, and P. M. Vishton. Haptically linked dyads: are two motor-control systems better than one? Psychological Science, 17(5):365–6, May 2006. ISSN 0956-7976. doi:10.1111/j.1467-9280.2006.01712.x. URL http://www.ncbi.nlm.nih.gov/pubmed/16683920. → pages 10 [56] K. B. Reed, J. Patton, and M. Peshkin. Replicating Human-Human Physical Interaction. In Proceedings 2007 IEEE International Conference on 113  Robotics and Automation, number April, pages 3615–3620, Roma, Italy, Apr. 2007. IEEE. ISBN 1-4244-0602-1. doi:10.1109/ROBOT.2007.364032. URL http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=4209650. → pages 11 [57] B. Reeves and C. Nass. The Media Equation: How People Treat Computers, Television, and New Media like Real People and Places. Cambridge University Press, 1996. ISBN 157586052X. URL http://www.amazon.com/ The-Media-Equation-Television-Information/dp/1575860538. → pages 7 [58] L. D. Riek, T.-c. Rabinowitch, P. Bremner, A. G. Pipe, M. Fraser, and P. Robinson. Cooperative Gestures : Effective Signaling for Humanoid Robots. In HRI 2010, pages 61–68, Osaka, Japan, 2010. ACM/IEEE. → pages 13, 14 [59] P. Rober. Some Hypotheses about Hesitations and their Nonverbal Expression in Family Therapy Practice. Journal of Family Therapy, 24(2): 187–204, 2002. ISSN 1467-6427. doi:10.1111/1467-6427.00211. URL http: //www3.interscience.wiley.com/cgi-bin/abstract/118914502/ABSTRACT. →  pages 8 [60] M. Saerbeck and C. Bartneck. 
Perception of Affect Elicited by Robot Motion. In Proceeding of the 5th ACM/IEEE international conference on Human-robot interaction - HRI ’10, pages 53–60, New York, New York, USA, 2010. ACM/IEEE. ISBN 9781424448937. doi:10.1145/1734454.1734473. URL http://portal.acm.org/citation.cfm?doid=1734454.1734473. → pages 14 [61] M. Salem, K. Rohlfing, S. Kopp, and F. Joublin. A friendly gesture: Investigating the effect of multimodal robot behavior in human-robot interaction. In 2011 RO-MAN, pages 247–252. IEEE, July 2011. ISBN 978-1-4577-1571-6. doi:10.1109/ROMAN.2011.6005285. URL http://ieeexplore.ieee.org/xpl/freeabs all.jsp?arnumber=6005285. → pages 11 [62] L. Sartori, C. Becchio, M. Bulgheroni, and U. Castiello. Modulation of the action control system by social intention: unexpected social requests override preplanned action. Journal of Experimental Psychology. Human Perception and Performance, 35(5):1490–500, Oct. 2009. ISSN 1939-1277. doi:10.1037/a0015777. URL http://apps.isiknowledge.com/full record.do? 114  product=UA&search mode=GeneralSearch&qid=29&SID= 4B4fPD4ijj1e8NBnMHj&page=1&doc=1&colname=WOS. → pages 2  [63] J. F. Sousa-Poza and R. Rohrberg. Body Movement in Relation To Type of Information (Person- and Nonperson-Oriented) and Cognitive Style (Field Dependence) 1. Human Communication Research, 4(1):19–29, Sept. 1977. ISSN 0360-3989. doi:10.1111/j.1468-2958.1977.tb00592.x. URL http: //www.blackwell-synergy.com/doi/abs/10.1111/j.1468-2958.1977.tb00592.x.  → pages 8 [64] C. Suda and J. Call. What Does an Intermediate Success Rate Mean? An Analysis of a Piagetian Liquid Conservation Task in the Great Apes. Cognition, 99(1):53–71, Feb. 2006. URL http://www.eric.ed.gov/ERICWebPortal/detail?accno=EJ729778. → pages 8, 9 [65] S. B. Thies, P. Tresadern, L. Kenney, D. Howard, J. Y. Goulermas, C. Smith, and J. Rigby. Comparison of linear accelerations from three measurement systems during “reach & grasp”. Medical Engineering & Physics, 29(9): 967–72, Nov. 2007. ISSN 1350-4533. doi:10.1016/j.medengphy.2006.10.012. URL http://www.ncbi.nlm.nih.gov/pubmed/17126061. → pages 18 [66] K. R. Thorisson, Justine Cassell. the Power of a Nod and a Glance: Envelope Vs. Emotional Feedback in Animated Conversational Agents. Applied Artificial Intelligence, 13(4-5):519–538, May 1999. ISSN 0883-9514. doi:10.1080/088395199117360. URL http://www.informaworld.com/ openurl?genre=article&doi=10.1080/088395199117360&magic=crossref|| D404A21C5BB053405B1A640AFFD44AE3. → pages 7 [67] M. Tomasello, M. Carpenter, J. Call, T. Behne, and H. Moll. Understanding and sharing intentions: the origins of cultural cognition. Behavioral and Brain Sciences, 28(5):675–735, Oct. 2005. ISSN 0140-525X. doi:10.1017/S0140525X05000129. URL http://www.ncbi.nlm.nih.gov/pubmed/16262930. → pages 7 [68] P. D. Tremoulet and J. Feldman. The influence of spatial context and the role of intentionality in the interpretation of animacy from motion. Perception & Psychophysics, 68(6):1047–58, Aug. 2006. ISSN 0031-5117. URL http://www.ncbi.nlm.nih.gov/pubmed/17153197. → pages 7  115  [69] H. J. Woltring. On Optimal Smoothing and Derivative Estimation from Noisy Displacement Data in Biomechanics. Human Movement Science, 4: 229–245, 1985. → pages 21 [70] T. Yokoi and K. Fujisaki. Hesitation behaviour of hoverflies Sphaerophoria spp. to avoid ambush by crab spiders. Die Naturwissenschaften, 96(2): 195–200, Feb. 2009. ISSN 0028-1042. doi:10.1007/s00114-008-0459-8. URL http://www.springerlink.com/content/u72m427q103m43uk/. → pages 8, 9 [71] H. Zhou and H. 
Hu. Reducing Drifts in the Inertial Measurements of Wrist and Elbow Positions. IEEE Trans on Instrumentation and Measurement, 59 (3):575–585, Mar. 2010. ISSN 0018-9456. doi:10.1109/TIM.2009.2025065. URL http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5247091. → pages 18 [72] H. Zhou, H. Hu, N. Harris, and J. Hammerton. Applications of wearable inertial sensors in estimation of upper limb movements. Biomedical Signal Processing and Control, 1(1):22–32, 2006. ISSN 17468094. doi:10.1016/j.bspc.2006.03.001. URL http://dx.doi.org/10.1016/j.bspc.2006.03.001. → pages 18  116  Appendix A  CRS A460 Robot Specifications This appendix presents key technical specifications of the CRS A460 robot arm that affect the robot motions produced for Studies I and II. Figure 3.3, which shows a schematic of the robot, has been reproduced in Figure A.1 for convenience. Further technical details of the robot can be found in [16]. In Study I, presented in Chapter 3, the robot replicated a set of recorded human motions. In preparing the human trajectories to be replicated by the manipulator’s end-effector, the range of motion of the robot was considered. As outlined in Table B.1, motions of Subject 2 show the maximum range of reach in the Xo -axis for all subjects (39 cm)1 . However, this range of motion of the robot is smaller. The distance between joints 2 and 3 is 30 cm of the robot, and that of joints 3 and 5 is 33 cm. This yields the maximum position for the robot’s wrist, measured as the distance between joints 2 and 5, is 63 cm. In the elbow-up configuration, the wrist reaches its minimum position, 36 cm, when q3 is at its minimum, 70°. This yields a total of 27 cm range of motion for the wrist. Hence, the human motions were scaled accordingly in order to match the maximum achievable range of motion of the robot. In replicating the human motions, it was critical that high fidelity motion be produced by the robot. Human wrist motions recorded from the inertial sensors demonstrated peak linear accelerations in the Xo -axis ranging from 6.0 to 21 cm/s2 , and respective decelerations ranging from -29 to -11 cm/s2 . However, the CRS 1 X -axis o  is the principal axis of motion as defined in Figure 3.2  117  Joint 3  O  d ϕ Joint 6  Joint 2  -θ Joint 5 Joint 4  Z  Joint 1  X  Y  Figure A.1: Schematics of the 6-DOF CRS A460 robot arm in the elbow-up configuration. This figure is repeated from Figure 3.3. Table A.1: Soft limits in position, q, velocity, q, ˙ and acceleration, q, ¨ set for the CRS A460 robot arm. These soft limits are set to prevent the robot from mechanical damage and are more conservative limits than the hard limits provided in [16].  q, (rad) q, ˙ (rad/s) q, ¨ (rad/s2 )  q1 2.93 3.30 18.80  q2 1.50 3.30 18.80  q3 1.94 3.30 18.80  q4 3.07 2.99 37.28  q5 1.76 3.02 37.65  q6 3.07 2.99 37.28  A460 robot is not capable of producing such high range of acceleration. Table A.1 outlines the software limit of the robot employed to ensure safe operation of the robot. The robot’s maximum linear path velocity of the wrist in its Xo -axis is 0.76 m/s, and the maximum velocity of the compounded joint interpolated motions is 4.57 m/s. In order to produce high fidelity replication of human motion using the robot, human motions were converted into reference trajectories of the robot, and slowed down to fit within its maximum capacity. However, this resulted in large overshoot of the joint angles responsible for generating linear forward motions of the end118  effector (q2 and q3 ). 
Calculating the difference between the reference and recorded trajectories yielded ±0.07 radians of error for q2 , and ±0.15 radians for q3 . Upon the process of observing the error between the commanded and recorded joint positions of these two joints, the robot finally replicated human motions five times slower during recording. Video recording of these motions were sped up in order to match the desired human speed of motion in Study I. A MATLAB simulink model was developed to control the robot motions. This model is presented in Figure A.2.  119  Figure A.2: Screen capture of the control scheme used to servo the CRS A460 robot through 3D Cartesian reference trajectories.  120  Appendix B  Human Motion Trajectory Characteristics Contents B.1  Segmentation of Recorded Human Motions . . . . . . . . . . 122 B.1.1  Butterworth Filtering algorithm . . . . . . . . . . . 122  B.1.2  Acceleration-based Segmentation Algorithm . . . . 122  B.2  Overview of Position Profiles . . . . . . . . . . . . . . . . . 124  B.3  Descriptive Statistics of Principal Component Analysis Errors 128  B.4  AHP  Parameter Values from Human Motions . . . . . . . . . 128  This appendix presents quantitative findings of the recorded human motions from Study I (Chapter 3) that inform the development of Acceleration-based Hesitation Profile (AHP) (presented in Chapter 4). Section B.1 presents the filtering and segmentation algorithms used to prepare the human motions for the quantitative analysis outlined in Chapter 4. Section B.2 presents an overview of the collected, filtered, and segmented human motion’s position profiles. As described in Chapter 4, human motion data was simplified from 3D to 2D before being used for generating the AHP. Section B.3 presents the errors associated with the simplification technique employed in the process. To calculate the  AHP  ratio values (C1 ,C2 , B1  and B2 defined in Chapter 4), acceleration extrema values from the collected human motions were extracted. Section B.4 presents these values. 121  B.1 Segmentation of Recorded Human Motions The linear acceleration measurement of the human motions collected from Study I were filtered and used to segment the human wrist trajectory data collected from inertial sensors (see Chapter 3 for details of the human motion collection and use of the sensors). This section presents the details of the algorithm used to filter and segment the recorded human motions. Section B.1.1 describes the algorithm used to filter the data. Section B.1.2 describes the algorithm used to segment the data.  B.1.1 Butterworth Filtering algorithm The following MATLAB function, TruncatedAccPlot wristonly, was used to filter acceleration recordings of human wrist trajectories using a 4th order Butterworth filter. The algorithm described Section B.1.2 uses the output of this program to segment human wrist trajectory data. Presented below is a pseudo code for the TruncatedAccPlot wristonly function.  1  Load s u b j e c t s p e c i f i c data f i l e s  2  Truncate t h e i n e r t i a l sensor data t o e l i m i n a t e data n o t p e r t i n e n t t o t h e  ...  experiment 3  Convert w r i s t a c c e l e r a t i o n r e a d i n g s i n t o cm / s ˆ 2 f o r p r o c e s s i n g  4 5  f o r a l l data p o i n t s , W r i s t a c c e l e r a t i o n i n t h e g l o b a l frame = R o t a t i o n a l m a t r i x f o r t h e  6  ...  
Shoulder −W r i s t sensor * Shoulder −W r i s t A c c e l e r a t i o n v a l u e ; 7  end  8 9  W r i s t a c c e l e r a t i o n i n t h e g l o b a l frame = F i l t e r d a t a ( W r i s t a c c e l e r a t i o n  ...  i n t h e g l o b a l frame ) ;  B.1.2 Acceleration-based Segmentation Algorithm In this section, the AccelerationBasedSegmentation1.m script is presented. This MATLAB script is used to segment the human wrist trajectory data collected from Chapter 3. A pseudo code outlining the flow of the segmentation algorithm is presented below.  122  1  Get b u t t e r w o r t h f i l t e r e d a c c e l e r a t i o n data from t h e  ...  TruncatedAccPlot wristonly function 2  Set a t h r e s h o l d f o r t h e f i r s t maxima , F i r s t M a x i m a T h r e s h o l d  3  Set a t h r e s h o l d f o r t h e t h i r d minima , ThirdMinimaThreshold  4  I n i t i a l i z e Boundary p o s i t i o n t o be zero  5  S t a r t w i t h i =2 , j =2  6 7  while i ≤length ( A c ce l e ra t io n in X )  8  Number of minima =0;  9  i f ( A c c e l e r a t i o n i n X ( i ) >(F i r s t M a x i m a T h r e s h o l d ) ) &&  ... ...  ( A c c e l e r a t i o n i n X ( i −1)≤ ( F i r s t M a x i m a T h r e s h o l d ) ) && ( Boundary ( j − 1 ,3) ==0) ; 10  f o r l = 1: −1: −30 i f ( A c c e l e r a t i o n i n X ( i + l ) ≥ A c c e l e r a t i o n i n X ( i + l −1) )&&  11  ... ...  ( A c c e l e r a t i o n i n X ( i + l −1)≤ A c c e l e r a t i o n i n X ( i + l −2) ) %Find a minima i n r e v e r s e o r d e r i = i + l −3;%minima found , e x i t t h i s f o r l o o p and  12  ...  c o n t i n u e .%r e s e t i t o s t a r t from h e r e . break ;  13  end  14 15  end  16 17 18  % S t a r t s e a r c h i n g f o r t h e t h r e e minimas f o r k = 1 : 1 : 1 0 0 %f o r t h e maximum number o f datasamples i n one  ...  reach motion 19  i f ( A c c e l e r a t i o n i n X ( i +k ) ≥ A c c e l e r a t i o n i n X ( i +k +1) )&&  ... ...  ( A c c e l e r a t i o n i n X ( i +k +1) ≤ A c c e l e r a t i o n i n X ( i +k +2) ) %Find a minima 20  i f ( Boundary ( j − 1 ,3) ==0) && ( minimacount == 0 )  21  Boundary ( j , 1 ) = ( i +k +1) ;  22  Boundary ( j , 2 ) = A c c e l e r a t i o n i n X ( i +k +1) ;  %Timestamp %Record  ...  t h e v a l u e o f AccX a t t h e s t a r t o f motion 23  Boundary ( j , 3 ) = 1 ;  24  j = j +1  25  k = k+1  26 27  %I n d i c a t i v e o f motion s t a r t  Number of minima = Number of minima+1 e l s e i f ( Boundary ( j − 1 ,3) ==1)  28  Boundary ( j , 1 ) = ( i +k +1) ;  29  Boundary ( j , 2 ) = A c c e l e r a t i o n i n X ( i +k +1) ;  %Timestamp %Record  t h e v a l u e o f AccX a t t h e s t a r t o f motion 30  Boundary ( j , 3 ) = 0 . 5 ;  31  Number of minima = Number of minima+1  32  j = j +1  33  k = k+1  34  %I n d i c a t i v e o f q u i n t i c s p l i t  e l s e i f ( Boundary ( j − 1 ,3)==0 . 5 ) &&( A c c e l e r a t i o n i n X ( i +k )  35  ≥ ThirdMinimaThreshold )  36  Boundary ( j , 1 ) = ( i +k +1) ;  123  %Timestamp  ...  Boundary ( j , 2 ) = A c c e l e r a t i o n i n X ( i +k +1) ;  37  %Record  ...  
t h e v a l u e o f AccX a t t h e s t a r t o f motion 38  Boundary ( j , 3 ) = 0 ;  39  Number of minima = Number of minima+1  40  j = j +1  41  k = k+1  %I n d i c a t i v e o f motion end  end  42 43  end  44  i f ( Number of minima ==3) Number of minima =0;  45 46  i = i +k ;  47  break ; end  48 49  end  50  end  51  i = i +1;  52 53  end  B.2 Overview of Position Profiles This section presents an overview of position profiles observed from the human motion trajectories collected in Study I. All recorded data from the inertial sensors used in Study I were filtered using a 4th order Butterworth filter using the algorithm described in Section B.1. The filtered position profiles of human motions are presented in figures B.1, B.2 and B.3. Since mimicking the recorded human motion trajectories was of interest in generating human-robot interaction videos, it was necessary to calculate the recorded human range of motions. Table B.1 presents the minimum and maximum position values collected from all three Study I pilot experiment subjects. These values were calculated via forward kinematics approach outlined in Chapter 3. Motions of Subject 2 show the maximum range of reach in the Xo -axis across the three subjects (39 cm). Appendix A describes how this value compares to the range of motions of the CRS A460 robot used in Studies I and II, and how these motions are scaled for Study I.  124  Table B.1: Range of motion of the three pilot subjects who participated in Study I. These values were calculated via the forward kinematics approach described in Chapter 3. The values in the parentheses are minimum and maximum position values, in that order, of the recorded subject motions.  Xo (cm) Yo (cm) Zo (cm)  Subject 1 (13.21, 50.26) (-12.76, 11.31) (-36.54, -18.21)  Subject 2 (13.84, 52.90) (1.05, 12.54) (-26.66, -6.39)  Subject 3 (6.20, 44.34) (-4.03, 6.80) (-27.15, -9.41)  Subject1 Xo-Axis Wrist Position 50  S-type motion R-type motion  45  Xo-Axis Position (cm)  40  35  30  25  20  15  10 0  25%  50%  75%  100%  Time Normalized  Figure B.1: A few examples of Butterworth-filtered Xo -axis wrist motions from Subject 1 in Study I. This figure is reproduced from Figure 4.5. All trajectories are time-normalized to match the slowest (longest) motion segment.  125  Subject1 Yo-Axis Wrist Position 4  S-type motion R-type motion  2  Yo-Axis Position (cm)  0  2  4  6  8  10  12 0  25%  50%  75%  100%  Time Normalized  Figure B.2: A few examples of Butterworth-filtered Yo -axis wrist motions from Subject 1 in Study I. All trajectories are time-normalized to match the slowest (longest) motion segment.  126  Subject1 Zo-Axis Wrist Position 35  S-type motion  Zo-Axis Position (cm)  R-type motion  30  25  20  0  25%  50%  75%  100%  Time Normalized  Figure B.3: A few examples of Butterworth-filtered Zo -axis wrist motions from Subject 1 in Study I. All trajectories are time-normalized to match the slowest (longest) motion segment.  127  B.3 Descriptive Statistics of Principal Component Analysis Errors In Chapter 4, human motions are characterized by the acceleration profile of the motions’ principal axis. Identifying the principal axis for each motion segment required Principal Component Analysis to simply the 3D motion into 2D. This section outlines the errors from the simplification process. As shown in Table B.2 the mean and standard deviation of the sum of squared errors for each subject are quite small. Table B.2: Sum of squared errors from PCA simplification of Chapter 3 subject motion data. 
Units all in cm2 . Subject 1 2 3  B.4  AHP  Mean 30 22 42  SD 57 24 17  Min 4 5 6  Max 327 185 78  Parameter Values from Human Motions  The acceleration ratios used in  AHP  were calculated from the recorded human ac-  celeration profiles. This section outlines the acceleration and temporal parameter values used to calculate the ratios. All acceleration values reported are based on the filtered and segmented data. A modified version of the MATLAB script used to segment the human trajectories was used to determine extrema of acceleration and their temporal parameters. The segmentation algorithm is outlined in detail in Section B.1.2. Presented in Table B.3 are values of the launch accelerations used for AHP ratio calculation. Presented in Table B.4 are the descriptive statistics of the temporal parameters used to calculate B1 and B2 ratios.  128  Table B.3: Descriptive statistics on a1 values of all three subject data from Chapter 3 presented by motion type. All units are in cm/s2 . Significant ANOVA results are found in the acceleration values of successful reachretract motions, F(2, 130) = 25.502, p < .001, but not for P-type or Rtype hesitation motions, F(1, 3) = 1.92, p = .26 and F(2, 5) = .77, p = .51 respectively. LB and UB indicate the lower and upper bounds of the 95% confidence interval respectively. N  Mean  SD  1 2 3 Total  26 51 56 133  1488 1496 1817 1629  S-type Motions 367 72 1339 207 29 1437 240 32 1752 302 26 1577  1 2 3 Total 1 2 3 Total  SE  95% C.I. LB UB  Subj  Min  Max  1636 1554 1881 1681  602 1031 1141 602  2120 1960 2260 2260  0 4 1 5  P-type Hesitation Motions . . . . . 1292 238 119 913 1671 924 . . . . 924 1219 264 118 891 1546  . 956 924 924  . 1515  4 2 2 8  R-type Hesitation Motions 1689 469 234 944 2436 1156 665 470 -4815 7128 1326 550 389 -3620 6272 1465 512 181 1037 1893  1346 686 937 686  2380 1626 1715 2380  129  1515  Table B.4: Descriptive statistics on the temporal values of acceleration peaks based on all three subject motions collected from Chapter 3. All units are in seconds. LB and UB indicate the lower and upper bounds of the 95% confidence interval respectively. Motion Type  N  Mean  SD  SE  95% C.I. LB UB  Min  Max  S-Type R-Type P-Type Total  134 8 4 146  0.19 0.16 0.24 0.19  t1 0.04 0.04 0.11 0.05  0.00 0.01 0.05 0.00  0.18 0.13 0.07 0.18  0.20 0.19 0.41 0.20  0.12 0.12 0.16 0.12  0.36 0.24 0.4 0.4  S-Type R-Type P-Type Total  134 8 4 146  1.05 1.14 1.09 1.06  (t2 − t1 )/t1 0.33 0.03 0.47 0.16 0.88 0.44 0.36 0.03  0.99 0.75 -0.30 1.00  1.11 1.53 2.49 1.12  0.47 0.58 0.40 0.40  2.38 1.88 2.38 2.38  1.63 3.13 1.75 1.72  (t3 − t2 )/t1 0.44 0.04 1.36 0.48 1.06 0.53 0.64 0.05  1.56 1.99 0.05 1.62  1.71 4.27 3.44 1.82  0.75 1.13 0.65 0.65  4.25 4.86 3.18 4.86  S-Type R-Type P-Type Total  134 8 4 146  130  Appendix C  Advertisements, Consents, and Surveys Contents C.1  Study I Advertisements, Online Surveys, and Consents . . . . 131  C.2  Study II Advertisement, Online Surveys, and Consent . . . . 145  C.3  Study III Advertisements, Questionnaires, and Consent . . . . 149  This appendix outlines the details of the online surveys used for Studies I and II, as well as the questionnaire used for Study III. Consent forms and advertisement materials used for the studies are also presented in this appendix. 
This appendix is divided into three sections: Section C.1 presents the three different consent forms and the online surveys used for Study I; Section C.2 presents the consent form and the online survey used for Study II; and Section C.3 presents the consent form, pre-experiment questionnaire, and main questionnaire used for Study III.  C.1 Study I Advertisements, Online Surveys, and Consents Three different consent forms were used in Study I. One was employed for the pilot experiment involving human-human interaction, in which the participants’ motions during the interaction were captured via two inertial sensors. The consent form is 131  presented in Figure C.1. The second consent form (see Figure C.4) was used for the HH online surveys, where the participants watched videos of the human-human interaction recorded from the pilot experiment. The HH online surveys were advertised online as per Figure C.3. Screen captures of the HH online surveys are presented in figures C.5 to C.7. The third consent form (see Figure C.9) was used for the HR online surveys that presented videos of human-robot interactions analogous to the human-human interactions. The HR online surveys were advertised online using the contents presented in Figure C.8. Screen captures of the HR online surveys are presented in figures C.10 to C.12.  132  Figure C.1: Consent form used for the human-human interaction pilot experiment (page 1).  133  which has restricted secure access and is locked at all times. Only your hand and arm motion will be videotaped, and potentially identifying features such as your face will not be videotaped. If you have any concerns about your treatment or rights as a research subject, you may telephone the Research Subject Information Line in the UBC Office of Research Services at the University of British Columbia, at (604) 822-8598.  By signing this form, you consent to participate in this study, and acknowledge you have received a copy of this consent form. Name (print):______________________________________ Date:_________________ Signature:_______________________________________________  Last revised: April 17, 2012  consent form Motion Capture - rev2.doc  Page 2 of 2  Figure C.2: Consent form used for the human-human interaction pilot experiment (page 2).  134  Re: Call for volunteers for a Human-Robot Interaction study We are offering you the opportunity to contribute to the advancement of human-robot relations. Increasing widespread implementation of robots has revealed that effective robot-human communication is a vital element to creating a friendly shared workspace environment. We are investigating your perception of a human-human interaction (HHI). Once the research is complete the data obtained will be used to attempt the development of a human-robot interaction in which the robot’s actions are perceived in the same manner as the HHI. The study will be conducted via an online survey. It will consist of a short video of HHI, which will be followed by a few questions pertaining to the video. The survey should take no longer than 10 minutes. We need volunteers to participate in the study. A consent form will be available as the first page of the survey. You will be required to complete the form in order to participate in the study. The link to the study is posted at http://caris.mech.ubc.ca/?pageid=4.401  For information/concerns regarding the survey please contact: AJung Moon survey@amoon.ca <omit> (604)822-3147 Thank you very much for your help. 
<omit> AJung Moon, Masters Candidate, UBC Mechanical Engineering ajung.moon@gmail.com Mike Van der Loos, Associate Professor, UBC Mechanical Engineering vdl@mech.ubc.ca Elizabeth Croft, Professor, UBC Mechanical Engineering, ecroft@mech.ubc.ca  Last Revised: April 17, 2012  Call for Volunteers Gesture Survey rev1.docx  Figure C.3: Contents of the online advertisement used to recruit subjects for Study I, HH online surveys. The study was advertised on the Collaborative Advanced Robotics and Intelligent Systems Laboratory website and other social media tools including facebook, twitter, and the author’s website. 135  Figure C.4: Screen capture of the consent form used for the HH online surveys. The same consent form was used for all three HH surveys.  136  Figure C.5: Screen capture of online survey for HH-1. This figure is a repeat of Figure 3.6.  137  Figure C.6: Screen capture of online survey for human-human condition, Session 2.  138  Figure C.7: Screen capture of online survey for human-human condition, Session 3.  139  Re: Call for volunteers for a Human-Robot Interaction study We are offering you the opportunity to contribute to the advancement of human-robot relations. Increasing widespread implementation of robots has revealed that effective robot-human communication is a vital element to creating a friendly shared workspace environment. We are investigating your perception of a human-human interaction (HHI) and/or human-robot interaction (HRI). Once the research is complete the data obtained will be used to attempt the development of a human-robot interaction in which the robot’s actions are perceived in the same manner as the HHI. The study will be conducted via an online survey. It will consist of a short video of HHI and/or HRI, which will be followed by a few questions pertaining to the video. The survey should take no longer than 10 minutes. We need volunteers to participate in the study. A consent form will be available as the first page of the survey. You will be required to complete the form in order to participate in the study. The link to the study is posted at http://caris.mech.ubc.ca/?pageid=4.401  For information/concerns regarding the survey please contact: AJung Moon survey@amoon.ca <omit> (604)822-3147 Thank you very much for your help. <omit> AJung Moon, Masters Candidate, UBC Mechanical Engineering ajung.moon@gmail.com Mike Van der Loos, Associate Professor, UBC Mechanical Engineering vdl@mech.ubc.ca Elizabeth Croft, Professor, UBC Mechanical Engineering, ecroft@mech.ubc.ca  Last Revised: April 17, 2012  Call for Volunteers Gesture Survey rev2.docx  Figure C.8: Contents of the online advertisement used to recruit subjects for Study I, HR online survey. The study was advertised on the Collaborative Advanced Robotics and Intelligent Systems Laboratory website and other social media tools including facebook, twitter, and the author’s website. 140  Figure C.9: Screen capture of the consent form used for the human-robot interaction online surveys. The same consent form was used for all three HR surveys. 141  Figure C.10: Screen capture of online survey for human-robot condition, Session 1.  142  Figure C.11: Screen capture of online survey for human-robot condition, Session 2.  143  Figure C.12: Screen capture of online survey for human-robot condition, Session 3.  144  C.2 Study II Advertisement, Online Surveys, and Consent In Study II, seven versions of the same online survey, each containing a different pseudo-random order of  HRI  videos was used. 
All versions of the survey used a  single consent form. This consent form is presented in Figure C.14. The study was advertised via online media tools including twitter, facebook, and the lab and the author’s website. The advertised material is presented in Figure C.13. Each survey contained 12 pages, each page containing a video and the same four survey questions. A sample page is shown in Figure C.15.  145  Re: Call for volunteers for a Human-Robot Interaction study We are offering you the opportunity to contribute to the advancement of human-robot relations. Increasing widespread implementation of robots has revealed that effective robot-human communication is a vital element to creating a friendly shared workspace environment. We are investigating your perception of a human-human interaction (HHI) and/or human-robot interaction (HRI). Once the research is complete the data obtained will be used to attempt the development of a human-robot interaction in which the robot’s actions are perceived in the same manner as the HHI. The study will be conducted via an online survey. It will consist of twelve short videos (< 30sec) of HHI and/or HRI, which will be followed by a few questions pertaining to the video. The survey should take no longer than 20 minutes. We need volunteers to participate in the study. A consent form will be available as the first page of the survey. You will be required to complete the form in order to participate in the study. The link to the study is posted at http://caris.mech.ubc.ca/?pageid=4.401  For information/concerns regarding the survey please contact: AJung Moon survey@amoon.ca <omit> (604)822-3147 Thank you very much for your help. <omit> AJung Moon, Masters Candidate, UBC Mechanical Engineering ajung.moon@gmail.com Mike Van der Loos, Associate Professor, UBC Mechanical Engineering vdl@mech.ubc.ca Elizabeth Croft, Professor, UBC Mechanical Engineering, ecroft@mech.ubc.ca  Last Revised: April 17, 2012  Call for Volunteers Robot Gesture Survey rev1.docx  Figure C.13: Contents of the online advertisement used to recruit subjects for Study II. The study was advertised on the Collaborative Advanced Robotics and Intelligent Systems Laboratory website. Links to this advertisement was distributed via other online media tools, including twitter, facebook, and the author’s website. 146  Figure C.14: Screen capture of the consent form used for the human-robot interaction online surveys outlined in Chapter 5. The same consent form was used for all surveys in Study II. 147  Figure C.15: This is an example screenshot from one of the 12 pages of survey shown to online participants. All pages of the survey contained the same questions in the same order. Only the contents of the embedded video changed. This screen capture is also presented in Figure 5.2. 148  C.3 Study III Advertisements, Questionnaires, and Consent Subjects for Study III were recruited via posted advertisements at the University of British Columbia Vancouver campus and the lab’s website. Figure C.16 and Figure C.17 present the call for volunteers for the study. In Study III, all subjects signed a consent form (see Figure C.18) prior to beginning the experiment. The subjects then completed a pre-questionnaire that was used to collect demographic information (see Figure C.19). During the main experiment, at the end of each trial, the subjects provided feedback on their perception of the robot using the questionnaire presented in Figure C.20.  
149  Re: [Call for Volunteers] Sorting Hearts and Circles with a Robot – A Human-Robot Collaboration Study At the CARIS Lab (ICICS building, x015), we are conducting an exciting human-robot interaction experiment to investigate whether a robot that uses humanlike gestures can work as a better teammate than robots that don’t when humans and robots collaborate with each other. We would like to invite you to participate in our study. It will take no more than 45 minutes of your time, and you will be asked to interact with a robot at our lab. The study will involve you wearing a cable-based sensor on your finger while sorting a number of small objects in collaboration with a robot. Prior to the experiment and between the sessions of sorting task, you will be asked to fill out a questionnaire. At the very end of the experiment, we will ask you for your feedback on the robot’s behaviours. The experiment will be video recorded as part of the experiment as well as for analysis purposes. However, the recordings will not be made public without your consent. We believe that the results of our study will contribute to creating a friendly human-robot shared workspace environment. A consent form will be available on site, as well as prior to the experiment. You will be required to complete the form in order to participate in the study. To participate in the study, or have concerns about the study, please contact: AJung Moon ajmoon@interchange.ubc.ca <omit> (604)822-3147 Thank you very much for your help. <omit> AJung Moon, Masters Candidate, UBC Mechanical Engineering ajmoon@interchange.ubc.ca Mike Van der Loos, Associate Professor, UBC Mechanical Engineering vdl@mech.ubc.ca Elizabeth Croft, Professor, UBC Mechanical Engineering, ecroft@mech.ubc.ca  Last Revised: April 17, 2012  Call for Volunteers HR Interaction rev1.doc  Figure C.16: Contents of the online advertisement used to recruit subjects for Study III. The study was advertised on the Collaborative Advanced Robotics and Intelligent Systems Laboratory website.  150  Sorting Hearts and Circles  The Human-Robot Experiment @CARIS Lab ICICS x015 http://to.ly/bj61 ajung@amoon.ca <omit> 604-822-3147  The Human-Robot Experiment @CARIS Lab ICICS x015 http://to.ly/bj61 ajung@amoon.ca <omit> 604-822-3147  The Human-Robot Experiment @CARIS Lab ICICS x015 http://to.ly/bj61 ajung@amoon.ca <omit> 604-822-3147  The Human-Robot Experiment @CARIS Lab ICICS x015 http://to.ly/bj61 ajung@amoon.ca <omit> 604-822-3147  The Human-Robot Experiment @CARIS Lab ICICS x015 http://to.ly/bj61 ajung@amoon.ca <omit> 604-822-3147  The Human-Robot Experiment @CARIS Lab ICICS x015 http://to.ly/bj61 ajung@amoon.ca <omit> 604-822-3147  <omit>  The Human-Robot Experiment @CARIS Lab ICICS x015 http://to.ly/bj61 ajung@amoon.ca 604-822-3147  The Human-Robot Experiment @CARIS Lab ICICS x015 http://to.ly/bj61 ajung@amoon.ca <omit> 604-822-3147  <omit>  The Human-Robot Experiment @CARIS Lab ICICS x015 http://to.ly/bj61 ajung@amoon.ca 604-822-3147  <omit>  The Human-Robot Experiment @CARIS Lab ICICS x015 http://to.ly/bj61 ajung@amoon.ca 604-822-3147  151  ?!  Robot with a  Scan it here  The CARIS Lab (ICICS x015) is looking for healthy adult volunteers to participate in a fun human-robot collaboration study.  You will be asked to sort a number of small objects with a robot. With your help, we will be able to investigate whether a robot that uses humanlike gestures will be a better teammate than robots that don’t. The study will run from late October to early November, 2011.  
Visit http://to.ly/bj61 for more information, OR Contact AJung at ajung@amoon.ca to participate. <omit>  Figure C.17: Advertisement posted at the University of British Columbia campus to recruit subjects for Study III.  Figure C.18: Consent form used for Study III. 152  Subject #:  Date:  1. What is your age? ____________ 2. What is your gender?  Female /  3. What is your dominant hand?  Right-handed  /  Male Left-handed  (If you are ambidextrous, please circle the one you’d like to use for the experiment.)  4. How familiar are you in working with a robot arm? Not familiar at all  1  2  3  4  5. Have you ever worked or interacted with this particular robot?  5 Yes  Very familiar /  No  6. If you answered ‘Yes’ in the above question, please describe your experience with the robot below:  Figure C.19: Pre-questionnaire used to collect demographic information from the Study III subjects. 153  Subject #:  Condition #:  t(complete):  Collision:  1. Please rate YOUR emotional state on these scales: Anxious 1 2 3 Agitated 1 2 3 2. How much did you like this robot? Not at all 1 2  3  3. How much did you like working with this robot? Not at all 1 2 3  4 4  5 5  Mistakes:  Relaxed Calm  4  5  Very much  4  5  Very much  4. For each word below, please indicate how well it describes your INTERACTION with the robot. Describes very poorly  Describes very well  Boring  1  2  3  4  5  Enjoyable  1  2  3  4  5  Engaging  1  2  3  4  5  5. For each word below, please indicate how well it describes the ROBOT you just worked with. Describes very poorly  Describes very well  Aggressive  1  2  3  4  5  Independent  1  2  3  4  5  Helpful  1  2  3  4  5  Assertive  1  2  3  4  5  Efficient  1  2  3  4  5  Useful  1  2  3  4  5  Competitive  1  2  3  4  5  Dominant  1  2  3  4  5  Reliable  1  2  3  4  5  Forceful  1  2  3  4  5  6. Please rate your impression of the ROBOT on these scales: Apathetic Mechanical Pleasant Intelligent Fake Incompetent Machinelike Friendly Moving elegantly Stagnant Like Kind Artificial  1 1 1 1 1 1 1 1 1 1 1 1 1  2 2 2 2 2 2 2 2 2 2 2 2 2  3 3 3 3 3 3 3 3 3 3 3 3 3  4 4 4 4 4 4 4 4 4 4 4 4 4  5 5 5 5 5 5 5 5 5 5 5 5 5  Responsive Organic Unpleasant Unintelligent Natural Competent Humanlike Unfriendly Moving rigidly Lively Dislike Unkind Lifelike  Figure C.20: Main questionnaire used to collect the subject’s perception of the robot in Study III. 154  Appendix D  Acceleration-based Hesitation Profile Trajectory Characterisation and Implementation Algorithms Contents D.1  Offline Acceleration-based Hesitation Profile (AHP)-based Trajectory Generation . . . . . . . . . . . . . . . . . . . . . . . 156  D.2  AHP -based  Trajectory Implementation for Real-time HumanRobot Shared Task . . . . . . . . . . . . . . . . . . . . . . . 160 D.2.1  Management of the Robot’s Task . . . . . . . . . . . 160  D.2.2  Management of Real-time Gesture Trajectories . . . 162  D.2.3  Calculation of a1 and t1 Parameters for AHP-based Trajectories . . . . . . . . . . . . . . . . . . . . . . 163  D.2.4  Generation of AHP Spline Coefficients . . . . . . . . 163  D.2.5  Human State Tracking and Decision Making . . . . 164  This appendix presents the details of the algorithms used to generate robot trajectories based on the Acceleration-based Hesitation Profile (AHP). As out155  lined in Chapter 4,  AHP -based  trajectories can be generated offline, and as a re-  sponse mechanism for a real-time Human-Robot Shared-Task (HRST). 
Section D.1 presents MATLAB implementation of generating Section D.2 presents an implementation of  AHP  AHP -based  trajectories offline.  as a real-time resource conflict  response mechanism in Robot Operating System (ROS) and BtClient environment operating a 7-DOF robotic manipulator used in Study III (WAM™, Barrett Technologies, Cambridge, MA, USA).  D.1  Offline AHP-based Trajectory Generation  This section discusses in detail the generation and implementation of reference trajectories used by the 6-DOF robot for Study II. As described in Chapter 5, 12 different motions were generated and tested. Figure D.1 provides an overview of the trajectory generation process. All reference trajectory generation codes presented in this section are written in MATLAB. Thethe quinticpoints wz outputAV function generates time-series position data for the robot by calling the quinticpoints wz outputAV function. Upon receiving reference trajectories for individual motion segments from the quinticpoints wz outputAV function, the script appends these motions as a stream of multiple motion segments. The following pseudo code outlines the algorithm for this script.  1  I n i t i a l i z e c o n s t a n t s : maximum p o s t i o n s o f t h e r o b o t ,  i n i t i a l position  ...  c o o r d i n a t e s , f a c t o r t o slow down t h e t r a j e c t o r y 2  L i n e a r l y i n t e r p o l a t e from s t a r t i n g p o s i t i o n o f r o b o t t o t h e i n i t i a l p o s i t i o n  3  f o r i = 1 : 1 :N [ Ax , Vx , m o t i o n i n X , m o t i o n i n Z ] =  4  ...  q u i n t i c p o i n t s w z o u t p u t A V ( MotionType ( i ) , a1 ( i ) , t 1 ( i ) ,  ...  minimum z axis position ( i ) , slow down factor ) ; 5  Append m o t i o n i n X t o e a r l i e r X−a x i s t r a j e c t o r i e s  6  Append m o t i o n i n Z t o e a r l i e r Z−a x i s t r a j e c t o r i e s  7  Append an empty t r a j e c t o r y t o r e s t between motions f o r both X− and Z−a x i s t r a j e c t o r i e s  8  end  9 10  Append t i m e stamps t o m o t i o n i n X  11  Append t i m e stamps t o m o t i o n i n Z  156  ...  Once the quinticpoints wz outputAV function is called, it receives the type of motion to be generated, two parameter values, minimum Z-axis position, and a scaling factor to slow down the reference trajectory. Using this information, the function generates the requested  AHP -based  trajectories in the X-axis, gener-  ates a Z-axis that accommodates the X-axis, and returns the position profile of the trajectory. The following pseudo code outlines this algorithm.  1  D e f i n e t h e AHP r a t i o s , C1 , C2 , B1 , and B2.  2  Calculate a c c e l e r a t i o n p r o f i l e of Splines 1 through 3  3  Compute f i n a l a c c e l e r a t i o n v a l u e o f S p l i n e 3  4  Compute f i n a l v e l o c i t y v a l u e s f o r S p l i n e s 1 t h r o u g h 3  5  Compute f i n a l p o s i t i o n v a l u e s f o r S p l i n e s 1 t h r o u g h 3  6  Compute S p l i n e 4 u s i n g t h e f i n a l p o s i t i o n , v e l o c i t y , and a c c e l e r a t i o n  ...  value of Spline 3 7  Append p o s i t i o n t r a j e c t o r i e s o f S p l i n e s 1 t h r o u g h 4  8  C a l l g e n z q u i n t i c s f u n c t i o n and r e c e i v e f o u r Z−a x i s s p l i n e s , z1 , z2 , z3  ...  and z 4 . 9  Solve s y m b o l i c Z−a x i s s p l i n e s z1 , z2 , z3 and z4 from g e n z q u i n t i c s a t  ...  
every sampling p e r i o d 10  Append t h e Z−a x i s p o s i t i o n t r a j e c t o r i e s  The Z-axis calculation of the quinticpoints wz outputAV function is accomplished by calling the gen z quintics function. This function symbolically produces four quintic trajectories that span from one end of a spline to the next. The first Z-axis spline, for example, spans the entire duration of the first X-axis  AHP  spline, and the second Z-axis spline starts at t1 and spans the en-  tire duration of the second X-axis  AHP  spline and so on. Presented below is the  gen z quintics algorithm.  1  f u n c t i o n [ z1 , z2 , z3 , z4 ] = g e n z q u i n t i c s ( zmax1 , zlow , zmax2 , z1t , z2t , z3t , z 4 t )  2  syms t ;  3  a max1 = 0 . 0 2 ;  4  a max2 = 0 . 0 2 ;  5  a low = −a max1 ;  6 7  z1 = q u i n t i c s p l i n e s y m g e n ( 0 , 0 , 0 , zmax1 , 0 , a max1 ) ;  8  z1 = subs ( z1 , { t } , { t / z 1 t } ) ;  9  z2 = q u i n t i c s p l i n e s y m g e n ( zmax1 , 0 , a max1 , zlow , 0 , a low ) ;  157  ...  10  z2 = subs ( z2 , { t } , { t / z 2 t } ) ;  11  z3 = q u i n t i c s p l i n e s y m g e n ( zlow , 0 , a low , zmax2 , 0 , a max2 ) ;  12  z3 = subs ( z3 , { t } , { t / z 3 t } ) ;  13  z4 = q u i n t i c s p l i n e s y m g e n ( zmax2 , 0 , a max2 , 0 , 0 , 0 ) ;  14  z4 = subs ( z4 , { t } , { t / z 4 t } ) ;  158  Start the Reference Trajectory Generator (DataPt to SignalGenerater Long Apr4 acc.m)  Reference Trajectory Generator calls Quintic Point Generators (quinticpoints wz outputAV.m)  Quintic Point Generator calculates time index, and acceleration, velocity, and position version of the four spline AHP-based motions  Quintic Point Generator calls Analytic Quintic Generator and produces four connected splines for the Z-axis motion (gen z quintics.m)  Quintic Point Generator receives the four splines in analytic form, and samples them with the generated time index at ts . Figure D.1: Overview of the AHP-based trajectory generation process.  159  D.2  AHP -based  Trajectory Implementation for Real-time Human-Robot Shared Task  This section presents pseudo codes of the algorithms implemented in ROS environment to conduct the experiment in Study III. The relationship between the nodes have been outlined in Chapter 6, and a graphical overview of these nodes are replicated here (see Figure D.2). Section D.2.1 presents the pseudo code for the gesture launcher node, which manages the running of the entire experiment. Section D.2.2 presents the gesture engine node that manages the triggering of different trajectory splines for different experimental conditions. The gesture engine node uses an independent node to calculate the AHP parameters necessary for computing AHP spline coefficients. Algorithms for this node, the calculate parameter node, is presented in Section D.2.3. Section D.2.4 discusses the get s2 s3 coefs node that uses the calculated  AHP  parameter values from calculate parameter  to compute coefficients for splines 2 and 3 of  AHP -based  trajectories. Finally,  Section D.2.5 presents the decision maker node that is used to keep track of the four human task states. The decision maker node is called by the gesture engine node to determine whether a collision is imminent or not. All nodes presented in this section are written in C++.  D.2.1  Management of the Robot’s Task  In this section, the gesture launcher node that manages the robot’s task behaviour is described. 
Once triggered, this node is provided with, by the experimenter, the number of times the robot must successfully inspect the marbles bin, and the experimental condition in which it should operate.  1  Sleep f o r t h e i n i t i a l d w e l l i n g t i m e o f 4 seconds  2  C a l l g e s t u r e e n g i n e t o move  3  m o t i o n c o u n t ++ i f ( S−t y p e motion completed )  4 5 6  s c o u n t ++ else  7  i f (R−t y p e motion completed )  8  r c o u n t ++  160  Figure D.2: The software system architecture implemented for the Study III HRST experiment replicated from Figure 6.7. The WAMServer node interfaces btClient control algorithms that operate outside of ROS to directly control the robot. Further detail of the interface and btClient algorithms are outlined in Figure 6.5.  9  e l s e i f ( R o b o t i c Avoidance motion completed ) r a c o u n t ++  10 11 12 13 14  else report error w h i l e ( s c o u n t < requested number f o r S−t y p e reach ) { Get human t a s k s t a t e t i m e s from d e c i s i o n m a k e r node  15  Sleep f o r 80% o f human d w e l l t i m e  16  Request motion from g e s t u r e e n g i n e node w i t h reach t i m e 4x human  ...  reach t i m e i f ( S−t y p e motion completed )  17 18 19  s c o u n t ++ else  20  i f (R−t y p e motion completed )  21  r c o u n t ++  22  e l s e i f ( R o b o t i c Avoidance motion completed ) r a c o u n t ++  23 24 25 26 27  else report error i f (R−t y p e o f R o b o t i c Avoidance motion completed ) Request motion from g e s t u r e e n g i n e node w i t h reach t i m e 4x human reach t i m e  161  ...  28  m o t i o n c o u n t ++;  29  else report error  30 31  Report t a s k c o m p l e t i o n t i m e  32  Return  D.2.2  Management of Real-time Gesture Trajectories  In this section the gesture engine node that manages the triggering of different robot motion trajectories (including  AHP -based  motions) is presented. This  node receives commands from the gesture launcher node that is responsible for tracking the dwell times for the robot and triggering the robot to start its reaching motion via the gesture engine node. A flow diagram outlining this node is presented in Figure 6.3.  1  Initialize clients  2  Call calculate param  3  i f ( C a l l c a l c u l a t e p a r a m == success )  4  Call get s2 s3 coefs  5  i f ( C a l l g e t s 2 s 3 c o e f s == success )  6  Call move to cartesians  7  Sleep u n t i l ( t = t 1 − 0 . 0 4 )  8  Call decision maker  9  i f ( Experiment Condition != Blind ) i f ( Experiment C o n d i t i o n == H e s i t a t i o n && d e c i s i o n m a k e r  10  ...  == c o n f l i c t i m m i n e n t ) 11  C a l l m o v e t o c a r t e s i a n q u i n t f o r s p l i n e 2 movements  12  Wait u n t i l t r a j e c t o r y f i n i s h e d  13  Call move to cartesian quint with spline 3 c o e f f i c i e n t s  14  Wait u n t i l t r a j e c t o r y f i n i s h e d else  15  while ( ! 
abort motion )  16 17  Call decision maker  18  i f ( d e c i s i o n m a k e r == c o n f l i c t i m m i n e n t )  19  Get c u r r e n t p o s i t i o n  20  Move t o ( c u r r e n t p o s i t i o n ) + 0.01 Wait u n t i l t r a j e c t o r y f i n i s h e d  21  else  22  Sleep f o r 0.05 seconds  23  i f ( T r a j e c t o r y F i n i s h e d == True )  24  Sleep f o r 1 second  25 26  else  162  Wait u n t i l t r a j e c t o r y f i n i s h e d  27  Call move to cartesian to r e t r a c t  28  Wait f o r t r a j e c t o r y f i n i s h e d  29 30  Return  Calculation of a1 and t1 Parameters for AHP-based Trajectories  D.2.3  This section describes how the key parameters, a1 and t1 , are calculated in the ROS environment via the calculate param server node. This node calculates the two key AHP parameters, and provides the information to the gesture engine node (see Section D.2.2) to allow smooth transition to take place between the Stype motions (successful reach-retract motions generated by quintic splines) and the  AHP  splines. The following pseudo code outlines the calculate param  node.  1  I n p u t : ( i n i t i a l and f i n a l c o n d i t i o n s o f a q u i n t i c s p l i n e ) q0 , v0 , a0 ,  ...  q1 , v1 , a1  −36* v0 − 9 * a0 + 3 * a1 −24* v1 + 60 * q1 360 * q0 +192 * v0 +36 * a0 −24* a1 +168 * v1 −360* q1 D e f i n e a = −360* q0 −180* v0 −30* a0 +30 * a1 −180* v1 +360 * q1  2  Define c =  3  Define b =  4  −60* q0  5 6  D e f i n e t b = ( − b + s q r t ( b * b − 4* a * c ) ) / ( 2 * a )  7  D e f i n e t b 1 = ( − b − s q r t ( b * b −4 * a * c ) ) / ( 2 * a )  8  i f ( tb ≤ tb1 )  9 10 11  t1 = tb * f i n a l t i m e ; else i f ( tb1 < tb ) t1 = tb1 * f i n a l t i m e ;  12 13 14  C a l c u l a t e p o s i t i o n a t a1 u s i n g t 1  15  C a l c u l a t e v e l o c i t y a t a1 u s i n g t 1  16  C a l c u l a t e a1 ( launch a c c e l e r a t i o n ) u s i n g t 1  17  Return  D.2.4  Generation of AHP Spline Coefficients  This section describes how the coefficients for the  AHP  splines are calculated for  real-time trajectory planning. The get s2 s3 coefs node is a server node in ROS  that generates the coefficients for  AHP  163  splines 2 and 3. The  AHP  equations  presented in Chapter 4 is used to calculate the spline coefficients. The following pseudo code outlines the get s2 s3 coefs node.  1  D e f i n e c h a r a c t e r i s t i c a c c e l e r a t i o n r a t i o c o n s t a n t s c1 , c2 , b1 , b2  2 3  Spline2 coef5 =  a1 * 0 . 1 * ( 1 + c1 ) / ( b2 * b2 * b2 ) ;  4  Spline2 coef4 =  a1 * ( − 0.25) * ( 1 + c1 ) / ( b2 * b2 ) ;  5  Spline2 coef3 =  0;  6  Spline2 coef2 =  a1 * 0 . 5 ;  7  Spline2 coef1 =  v e l o c i t y of Spline1 at t1 ;  8  Spline2 coef0 =  p o s i t i o n of Spline1 at t1 ;  9 10  Compute S p l i n e 2 f i n a l p o s i t i o n  11  Compute S p l i n e 2 f i n a l v e l o c i t y  12  Compute S p l i n e 2 f i n a l a c c e l e r a t i o n  13 14  Spline3 coef5 =  req . a * 0.1 * ( − c1−c2 ) / ( b3 * b3 * b3 ) ;  15  Spline3 coef4 =  req . a * ( − 0.25) * ( − c1−c2 ) / ( b3 * b3 ) ;  16  Spline3 coef3 =  0;  17  Spline3 coef2 =  req . a * ( − 0 . 5 ) * c1 ;  18  Spline3 coef1 =  dp2 f ;  19  Spline3 coef0 =  res . p2 f ;  20 21  Compute S p l i n e 3 f i n a l p o s i t i o n  22  Compute S p l i n e 3 f i n a l v e l o c i t y  23  Compute S p l i n e 3 f i n a l a c c e l e r a t i o n  24 25  Return  D.2.5  Human State Tracking and Decision Making  In this section, the decision maker node is presented. 
This node monitors the cable potentiometer readings and is used to keep track of human task states and determine occurrence of collisions. The following pseudo code describes how the human states are determined, and how the duration in each of the four states are recorded.  1  Define d w e l l t h r e s h o l d  2  Define r e l o a d t h r e s h o l d  3 4  i f ( cable < dwell ) {  164  5  i f ( s t a t e == d w e l l i n g )  6  dwell start = curr time  7  state = dwelling  8  e l s e i f ( s t a t e == r e a c h i n g ) dwell start = curr time  9  state = dwelling  10 11  else s t i l l d w e l l i n g , o r e r r o r . Do n o t h i n g  12 13 14 15  else i f ( cable > dwell ) i f ( cable > reload ) i f ( s t a t e == r e l o a d i n g )  16 17  reload start = curr time  18  reach time = curr time − r e a c h s t a r t e l s e i f ( s t a t e == r e t r a c t e d and r e t u r n e d t o r e l o a d i n g )  19  reload start = curr time  20  state = reloading  21 22  else i f ( s t a t e == r e a c h i n g )  23  do n o t h i n g  24  e l s e i f ( s t a t e == r e l o a d i n g )  25  reload time = curr time − reload start  26  state = retracting  27  e l s e i f ( s t a t e == r e t r a c t i n g )  28  do n o t h i n g  29  else  30 31  reach start = curr time  32  S u b t r a c t c u r r e n t t i m e from t h e g l o b a l v a r i a b l e d w e l l s t a r t  33  dwell time = curr time − dwell start state = reaching  34 35 36  else do n o t h i n g  When the node is called to make a decision on whether a collision is imminent or not, the following algorithm is triggered.  1 2 3 4 5 6  i f ( c a b l e > d w e l l t h r e s h o l d && s t a t e == r e a c h i n g ) c o l l i s i o n i s imminent e l s e i f ( c a b l e > d w e l l t h r e s h o l d && s t a t e == r e l o a d i n g ) c o l l i s i o n i s imminent else no imminent c o l l i s i o n  165  The same node, when called by the gesture launcher node, returns the times recorded for each of the key human states. The following pseudo code demonstrates how this node compares the recorded human task state times to a fixed maximum and minimum thresholds and returns dwell time and reach time to be used by the robot.  1  D e f i n e maximum reach time , reach time max t o be 0 . 6 seconds  2  D e f i n e minimum reach time , r e a c h t i m e m i n t o be 0 . 2 seconds  3  D e f i n e maximum d w e l l time , d w e l l t i m e m a x t o be 4 . 0 seconds  4  D e f i n e minimum d w e l l time , d w e l l t i m e m i n t o be 0 . 5 seconds  5 6 7 8 9  i f ( r e a c h t i m e > reach time max ) r e a c h t i m e = reach time max else i f ( reach time < reach time min ) reach time = reach time min ;  10 11  i f ( d w e l l t i m e > d w e l l t i m e m a x ) / / i f d w e l l t i m e i s somehow ridiculous , correct i t  12 13 14  dwell time = dwell time max else i f ( dwell time < dwell time min ) dwell time = dwell time min  15 16  r e t u r n d w e l l t i m e and r e a c h t i m e  166  ...  Appendix E  Human Perception of AHP-based Mechanism and its Impact on Performance Contents E.1  E.2  E.3  Video Observation of Jerkiness and Success from Robot Motions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168 E.1.1  Perceived Success of Robot Motions . . . . . . . . . 168  E.1.2  Perceived Jerkiness of Robot Motions . . . . . . . . 169  In Situ Perception of AHP-based Motions . . . . . . . . . . . 170 E.2.1  Usefulness . . . . . . . . . . . . . . . . . . . . . . 171  E.2.2  Emotional Satisfaction . . . . . . . . . . . . 
. . . . 172  Non-parametric Comparison of Performance Impact of the AHP -based Mechanism . . . . . . . . . . . . . . . . . . . . . 173 E.3.1  Counts of Mistakes . . . . . . . . . . . . . . . . . . 173  E.3.2  Counts of Collisions . . . . . . . . . . . . . . . . . 174  In this appendix, measures from Studies II and III that have not been discussed in Chapter 5 and Chapter 6 are presented. The scores of the two distractor questions from Study II are not discussed in Chapter 5, and are presented in Section E.1.  167  Human perception measurements from Study III that do not yield statistically significant finding are discussed in Section E.2. Details of the non-parametric analysis conducted on the collision and mistake measures are presented in Section E.3.  E.1 Video Observation of Jerkiness and Success from Robot Motions In Study II, presented in Chapter 5, human perception of robot motions were investigated via an online survey. Of the four questions, two questions, Q1 and Q4, were distractor questions: Q1 Did the robot successfully hit the target in the middle of the table? (1.Not successful - 5. Successful) Q4 Please rate your impression of the robot’s motion on the following scale: (1.Smooth - 5. Jerky) A repeated-measures ANOVA was conducted on all four questions. However, the results for these two questions do not test hypotheses H2.1 and H2.2. Nonetheless, they provide interesting insights into human perception of robotic collision avoidance motions in comparison to  AHP -based  motions. This section discusses these  results. Consistent with the results reported in Chapter 5, all sphericity violations in the Analysis of Variance (ANOVA) were corrected using Greenhouse-Geisser approach.  E.1.1 Perceived Success of Robot Motions The responses to the first distractor question, Q1 (success score), yield an expected result. Overall, successful motions received a significantly higher score (M=4.70, SE=.08) than all other motion types (F(1.63, 68.36) = 244.67, p < .0001). The success score did not change across the different acceleration values used to generate the motions (F(2, 84) = .52, p = .60). Post-hoc analysis with Bonferroni correction indicates that this score difference between the successful motions and the other motion types are all significant to p¡.001 level. Figure E.1 shows the distribution of scores for this question.  168  Figure E.1: Overview of the success score collected from a five-point Likert scale question in Study II.  E.1.2 Perceived Jerkiness of Robot Motions The results of a repeated-measures ANOVA indicate that the responses to the second distractor question (Q4) also show significant differences across the motion types (F(2.45, 102.82) = 11.33, p < .0001). Upon conducting a one-sample t-test of the jerkiness score against the neutral score, only the Robotic Avoidance motion types demonstrate an above-neutral score. All other motion types – Successful, Collision, and  AHP -based  Hesitation – showed jerkiness score below the neutral score,  indicating that these motions are perceived as smooth motions (p < .05 or better for all motion types). The perceived jerkiness of the motions did not significantly vary across the three levels of acceleration (F(2, 84) = 2.25, p = .14). Figure E.2 shows the distribution of jerkiness scores.  169  Figure E.2: Overview of the jerkiness score collected from a five-point Likert scale question in Study II.  
E.2 In Situ Perception of AHP-based Motions This section discusses human perception measurements collected from Study III. In Study III, presented in Chapter 6, two different survey instruments were combined to measure human perception of the 7-DOF WAM robot from an  HRST  ex-  periment. The experimental conditions included three different robot responses to occurrence of human-robot resource conflicts: in the Blind Condition, the robot did not respond to the conflict at all; in the Hesitation Condition, the robot used AHP -based  trajectories communicate its behaviour state of uncertainty to the sub-  ject while avoiding the imminent collision; in the Robotic Avoidance Condition, the robot abruptly stopped to avoid the imminent collision. Three human perception measurements collected from the study do not demon-  170  Figure E.3: Overview of perceived intelligence scores collected from fivepoint Likert scale questions in Study III. strate statistical sigificance, and are discussed in this section. These measures are usefulness, emotional satisfaction, and perceived intelligence. As demonstrated in Table 6.2, the perceived intelligence measure did not yield an acceptable level of internal reliability. Hence, rather than discussing the measurement scores that are not reliable, the measured perceived intelligence is presented in Figure E.3. The usefulness and emotional satisfaction scores were internally reliable (Cronbach’s alpha above 0.7 for both measures). Hence, they are discussed in the following sections.  E.2.1 Usefulness The Hesitation Condition shows the highest mean usefulness score compared to the other two conditions. However, the results from a repeated-measures  ANOVA  indicate that these score differences are not statistically significant (F(2, 44)=.37, p=.69). No significant score difference is found between the first and second encounters either. Nonetheless, the second encounter show a higher mean score than the first. A graphical overview of the usefulness scores are shown in Figure E.4  171  Figure E.4: Overview of usefulness scores collected from five-point Likert scale questions in Study III.  E.2.2 Emotional Satisfaction Similar to the usefulness measure, the second encounter of the Hesitation Condition, in particular, show the highest emotional satisfaction. However, the results from a repeated-measures  ANOVA  indicate that the scores are not signifi-  cantly different across Conditions (F(1.48, 32.52) = 2.68, p = .10) or Encounters (F(1, 22) = 1.89, p = .18). Emotional satisfaction scores for each conditions and encounters are presented in Figure E.5.  172  Figure E.5: Overview of emotional satisfaction scores collected from fivepoint Likert scale questions in Study III.  E.3 Non-parametric Comparison of Performance Impact of the AHP-based Mechanism In Study III, the number of collisions and mistakes occurred during the experiment are considered as secondary measures of human-robot performance. This section discusses the Chi-Square test conducted on the non-parametric measures. Section E.3.1 presents the mistakes measure, and Section E.3.2 discusses the collision measure.  E.3.1 Counts of Mistakes In order to compare the number of mistakes made in each Conditions and Encounters, the counts of mistakes are cross tabulated for Chi-Squared analysis. The cross tabulation is presented in Table E.1. Chi-Square test indicates that the counts of mistakes are not significantly different across the conditions (X 2 (6, N = 144) = 3.29, p = .77). 
Most subjects, as shown in Table E.1, did not make any mistakes resulting in similar non-parametric distribution of mistakes across the conditions. This implies that, due to the small effect, much larger number of subjects should be recruited to find significance in this measure.  173  Table E.1: Cross tabulation outlining the differences in the counts of mistakes by Condition as a factor. Condition Blind  Hesitation  Robotic Avoidance Total  Count Exp. Count % of Total Count Exp. Count % of Total Count Exp. Count % of Total Count Exp. Count % of Total  0 42 43.3 29.2% 45 43.3 31.3% 43 43.3 29.2% 130 130 90.3%  1 4 3.7 2.8% 3 3.7 2.1% 4 3.7 2.8% 11 11 7.6%  Mistakes 2 1 .7 .7% 0 .7 .0% 1 .7 .7% 2 2 1.4%  3 1 .3 .7% 0 .3 .0% 0 .3 .7% 1 1 .7%  Total 48 48.0 33.3% 48 48.0 33.3% 48 48.0 33.3% 144 144 100.0%  Table E.2: Chi-Square tests of counts of mistake differences by Condition.  Pearson Chi-Square Likelihood Ratio Linear-by-Linear Association Number of Valid Cases  Value 3.29 4.11 .52 144  DOF 6 6 1  Asymp. Sig. (2-sided) .77 .661 .47  E.3.2 Counts of Collisions This section presents the number of collisions made by the subjects during the main experiment of Study III. Presented in Table E.3 is a cross tabulation of the collision measure organized by Condition. Chi-Square test results (see Table E.4) demonstrate that there is a significant difference in the collision scores (X 2 (1, N = 144) = 75.8, p < .001). This difference is between the Blind Condition and the non-collision conditions (Hesitation and Robotic Avoidance Conditions). This is a trivial result, considering that the robot motions in Hesitation and Robotic Avoidance Conditions were designed to avoid collisions.  174  Table E.3: Cross tabulation outlining the differences in the counts of collisions by Condition as a factor. Condition Blind  Hesitation  Robotic Avoidance Total  Count Exp. Count % of Total Count Exp. Count % of Total Count Exp. Count % of Total Count Exp. Count % of Total  0 18 38 12.5% 48 38.0 33.3% 48 38.0 33.3% 114 114 79.2%  Mistakes 1 2 16 10 5.3 3.3 11.1% 6.9% 0 0 5.3 3.3 .0% .0% 0 0 5.3 3.3 .0% .0% 16 10 16 10 11.1% 6.9%  3 3 1.0 2.1% 0 1.0 .0% 0 1.0 .0% 3 3 2.1%  6 1 0.3 .7% 0 .3 .0% 0 .3 .0% 1 1 .7%  Total 48 48.0 33.3% 48 48.0 33.3% 48 48.0 33.3% 144 144 100.0%  Table E.4: Chi-Square tests of counts of collisions differences by Condition.  Pearson Chi-Square Likelihood Ratio Linear-by-Linear Association Number of Valid Cases  Value 75.79 83.87 38.38 144  175  DOF 8 8 1  Asymp. Sig. (2-sided) .00 .00 .47  
