Data-driven design of expressive robot hands and hand gestures : applications for collaborative human-robot… Sheikholeslami, Sara 2017

Full Text

Data-driven Design of Expressive Robot Hands and Hand Gestures:
Applications for Collaborative Human-Robot Interaction

by

Sara Sheikholeslami

B.A.Sc., The University of British Columbia, 2014

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in The Faculty of Graduate and Postdoctoral Studies (Mechanical Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

April 2017

© Sara Sheikholeslami 2017

Abstract

Fast and reliable communication between human workers and robotic assistants (RAs) is essential for successful collaboration between these agents. This is especially true for typically noisy manufacturing environments that render verbal communication less effective. This thesis investigates the efficacy of the nonverbal communication capabilities of robotic manipulators that have poseable, three-fingered end-effectors (hands). This work explores the extent to which different poses of a typical robotic gripper can effectively communicate instructional messages during human-robot collaboration. Within the context of a collaborative car door assembly task, a series of three studies were conducted. Study 1 empirically explored the types of hand configurations that humans use to nonverbally instruct another person (N=17). Based on the findings from Study 1, Study 2 examined how well human gestures with frequently used hand configurations were understood by recipients of the message (N=140). Finally, Study 3 implemented the most human-recognized hand configurations on a 7-degree-of-freedom (DOF) robotic manipulator to investigate the efficacy of having human-inspired hand poses on a robotic hand compared to an unposed hand (N=100).

Contributions of this work include the presentation of a set of hand configurations humans commonly use to instruct another person in a collaborative assembly scenario, as well as Recognition Rate and Recognition Confidence measures for the gestures that humans and robots expressed using different hand configurations. These experimental results indicate that most gestures are better recognized, with a higher level of confidence, when displayed with a posed robot hand. Based on these results, guidelines and principles are provided for the mechanical design of robotic hands.

Preface

This thesis is submitted in partial fulfillment of the requirements for the degree of Master of Applied Science in Mechanical Engineering at the University of British Columbia (UBC).

The material presented in this thesis is available as two works: a conference paper and a journal paper. The author was responsible for performing the literature review, developing software, conducting the human studies and data analysis, and writing the manuscripts. The conference paper was presented by the author at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) in 2015. The journal paper has been recommended for publication in the International Journal of Robotics Research (IJRR) (impact factor 2.54, ranked #1 in Robotics).

Conference Publications

Sara Sheikholeslami, AJung Moon, and Elizabeth A. Croft. Exploring the effect of robot hand configurations in directional gestures for human-robot interaction. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '15), Hamburg, Germany, September–October 2015.

Journal Articles (accepted)

Sara Sheikholeslami, AJung Moon, and Elizabeth A. Croft. Cooperative gestures for industry: Exploring the efficacy of robot hand configurations in expression of instructional gestures for human-robot interaction.
International Journal of Robotics Research.

An outline of the three experiments presented in this thesis has been accepted as part of a book chapter, which was under its last round of revisions at the time this thesis was submitted:

Book Chapters (Pre-print)

Justin W. Hart, Sara Sheikholeslami, and Elizabeth A. Croft. Developing robot assistants with communicative cues for safe, fluent HRI. In J. Scholz, H. Abbass, and D. Reid, editors, Foundations of Trusted Autonomy. Springer, Berlin, Germany, 2016 (pre-print).

All human-participant experiments described in this thesis were approved by the University of British Columbia Behavioural Research Ethics Board (H10-00503).

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
Glossary
Acknowledgements
1 Introduction
  1.1 Thesis Outline
2 Background and Motivating Literature
  2.1 Nonverbal Communication in HHI
  2.2 Nonverbal Communication in HRI
  2.3 A Review of Industrial Robotic Hands
3 Methodology
  3.1 Study 1 (Pilot): Identifying Human Hand Gestures Based on Observations from Human-Human Collaboration
  3.2 Study 2: Human Perceptions of Human Hand Gestures Based on Observations from Human-Human Collaboration
  3.3 Study 3: Human Perceptions of Robot Hand Gestures Based on Observations from Human-Robot Collaboration
4 Results and Discussion
  4.1 Study 1 (Pilot) Results: Identifying Human Hand Gestures
    4.1.1 Directional Gestures
    4.1.2 Orientation Gestures
    4.1.3 Manipulation Gestures
    4.1.4 Feedback Gestures
  4.2 Study 2 and Study 3 Results: Analysing Recognition Rate and Recognition Confidence of Human and Robot Hand Gestures
    4.2.1 Directional Gestures
      4.2.1.1 Directional Gesture: Up
      4.2.1.2 Directional Gesture: Down
      4.2.1.3 Directional Gesture: Left
      4.2.1.4 Directional Gesture: Right
      4.2.1.5 Discussion
    4.2.2 Orientation Gestures
      4.2.2.1 Orientation Gesture: < 45°
      4.2.2.2 Orientation Gesture: 90°
      4.2.2.3 Orientation Gesture: 180°
      4.2.2.4 Discussion
    4.2.3 Manipulation Gestures
      4.2.3.1 Manipulation Gesture: Install
      4.2.3.2 Manipulation Gesture: Remove
      4.2.3.3 Manipulation Gesture: PickUp
      4.2.3.4 Manipulation Gesture: Place
      4.2.3.5 Manipulation Gesture: Swap
      4.2.3.6 Discussion
    4.2.4 Feedback Gestures
      4.2.4.1 Feedback Gesture: Confirm
      4.2.4.2 Feedback Gesture: Stop
      4.2.4.3 Discussion
  4.3 Summary
5 Guidelines for the Design of Expressive Robotic Hands
  5.1 Regions and Features of a Robotic Hand
    5.1.1 Wrist
    5.1.2 Palm
    5.1.3 Fingers
    5.1.4 Pointer Finger
    5.1.5 Thumb
  5.2 Applications of Design Guidelines
    5.2.1 Barrett Hand
    5.2.2 Seed Robotics RH4D Ares Hand
    5.2.3 Seed Robotics RH7D Eros Hand
  5.3 Next Steps
6 Conclusions
  6.1 Contributions
  6.2 Limitations and Future Work
  6.3 Concluding Remarks
Bibliography
Appendices
A Study 1 Instructions
  A.1 Study1-Phase1 Instructions
  A.2 Study1-Phase2 Instructions
B Advertisements, Online Surveys, and Consent Forms
  B.1 Study 2 Advertisements, Online Surveys, and Consent Forms
  B.2 Study 3 Advertisements, Online Surveys, and Consent Forms
C Participants' confidence in recognizing human gestures compared to robot expressions of the same gestures

List of Tables

3.1 Directional Gestures, GD, and frequently observed accompanying hand poses found in Study 1.
3.2 Orientation Gestures, GO, and frequently observed accompanying hand poses found in Study 1.
3.3 Manipulation Gestures, GM, and frequently observed accompanying hand poses found in Study 1.
3.4 Feedback Gestures, GF, and frequently observed accompanying hand poses found in Study 1.
4.1 Measures of independent-samples t-tests on the Recognition Confidence from Study 2.
4.2 Measures of one-way (or Welch) ANOVA or independent-samples t-test on the Recognition Confidence from Study 3.
C.1 Measures of independent-samples t-tests on the measures of Recognition Confidence across the robot and human expressions of each hand configuration for all gestures.

List of Figures

2.1 Vacuum Gripper System FXC-FMC-SG
2.2 Schunk's 2-finger pneumatic parallel gripper MPG Series
2.3 Schunk's 2-finger hydraulic gripper HGN Series
2.4 Robotiq's 3-finger gripper
2.5 The Barrett Hand
3.1 Assembly task designed for human-participants experiment.
3.2 Directional Gestures, GD, and frequently observed accompanying hand poses found in Study 1.
3.3 Orientation Gestures, GO, and frequently observed accompanying hand poses found in Study 1.
3.4 Manipulation Gestures, GM, and frequently observed accompanying hand poses found in Study 1.
3.5 Feedback Gestures, GF, and frequently observed accompanying hand poses found in Study 1.
3.6 An example of one of the 14 pages of the Study 2 online survey.
3.7 Human hand poses observed in Study 1, and the corresponding human-inspired robot hand poses.
3.8 Robot Closed-Hand Pose.
3.9 An example of one of the 14 pages of the Study 3 online survey.
4.1 Most frequently observed hand poses for Directional Gestures (GD) for Study 1 (N = 17).
4.2 Most frequently observed hand poses for Orientation Gestures (GO) for Study 1 (N = 17).
4.3 Most frequently observed hand poses for Manipulation Gestures (GM) for Study 1 (N = 17).
4.4 Most frequently observed hand poses for Feedback Gestures (GF) for Study 1 (N = 17).
4.5 Human Recognition Rates for Directional Gestures (GD) for both Study 2 (Human) and Study 3 (Robot). The error bars indicate the margin of error for a 95% confidence interval.
4.6 Human Recognition Confidence for Directional Gestures (GD) for both Study 2 (Human) and Study 3 (Robot).
4.7 Common misinterpretations of Robot Directional Gestures, GD.
4.8 Human Recognition Rates for Orientation Gestures, GO, for both Study 2 (Human) and Study 3 (Robot). The error bars indicate the margin of error for a 95% confidence interval.
4.9 Human Recognition Confidence for Orientation Gestures (GO) for both Study 2 (Human) and Study 3 (Robot).
4.10 Human Recognition Rates for Manipulation Gestures, GM, for both Study 2 (Human) and Study 3 (Robot). The error bars indicate the margin of error for a 95% confidence interval.
4.11 Human Recognition Confidence for Manipulation Gestures, GM, for both Study 2 (Human) and Study 3 (Robot).
4.12 Common misinterpretations of Robot Manipulation Gestures, GM.
4.13 Human Recognition Rates for Feedback Gestures (GF) for both Study 2 (Human) and Study 3 (Robot). The error bars indicate the margin of error for a 95% confidence interval.
4.14 Human Recognition Confidence for Feedback Gestures (GF) for both Study 2 (Human) and Study 3 (Robot).
5.1 Significant regions for consideration when designing a robotic hand, including the wrist, palm, fingers, pointer finger, and thumb.
5.2 The Barrett Hand.
5.3 The Seed Robotics RH4D Ares hand.
5.4 The Seed Robotics RH7D Eros hand.
5.5 A selection of configurations/poses expressed by the Seed Robotics RH7D Eros hand.
A.1 Proper location and orientation of each of the six parts on the car door
A.2 Experimental setup for human-participants pilot experiment (Study1-Phase1).
A.3 Highlighted changes in the orientation or location of three of the six parts assembled on the vehicle door
A.4 Experimental setup for human-participants pilot experiment (Study1-Phase2).
B.1 Screen capture of the consent form used for the human-human interaction online surveys conducted in Study 2
B.2 Contents of the online advertisement used to recruit participants for Study 2
B.3 Contents of the paper advertisement used to recruit participants for Study 2
B.4 An example of one of the 14 pages of the Study 2 online survey.
B.5 Screen capture of the consent form used for the human-robot interaction online surveys conducted in Study 3
B.6 Contents of the online advertisement used to recruit participants for Study 3
B.7 Contents of the paper advertisement used to recruit participants for Study 3
B.8 An example of one of the 14 pages of the Study 3 online survey.
C.1 Measures of Recognition Confidence across the robot and human expressions of each hand configuration for Directional Gestures, GD.
C.2 Measures of Recognition Confidence across the robot and human expressions of each hand configuration for Orientational Gestures, GO.
C.3 Measures of Recognition Confidence across the robot and human expressions of each hand configuration for Manipulation Gestures, GM.
C.4 Measures of Recognition Confidence across the robot and human expressions of each hand configuration for Feedback Gestures, GF.

Glossary

CAGR  compound annual growth rate
DOF   degree(s) of freedom
GPS   global positioning system
HCI   human-computer interaction
HHI   human-human interaction
HRC   human-robot collaboration
HRI   human-robot interaction
RA    robotic assistant
WAM   Whole Arm Manipulator

Acknowledgements

This work has been possible thanks to support from General Motors.

The author thanks Dr. Ross Mead for spending countless hours reviewing and providing feedback on the thesis.

The author also thanks the founders of Seed Robotics—Marco Prata and Pedro Ramilo—for their open discussions on the design and development of the RH7D Eros robotic hand.

Chapter 1
Introduction

In the past thirty years, robotics technology has become well-established in the manufacturing industry for reducing worker ergonomic stress and workload by performing operations quickly, repetitively, and accurately [20, 25]. Robotics technology is approaching the point at which an industrial robotic assistant (RA)—such as Baxter from Rethink Robotics—is mechanically safe enough to be used outside of work cells, with minimal to no physical barriers between it and human workers [16]. Other promising avenues for RA hardware include lightweight robots designed specifically for safety [21] and existing industrial robot platforms augmented with improved sensing and control [17].
As we continue to integrate robots as versatile aids for indus-try automation, it is important to develop human-robot interaction (HRI)mechanisms that facilitate seamless cooperation and intuitive communica-tion between humans and robots. The global robotics market is anticipatedto reach $67-billion (USD) by 2025, with industrial robotics representing thelargest segment of the market and growing at a compound annual growthrate (CAGR) of 7.6% [36].As robots become more adaptive and capable working alongside humanco-workers, it is imperative that intuitive HRI methods are designed to fa-cilitate direct and physical human-robot collaboration (HRC) [38]. Suchcollaboration would benefit productivity by effectively combining the capa-bilities of each partner: the intelligence, experience, and responsiveness ofhuman co-workers, and the accuracy, repeatability, and speed of RAs [22].Proximate HRI could be used extensively in manufacturing for tasks suchas assembly, inspection, box packing, and part delivery, among others. Insuch scenarios, while there is ongoing direct interaction between human androbot co-workers, it is important to allow human co-workers to focus theirattention on completing the task at hand, rather than controlling the robotutilizing complex teaching pendants or other keyboard based interfaces.1Chapter 1. IntroductionHowever, the manufacturing assembly line environment has inherent re-strictions and limitations that make implementation of human-robot collabo-ration systems challenging. Ambient acoustic noise is one such factor. Whilespeech control has come to be useful for devices like the Amazon Echo andspeech interfaces to instruments such as global positioning system (GPS),the denoising of speech presents significant signal processing challenges [10].Further, in manufacturing environments, workers are often encouraged orrequired to use earplugs; for those workers, spoken verbal communicationis unreliable and in some cases prohibited [7]. As an alternative, workersoften use hand gestures to communicate, motivating our investigation oftask-based gestural communication as a plausible HRI medium in industrialsettings.Much of the related work on improving industrial HRI has focused oninteractions in which the human demonstrates or instructs a robot on howto perform a task [44]; however, natural and balanced HRI must be bilateral[12]—the robot must be able to react and respond to the given demonstra-tions and instructions by its human co-worker(s). Future industrial scenariosenvision an ongoing human-robot interaction in which not only the humaninstructs and communicates with the robot, but also the robot is capableof responding and communicating back with its human coworker(s). Thebroad goal of this work is to develop HRI methods that will facilitate moreintuitive and effective cooperation and collaboration between humans androbots on industrial tasks.Today, the predominant robotic form factor used in manufacturing isthat of single-arm robotic manipulators with no face or body. Since work-ers are required to pay attention to the task in front of them, they maybe more likely to attend to the robot’s gripper than to a face, as it is po-sitioned where they are already looking [28]. Therefore, to bridge the gapbetween current systems and future robot embodiments, this thesis focuseson the development and evaluation of communicative robot gestures on asingle-arm manipulator. 
Related work demonstrated human recognition ofgestures expressed by a single-arm RA [20]; while this related study wassuccessful in conveying information without articulation of the robot’s fin-gers, the usefulness of robot fingers in gesturing has yet to be systematicallyexplored.More specifically, this work develops and evaluates a cardinal set of user-generated gestures applicable to industrial scenarios in which the robot isproviding a set of instructions to a co-located person while collaboratingon a shared task in an intuitive and effective manner. Three studies wereperformed to explore the following interrelated research questions:21.1. Thesis Outline1. “What kind of hand gestures do humans commonly use tononverbally instruct one another in industrial assembly con-texts?” This research question explores the lexicon of gestures natu-rally generated and interpreted in human-human interactions situatedwithin a particular task context. Grounding these gestures within aparticular task is important, as the gestures might mean somethingelse in a different context (and, conversely, other gestures might meansomething else within this same context).2. “How well do humans recognize the hand gestures presentedby another human?” This research question establishes a baselinefor human interpretation of naturally occurring gestures within thetask context to which robot gestures (inspired by the human gestures)will be compared.3. “How well do untrained human observers recognize robot handgestures that are accompanied by human-inspired hand posescompared to those that are exhibited with an unposed robothand?” This research question investigates the novel generation ofhuman-inspired situated gestures on a non-anthropomorphic robotichand common in industrial settings, and the interpretation of thesegestures by human observers.Answers to these question will help in designing a fluent HRI with reliablesets of communicative gestures. This work extends the body of work innonverbal HRI, the key contributions of which are:• a methodology for designing and implementing task-based communica-tive gestures to be expressed by a robot in HRI;• a cardinal set of user-generated task-based communicative hand ges-tures and accompanying hand poses for human-robot collaborativetasks;• an evaluation and validation of the identified gesture set with respect tohuman Recognition Rate and Recognition Confidence within a human-robot collaboration scenario; and• a set of guidelines for the mechanical design of robot hands.1.1 Thesis OutlineThis section describes the organization and contents of chapters in this thesis.31.1. Thesis OutlineChapter 2 highlights relevant works from the field of psychology, HRI,and human-computer interaction (HCI). The chapter mainly focuses on stud-ies that discuss the significance of human nonverbal communication with anemphasis on hand gestures (Section 2.1), nonverbal human-robot communi-cation within various contexts (Section 2.2), and the hardware limitationsof robot hands available in the industry compared to that of a human hand(Section 2.3). There have been many research contributions addressing theusefulness of human hand gestures, and their implementation in HRC con-texts; however, the added value of having poseable fingers on a robot fornonverbal communication purposes has yet to be explored in a systematicmanner. 
We address this knowledge gap by exploring how effectively an ar-ticulated robot arm can communicate approximated human hand-gesturesto its human co-workers with and without hand poses.Chapter 3 explores each of the aforementioned research equations in threestudies. Section 3.1 presents the first of three human-subject studies, Study1, designed to empirically identify a sample of appropriate task-based humanhand gestures and hand poses used for expressing the gestures. In this study,participants are asked to perform a collaborative assembly task, nonverballycommunicating their intentions to a human confederate. The motions thatthey produce are analyzed to determine the gestures and hand poses theyused during the study answering the first research question, “What kind ofhand gestures do humans commonly use to nonverbally instruct one anotherin industrial assembly context?”Based on the findings from Study 1, Study 2 (Section 3.2) presents videosof the identified human gestures with the selected hand poses to participantswithin an analogous assembly context to analyze how well the gestures areperceived by human observers, answering the second research question, “Howwell do humans recognize the hand gestures presented by another human?”Section 3.3 of Chapter 3 presents the third human-subject experiment,Study 3, which empirically tests the efficacy of a mechanically limited robotichand in communicating the identified gestures in study 1 to human observers.This study utilizes a commonly used robotic manipulator to approximate thehuman gestures. Videos of the produced gestures are presented to partici-pants to identify which gestures and hand poses are best understood whenimplemented on the robotic system in this fashion. This study answers thethird and final research question, “How well do untrained human observersrecognize robot hand gestures that are accompanied by human-inspired handposes compared to those that are exhibited with unposed robot hand?”41.1. Thesis OutlineChapter 4 evaluates and analyzes the hypotheses that most gestures arebetter recognized with a higher level of confidence when displayed with ahuman-inspired posed robot hand than an unposed robot hand by examiningRecognition Rates (accuracy) and Recognition Confidence of human obser-vations of the implemented robot hand gestures. Section 4.1 presents theidentified human hand gestures and accompanying hand poses from Study1. Section 4.2 presents which human gestures and hand poses participantsrecognize more confidently and evaluates how well the robot implementationof the same hand gestures and hand poses performs.Chapter 5 expands upon the results found in this thesis to provide a setof guidelines for the mechanical design of individual regions and features ofrobotic hands. Section 5.1 discuses these region and feature considerations.Section 5.2 presents the application of these principles to the design of realrobot hands. Section 5.3 proposes further steps to formalize these guidelines.Chapter 6 summarizes the thesis work. Section 6.1 reviews the key con-tributions of this work. Limitations of the approaches employed are alsodiscussed and resolutions to these limitations proposed as future work (Sec-tion 6.2), followed by concluding remarks (Section 6.3).5Chapter 2Background and MotivatingLiteratureSection 2.1 highlights the significance of human nonverbal communicationwith an emphasis on hand gestures in human-human interactions (HHI). 
Sec-tion 2.2 provides an overview of the relevant work on the topic of human-likenonverbal communication in human-robot interactions (HRI) within variouscontexts, such as turn-taking, hesitation, and hand gestures. Section 2.3further explores the different types of robot hands available in the industry,discusses the hardware limitations of robot hands compared to that of a hu-man hand, and highlights the challenges the kinematic differences pose inrobot hand gestural communication.2.1 Nonverbal Communication in HHIIn HHI, people use both verbal and nonverbal communication to conveyinformation to one another. Different nonverbal signals—hand and armgestures, body movements, facial expressions, eye and head gaze, touch,etc.—function in three distinct ways: (1) they regulate social situations andcommunicate attitudes and emotions (e.g., anxiety, happiness, depression,etc.) to others, (2) they strengthen speech by providing additional informa-tion about the content of the speech, and (3) they replace spoken language toconvey meaning (e.g., sign language often used within communities of peoplewith hearing impairments) [1, 2, 19]. In summary, people use nonverbal sig-nals to convey their internal states and intentions to other people, and theyhave the ability to read and understand the internal states and intentions ofother people from these nonverbal signals [1, 13].In particular, human hand gestures are one of the most vital nonverbalchannels; while the hands were evolved for grasping, they are also very usefulin social signaling [19]. For instance, conversational gestures—hand move-ments that accompany and are often related to speech—tend to increasespeech fluency [34, 35]. Hand gestures can also be used alone (i.e., in the62.2. Nonverbal Communication in HRIabsence of speech) and deliver a clear communicative message. For example,symbolic gestures—hand configurations and movements that can be directlytranslated into words—are often used to send a particular message to others[5, 26]. Contextual information can influence/modify the meaning of sym-bolic gestures. For example, the “thumbs up” is a familiar symbolic gesturethat is often interpreted as “good/positive”; however, context informationcan influence or add to its meaning—it can be used to greet someone, to in-dicate understanding the point of a conversation, or as an insult [41]. Thus,even though nonverbal gestures can convey messages without an accompa-nied speech, their meanings are still influenced by context.Harrison [23] explored gestural communication among workers in a noisyproduction line of a salmon factory. This related work showed that workerscommonly use hand gestures to communicate with one another, and that theworkers often have to shout when speaking to be heard due to the high am-bient noise. In industrial environments with high ambient noise, it has beenshown that gestural communication is preferred and has been well adaptedin different industries to replace verbal communication [3]. This thesis workexplores the efficacy of gestural communication in human-robot teams.2.2 Nonverbal Communication in HRIJust as how nonverbal communication can replace speech in environmentsin which verbal communication is unreliable or undesirable, nonverbal com-munication is expected to take a similar role in HRI. 
Various human-likenonverbal cues—hand gestures in particular—have been explored as commu-nication mechanisms between humans and robots during turn-taking [8, 11],playing games [42], hesitation [30, 31], and hand gestures within collabora-tive working processes [15, 18, 20, 33, 37].Past research in human-robot turn-taking (e.g., selecting the role ofspeaker vs. listener) has shown that vocal communication when accom-panied with nonverbal cues, such as hand gestures and head nods, improvestask performance of human-robot teams by making the robot more under-standable and predictable to the human teammate [8, 11]. However, thefocus of these works is often on situations in which nonverbal communica-tion is used to support and strengthen speech. This thesis work considersconditions in which only nonverbal communication is applicable (i.e., vocalcommunication is not feasible).Other HRI studies have investigated robots using gestures to play gameswith people. For instance, Short et al. [42] performed an experiment involv-72.2. Nonverbal Communication in HRIing a robot playing the rock-paper-scissors game against a human partner;however, due to mechanical limitations of the robot hand, a set of modifiedrock-paper-scissors gestures were deployed for the robot to use. Thus, par-ticipants had to be trained before the game to understand the meaning ofeach robot hand pose. In contrast, this thesis work explores a set of gesturesthat allows a mechanically limited robotic hand to communicate informationto untrained human observers.Few studies have focused on the effectiveness of nonverbal communi-cation of non-anthropomorphic robotic manipulators in industrial settings[15, 18, 20, 30, 31]. In one such study, Moon et al. [30] studied the efficacyof robot hesitation gestures as a means to convey robot planner uncertaintyin a human-robot resource conflict that arises when both the robot and theperson reach for the same object at the same time; results of this relatedwork showed that human observers can easily recognize the hesitation ges-tures expressed by the robotic manipulator. This result demonstrates thatnon-anthropomorphic robotic manipulators have the potential to effectivelyconvey communicative messages as well.In the context of collaborative work processes, Sauppé and Mutlu [37]evaluated the communicative effectiveness of a set of referencing (deictic)gestures performed by a human-like robot in six diverse settings, includingone scenario replicating the noisy environment of industrial settings; thisrelated work discusses design implications for the use of gestures in differentsettings. Ende et al. [15] explored which human-like nonverbal gestures arecommunicative for robotic systems of different levels of anthropomorphism;this related work found that referencing gestures—conveying “this one” and“from here to there”—and terminating gestures—conveying “stop” or “no”—are well recognized on various types of robots. 
In a study by Haddadi et al.[20], a set of gestures from human dyads (pairs) performing an assembly taskwas collected and implemented on a robotic arm with an unarticulated (i.e.,not actuated) stuffed glove at the robot end-effector to provide anthropo-morphic context; this related work found that, upon evaluating the humanrecognition of the robotic gestures, the robot’s lack of hand pose articulationtends to confuse rather than help human observers.While many of the aforementioned studies extracted useful hand gesturesfrom HHI and implemented them in HRI contexts, to date, the added valueof having poseable fingers on a robot for nonverbal communication purposeshas yet to be explored in a systematic manner. This thesis work addressesthe knowledge gap by exploring how effectively an articulated robot arm cancommunicate approximations of human hand-gestures to human coworkerswith and without articulated hand poses.82.3. A Review of Industrial Robotic Hands2.3 A Review of Industrial Robotic HandsIndustrial applications are dominated by single-arm robotic manipulatorsequipped with different end-effectors [20]. In a review paper, Tai et al. [43]presented a recent survey on the applications and advancements of industrialrobotic grippers. Industrial robotic grippers are commonly used for massproduction purposes and are mounted on a robotic arm on a stationaryplatform. Depending on the application, modern industrial robotic arms andgrippers can outperform humans in many tasks and are capable of lifting 1000kg [32], are repeatable to 10µm [29], and are faster with accelerations up to15 g [27]. Additionally, the cost of industrial robotic grippers is decreasingwhile manual labor costs are increasing. This has encouraged industry andacademia to develop more advanced robotic arms and grippers addressingthe needs of industry.An industrial robotic gripper can often be categorized into one of fourbroad categories: vacuum grippers, pneumatic grippers, hydraulic grippers,and servo-electric grippers [6]. Each category is described below.1. Vacuum grippers have a high level of flexibility and have been thestandard gripper used in manufacturing. This type of robot gripper isequipped with a rubber or suction cup to manipulate items. An exam-ple of a vacuum gripper is the Schmalz Vacuum Gripper (FXC/FMC-SG) gripper developed by Millsom Vacuum Handling1 for flexible han-dling of non-rigid workpieces, such as cardboard boxes (Figure 2.1).2. Pneumatic grippers have a compact and lightweight design. Thesegrippers can easily be incorporated into tight spaces, which can behelpful in the manufacturing industry. Schunk’s Pneumatic parallelgrippers2 are commonly used for safe and precise handling of small- tomedium-sized workpieces. Figure 2.2 highlights an example of Schunk’sMPG Series 2-finger pneumatic gripper.3. Hydraulic grippers are most often used in applications that requiresignificant amounts of force and, thus, require specialized equipmentthat has a hydraulic power source for actuation. Figure 2.3 illustratesan example of a hydraulic gripper by Schunk.4. Servo-electric grippers are highly flexible and allow for different mate-rial tolerances when handling parts. As such, these grippers are start-1http://www.millsom.com.au/2http://us.schunk.com/usen/homepage/92.3. A Review of Industrial Robotic Handsing to appear more in industry. 
Baxter’s 1D gripper [16] and Robo-tiq’s 3-finger gripper3 are examples of common servo-electric grippers.Figure 2.4 shows the Robotiq’s 3-finger gripper which is capable ofmanipulating a variety of object shapes and sizes.Figure 2.1: Vacuum Gripper System FXC-FMC-SG(http://www.millsom.com.au/products/vacuum-components/vacuum-gripping-systems/fxc-fmc-sg).In addition, various tools can be directly mounted on the tip of themanipulator (e.g., welding tips or suction cups). These commonly used end-effectors are highly non-anthropomorphic and are often limited in actuation.This makes it challenging, if not impossible, to map onto these end-effectorsthe communicative human hand configurations/poses (the articulation orpose of the fingers, such as in finger-crossing) for expressing a gesture.Robotic hands that more closely resemble human kinematics are ableto produce better approximations of human gestures (e.g., the GCUA Hu-manoid Robotic Hand [9]); however, such hands are likely to be much moreexpensive and less useful in industrial manufacturing.Therefore, to maximize the applicability of our results, this thesis re-search uses a commonly available non-anthropomorphic robotic hand thatbalances capabilities between physical manipulation and social expressive-ness. Approximations of human instructional hand gestures are programmedon a 7-DOF Barrett Whole Arm Manipulator (WAM)4 equipped with a3http://robotiq.com/4WAMTM , Barrett Technologies, Cambridge, MA, USA102.3. A Review of Industrial Robotic HandsFigure 2.2: Shunk’s 2-finger pneumatic parallel gripper MPG Series(http://us.schunk.com/usen/gripping − systems/series/mpg − plus).Figure 2.3: Shunk’s 2-finger hydraulic gripper HGN Series(http://www.directindustry.com/prod/schunk/product-69812-1283431.html).112.3. A Review of Industrial Robotic HandsFigure 2.4: Robotiq’s 3-finger gripper (http://robotiq.com/products/).three-fingered Barrett Hand5. The Barrett Hand is similar to (but notactually used as) morphologies of robotic hands common in industry, seeFigure 2.5. This work focuses on evaluating (1) if approximations of hu-man instructional hand gestures can be successfully generated on these non-anthropomorphic robotic hands, and (2) the efficacy of these robotic handsin communicating the gestures to human partners.5Barrett HandTM , Barrett Technologies, Cambridge, MA, USA.122.3. A Review of Industrial Robotic HandsFigure 2.5: The Barrett Hand(http://www.barrett.com/products-hand.htm).This chapter discussed nonverbal communication in both human-humanand human-robot interaction, and introduced categories of robotic gripperscommonly used in industry. Informed by this background literature, thenext chapter presents three studies designed to explore each of the researchquestions introduced in Chapter 1:1. “What kind of hand gestures do humans commonly use to nonverballyinstruct one another in industrial assembly context?”2. “How well do humans recognize the hand gestures presented by anotherhuman?”3. “How well do untrained human observers recognize robot hand gesturesthat are accompanied by human-inspired hand poses compared to thosethat are exhibited with an unposed robot hand?”13Chapter 3MethodologyThis chapter presents the three user studies conducted to address the threeinterrelated research questions introduced in Chapter 1. 
In Study 1 (a pilot study; Section 3.1), to identify appropriate task-based hand gestures and hand poses used for expressing the gestures, participants were asked to use single-handed gestures to instruct a human confederate in a collaborative car door assembly task. In Study 2 (Section 3.2), videos of the identified gestures with the selected hand poses were presented to participants within an analogous assembly context to analyze how well the gestures are perceived by human observers. In Study 3 (Section 3.3), approximations of these gestures were implemented on a robotic manipulator, and videos of the produced gestures were presented to participants to identify which gestures and hand poses were best understood when implemented on the robotic system. All studies were approved by the UBC Behavioural Research Ethics Board.

3.1 Study 1 (Pilot): Identifying Human Hand Gestures Based on Observations from Human-Human Collaboration

To identify a sample lexicon of robot gestures that are both natural and effective in communicating instructions to human partners, a pilot study was conducted involving human dyads collaborating on a vehicle door assembly task. The goal was to generate a sample of gestures that would be appropriate and naturally occurring in the application domain, accepting that this would not generate an exhaustive exploration of the space or cover the cultural, regional, or other variations in gestures.

The experimental setup consisted of six car door parts and an unassembled car door with seven spots to which the door parts could be attached using Velcro™ strips. The participant stood in front of the car door and the worker stood to the right of the car door, with the car door parts placed on a table between them. This setup allowed the participant and the confederate to easily access the vehicle door as well as the parts (Figure 3.1).

Figure 3.1: Assembly task designed for human-participants experiment.

First, participants were asked to use hand gestures to instruct a human confederate—referred to henceforth as the "worker"—to assemble the parts into specific locations on the car door according to a provided picture of the completed assembly.

Next, a second picture of the vehicle door was given to the participants. The picture contained changes in the orientation/location of three of the six items already assembled on the door. The participants were asked to direct the experimenter to rearrange the items on the door to achieve the new assembly arrangement (see Appendix A for instructions and the car door pictures provided to participants).

To provoke a wider range of intuitive gestures in each round of the experiment, the worker would intentionally and as naturally as possible:

1. assemble/place the part at an incorrect location or orientation;
2. pick up an incorrect part from the table; and
3. maintain natural eye contact with the participant to get him/her to either confirm or correct the ongoing task.

(See Appendix A, Section A.1 for the instructions provided to the worker.)

Participants were requested to observe the following rules (the implications of imposing these restrictions are discussed in Section 6.2):

1. not to speak/verbally communicate with the worker;
2. only use one hand to direct the worker;
3. only make one gesture and hold only one part at a time;
4. wait for worker task completion before making the next gesture; and
5. remain at the home position at all times.
In total, 17 participants (N = 17; 7 female, 10 male) between 19 and 36 years of age (M = 24.41, SD = 4.05) participated in the study; all but two were right-handed. The results dataset came from video recordings of the study and the observed hand gestures that participants naturally used to convey common instructional commands to their partner. In executing the assembly task, participants expressed an average of 20 gestures in total, which was reduced to a lexicon of 14 gestures based on the following criteria: gestures must be (1) understandable without trained knowledge of the gesture, (2) critical to task completion, and (3) commonly used among all participants. The selected gestures were classified into four categories based on the nature of the gestures:

1. Directional Gestures, GD, indicating that the worker should move (translate) a part in the specified direction, where GD = {Up, Down, Left, Right}.

2. Orientation Gestures, GO, indicating that the worker should rotate (orient) a part the specified number of degrees, where GO = {< 45°, 90°, 180°}.

3. Manipulation Gestures, GM, indicating that the worker should apply the specified operation to a part or parts, where GM = {Install, Remove, PickUp, Place, Swap}.

4. Feedback Gestures, GF, indicating approval or disapproval of worker action, where GF = {Confirm, Stop}.

All of the selected gestures—except for the Confirm gesture—involved some sort of movement of the wrist/forearm. Many different hand configurations were observed for expressing each gesture. The two most commonly observed human-generated hand poses/configurations were selected for each gesture. For instance, the "Move Part Up" gesture was most commonly expressed using (1) an Open-Hand (OH) configuration, and (2) a Finger-Pointing (FP) configuration (Figure 3.2). Section 4.1 provides the percentage of participants that used each of the two most commonly observed hand poses for expressing each of the four types of identified gestures. The selected hand gestures and their corresponding hand poses are depicted in Table 3.1 and Figure 3.2 for Directional Gestures, Table 3.2 and Figure 3.3 for Orientation Gestures, Table 3.3 and Figure 3.4 for Manipulation Gestures, and Table 3.4 and Figure 3.5 for Feedback Gestures.

Table 3.1: Directional Gestures, GD, and frequently observed accompanying hand poses found in Study 1. Directional Gestures indicate that the worker should move (translate) a part in the specified direction.

  Gesture (g ∈ GD)   Hand poses   Figures
  Up                 OH, FP       3.2a, 3.2b
  Down               OH, FP       3.2a, 3.2b
  Left               OH, FP       3.2c, 3.2d
  Right              OH, FP       3.2c, 3.2d

  Hand poses: OH = Open-Hand; FP = Finger-Pointing.

Table 3.2: Orientation Gestures, GO, and frequently observed accompanying hand poses found in Study 1. Orientation Gestures indicate that the worker should rotate (orient) a part the specified number of degrees.

  Gesture (g ∈ GO)   Hand poses   Figures
  < 45°              HOH          3.3a
  90°                HOH, FP      3.3a, 3.3b
  180°               HOH, FP      3.3a, 3.3b

  Hand poses: HOH = Half Open-Hand; FP = Finger-Pointing.
Table 3.3: Manipulation Gestures, GM, and frequently observed accompanying hand poses found in Study 1. Manipulation Gestures indicate that the worker should apply the specified operation to a part or parts.

  Gesture (g ∈ GM)   Hand poses   Figures
  Install            OH, FP       3.4a, 3.4b
  Remove             OH, HOH      3.4c, 3.4d
  PickUp             OH           3.4e
  Place              FP           3.4f
  Swap               FP, VS       3.4g, 3.4h

  Hand poses: OH = Open-Hand; HOH = Half Open-Hand; FP = Finger-Pointing; VS = V-Sign.

Table 3.4: Feedback Gestures, GF, and frequently observed accompanying hand poses found in Study 1. Feedback Gestures indicate approval or disapproval of a worker's action.

  Gesture (g ∈ GF)   Hand poses   Figures
  Confirm            TU           3.5a
  Stop               OH, FP       3.5b, 3.5c

  Hand poses: TU = Thumbs-Up; OH = Open-Hand; FP = Finger-Pointing.

Figure 3.2: Directional Gestures, GD, and frequently observed accompanying hand poses found in Study 1: Up [and Down] gesture with (a) an Open-Hand pose, and (b) a Finger-Pointing pose; and Left [and Right] gesture with (c) an Open-Hand pose, and (d) a Finger-Pointing pose.

Figure 3.3: Orientation Gestures, GO, and frequently observed accompanying hand poses found in Study 1: 90° [and 180° and < 45°] gesture with (a) a Half Open-Hand pose, and (b) a Finger-Pointing pose. Note: the HOH pose was the only frequently observed pose for the < 45° gesture.

Figure 3.4: Manipulation Gestures, GM, and frequently observed accompanying hand poses found in Study 1: Install gesture with (a) an Open-Hand pose, and (b) a Finger-Pointing pose; Remove gesture with (c) an Open-Hand pose, and (d) a Half Open-Hand pose; PickUp gesture with (e) an Open-Hand pose; Place gesture with (f) a Finger-Pointing pose; and Swap gesture with (g) a Finger-Pointing pose, and (h) a V-Sign pose.

Figure 3.5: Feedback Gestures, GF, and frequently observed accompanying hand poses found in Study 1: Confirm gesture with (a) a Thumbs-Up pose; and Stop gesture with (b) an Open-Hand pose, and (c) a Finger-Pointing pose.

3.2 Study 2: Human Perceptions of Human Hand Gestures Based on Observations from Human-Human Collaboration

The results of Study 1 yielded a collection of gestures that are intuitive and commonly used in a human-human collaboration scenario. Study 2 involved a video-based online survey to analyze how well human observers understand these gestures when conveyed with the different hand configurations. The survey consists of a randomly ordered set of video clips, each of a person (referred to as the "director") exhibiting one of the identified gestures with a selected hand configuration to direct a "worker" in an assembly task analogous to Study 1.
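The lexicon in Tables 3.1 to 3.4 and the survey structure described above lend themselves to a compact programmatic summary. The following Python sketch is illustrative only: the names (GESTURE_LEXICON, survey_assignment) and the data layout are assumptions rather than artifacts of the thesis. It simply encodes the 14 gestures with their most frequently observed hand poses, and one way a participant's randomized, between-participants trial list could be drawn.

```python
import random

# Illustrative encoding of the Study 1 lexicon (Tables 3.1-3.4).
# Pose abbreviations: OH = Open-Hand, FP = Finger-Pointing,
# HOH = Half Open-Hand, VS = V-Sign, TU = Thumbs-Up.
GESTURE_LEXICON = {
    "Directional":  {"Up": ["OH", "FP"], "Down": ["OH", "FP"],
                     "Left": ["OH", "FP"], "Right": ["OH", "FP"]},
    "Orientation":  {"<45deg": ["HOH"], "90deg": ["HOH", "FP"],
                     "180deg": ["HOH", "FP"]},
    "Manipulation": {"Install": ["OH", "FP"], "Remove": ["OH", "HOH"],
                     "PickUp": ["OH"], "Place": ["FP"], "Swap": ["FP", "VS"]},
    "Feedback":     {"Confirm": ["TU"], "Stop": ["OH", "FP"]},
}

def survey_assignment(seed=None):
    """Draw one participant's survey: all 14 gestures in random order,
    each paired with one randomly chosen hand pose (between-participants)."""
    rng = random.Random(seed)
    trials = [(gesture, rng.choice(poses))
              for category in GESTURE_LEXICON.values()
              for gesture, poses in category.items()]
    rng.shuffle(trials)
    return trials

if __name__ == "__main__":
    for gesture, pose in survey_assignment(seed=1):
        print(f"show video: {gesture} gesture with {pose} hand pose")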
To avoid eliciting unintentional biases associated with other bodily gestures (e.g., differences in posture), only the gesturing hand and arm were shown in the videos.

In this between-participants study, each participant saw all of the 14 collected gestures; however, each gesture was shown with only one of the two identified hand configurations for that gesture.

Each video clip (one video clip per gesture) was accompanied by a short lead-in sentence instructing the respondents to watch the video with special attention to the hand motions of the "director". After each video clip, the participants were instructed to answer the following questions (open-ended gesture identification was used to avoid leading answers; however, it might have resulted in some difficulty in assessing recognition, as some interpretation of the answers given was required):

1. "What do you think the 'worker' should do with the part?" (participants were asked to respond "I don't know" if they did not understand a gesture);

2. "How easy was it for you to understand the meaning of this gesture?" (on a semantic-differential scale from 1 (very difficult) to 7 (very easy)); and

3. "How certain are you of your answer to question 1?" (on a semantic-differential scale from 1 (very uncertain) to 7 (very certain)).

Figure 3.6 shows a screen capture from the online survey with one of the video clips. Appendix B, Section B.1 shows the consent form used for this online study.

Figure 3.6: An example of one of the 14 pages of the Study 2 online survey. All pages of the survey contained the same questions in the same order; however, the video content of each page was randomly selected. Each video clip contained one of the selected gestures. In this study, each participant saw all of the 14 collected gestures; however, each gesture was shown with only one of the two identified hand configurations for that gesture.

The semantic scale used for answering the three questions above was treated as a continuous scale, since each interval of the scale was of equal proportion; therefore, the data collected from the second and third questions were treated as continuous measures.

Answers to the first question indicated whether participants understood the gesture correctly (i.e., Recognition Rate). Answers to the second and third questions had a high level of internal consistency and were combined as a confidence measure (i.e., Recognition Confidence) for responses to the first question (Cronbach's α = 0.882).

Recruitment of survey respondents involved two social media platforms (Twitter and Facebook) and distribution of advertisements to university students (Appendix B, Section B.1). Survey respondents received no compensation. In total, N = 120 participants responded to the survey. Two coders analyzed participant responses with partial overlap, and had a high level of internal consistency (Cronbach's α = 0.905).

Analyses and results for this study can be found in Section 4.2.
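To make the two measures concrete, the sketch below shows one way Recognition Rate, Recognition Confidence, and the Cronbach's alpha check could be computed from coded responses. It is a minimal illustration under assumed data structures; the function names and the choice to average the two 7-point items into a single confidence score are assumptions, not the thesis's actual analysis code.

```python
import statistics

def recognition_rate(coded_correct):
    """Fraction of open-ended answers that the coders marked as matching
    the intended gesture (a list of booleans, one per respondent)."""
    return sum(coded_correct) / len(coded_correct)

def recognition_confidence(ease, certainty):
    """One way to combine the two 7-point items (ease of understanding,
    certainty of answer) into a per-response confidence score."""
    return [(e + c) / 2.0 for e, c in zip(ease, certainty)]

def cronbach_alpha(items):
    """Cronbach's alpha for k item-score lists (here k = 2):
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(items)
    item_vars = sum(statistics.variance(item) for item in items)
    total_var = statistics.variance([sum(scores) for scores in zip(*items)])
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical responses of five participants to one gesture video.
ease      = [6, 7, 5, 6, 7]
certainty = [6, 6, 5, 7, 7]
correct   = [True, True, False, True, True]

print(recognition_rate(correct))                 # e.g., 0.8
print(recognition_confidence(ease, certainty))   # per-response confidence
print(cronbach_alpha([ease, certainty]))         # internal consistency check
```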
3.3 Study 3: Human Perceptions of Robot Hand Gestures Based on Observations from Human-Robot Collaboration

Study 3 investigated whether robot hand gestures accompanied by human-inspired hand configurations are better recognized by untrained observers than the same gestures expressed with an unarticulated robot hand. Approximations of the gestures identified in Study 1 were programmed on a 7-DOF Barrett Whole Arm Manipulator (WAM; Barrett Technologies, Cambridge, MA, USA) equipped with a three-fingered Barrett Hand (Barrett Technologies, Cambridge, MA, USA). Each gesture was video recorded three times: once with each of the two human-inspired hand configurations (Figure 3.7), and once while the robot kept its hand closed (Figure 3.8); this latter Closed-Hand (CH) configuration served as a baseline.

Although it has more degrees of freedom than most industrial robot grippers, the Barrett Hand is still relatively non-anthropomorphic in shape and pose. To produce recognizable gestures, an iterative design approach was employed, similar to [20]. The robot arm was manually moved to imitate each human gesture, and the resulting motion trajectories were recorded. Next, the trajectories were played back and tuned until the gestures were visually similar to the human gestures. A small pilot study (N = 4; two naive participants and two expert participants) was conducted to get feedback on the produced gestures, and this feedback was applied to improve the implementation of the gestures.

A video-based online survey was conducted consisting of randomly ordered videos of the robotic arm exhibiting one of the identified gestures with one of the three robot hand configurations (two human-inspired hand configurations, and one CH configuration) to direct a worker in an assembly task analogous to Study 1 (Section 3.2). The questions used for this survey were the same as in Study 2. Figure 3.9 shows a screen capture from the online survey with the robot arm exhibiting one of the identified gestures. Appendix B, Section B.2 shows the consent form used for this online study.

Recruitment of survey respondents involved two social media platforms (Twitter and Facebook) and distribution of advertisements to university students (Appendix B, Section B.2). Survey respondents received no compensation. A total of N = 100 participants responded to the survey. Two coders analyzed participant responses with partial overlap, and had a high level of internal consistency (Cronbach's α = 0.877).

Analyses and results for this study can be found in Section 4.2.

Figure 3.7: Human hand poses observed in Study 1, and the corresponding human-inspired robot hand poses: (a) Open-Hand, (b) Finger-Pointing, (c) Half Open-Hand, (d) V-Sign, and (e) Thumbs-Up. Due to the limited morphology of the hand, (e) was considered the best implementation of the Thumbs-Up hand pose despite its unfortunate resemblance to an insulting gesture; the hand has only three fingers and lacks a poseable thumb, so sticking out one of the side fingers could be mistaken for a Finger-Pointing hand pose.

Figure 3.8: Robot Closed-Hand Pose. The Closed-Hand Pose served as our baseline for analysing participant Recognition Rates of robot gestures.

Figure 3.9: An example of one of the 14 pages of the Study 3 online survey. All pages of the survey contained the same questions in the same order; however, the video content of each page was randomly selected. Each video clip contained one of the selected gestures. In this study, each participant saw all of the 14 collected gestures; however, each gesture was shown with only one of the two identified hand configurations for that gesture.
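The record-and-playback procedure used to produce the robot gestures (manually guiding the arm while recording the joint trajectory, then replaying and tuning it) might be outlined as in the sketch below. This is not the Barrett WAM software used in the thesis: the robot object and its read_joint_positions and command_joint_positions methods are hypothetical placeholders for whatever driver interface the arm exposes.

```python
import json
import time

def record_gesture(robot, duration_s=5.0, rate_hz=100.0):
    """Sample joint positions while a demonstrator manually guides the arm
    (a gravity-compensated, back-drivable mode is assumed)."""
    trajectory = []
    period = 1.0 / rate_hz
    end_time = time.time() + duration_s
    while time.time() < end_time:
        trajectory.append(robot.read_joint_positions())  # hypothetical call
        time.sleep(period)
    return trajectory

def play_gesture(robot, trajectory, rate_hz=100.0):
    """Replay a recorded gesture so it can be compared against the human
    demonstration and hand-tuned until it looks visually similar."""
    period = 1.0 / rate_hz
    for joint_positions in trajectory:
        robot.command_joint_positions(joint_positions)   # hypothetical call
        time.sleep(period)

def save_gesture(trajectory, path):
    """Store a tuned trajectory so the same arm motion can be replayed for
    each of the three hand configurations when recording the videos."""
    with open(path, "w") as f:
        json.dump(trajectory, f)
```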
A video-based online survey was conducted, consisting of randomly ordered videos of the robotic arm exhibiting one of the identified gestures with one of the three robot hand configurations (two human-inspired hand configurations and the CH configuration) to direct a worker in an assembly task analogous to Study 1 (Section 3.2). The questions used for this survey were the same as in Study 2. Figure 3.9 shows a screen capture from the online survey with the robot arm exhibiting one of the identified gestures. Appendix B, Section B.2, shows the consent form used for this online study.

Recruitment of survey respondents involved two social media platforms (Twitter and Facebook) and the distribution of advertisements to university students (Appendix B, Section B.2). Survey respondents received no compensation. A total of N = 100 participants responded to the survey. Two coders analyzed participant responses with partial overlap and showed a high level of inter-coder consistency (Cronbach's α = 0.877).

Analyses and results for this study can be found in Section 4.2.

Figure 3.7: Human hand poses observed in Study 1, and the corresponding human-inspired robot hand poses: (a) Open-Hand, (b) Finger-Pointing, (c) Half Open-Hand, (d) V-Sign, and (e) Thumbs-Up. Due to the limited morphology of the hand, (e) was considered the best implementation of the Thumbs-Up hand pose despite its unfortunate resemblance to an insulting gesture; the hand has only three fingers and lacks a poseable thumb, so sticking out one of the side fingers could be mistaken for a Finger-Pointing hand pose.

Figure 3.8: Robot Closed-Hand pose. The Closed-Hand pose served as the baseline for analysing participant Recognition Rates of robot gestures.

Figure 3.9: An example of one of the 14 pages of the Study 3 online survey. All pages of the survey contained the same questions in the same order; however, the video content of each page was randomly selected. Each video clip contained one of the selected gestures. In this study, each participant saw all of the 14 collected gestures; however, each gesture was shown with only one of the three robot hand configurations for that gesture.

Chapter 4
Results and Discussion

The aim of this work was to analyze whether people understand a set of collected gestures correctly (Recognition Rate), and how confident they are in understanding the meaning of the gestures (Recognition Confidence). The hypothesis for this work was that most gestures have a higher Recognition Rate and a higher Recognition Confidence when displayed with a human-inspired posed robot hand than with an unposed robot hand. This chapter evaluates this hypothesis by examining Recognition Rates and Recognition Confidence of human observations of the implemented robot hand gestures. The identified human hand gestures and accompanying hand poses from Study 1 are presented in Section 4.1. In Section 4.2, the measures of Recognition Confidence and Recognition Rate from Study 2 and Study 3 are used together to evaluate how well the robot implementation of the same hand gestures and hand configurations performs with respect to human-human gesture communication; a summary of findings follows in Section 4.3.

4.1 Study 1 (Pilot) Results: Identifying Human Hand Gestures

From the observations of human interactions in Study 1 (Section 3.1), a lexicon of task-based hand gestures was developed, along with the types of hand poses that were frequently used to express the four types of gestures (the Directional, Orientation, Manipulation, and Feedback Gestures). These gestures are shown in Tables 3.1 to 3.4 together with the top two most frequently observed hand poses for each gesture (unless only one common hand pose was observed).

The remainder of this section presents the percentage of participants who used each of the two most commonly observed hand poses for expressing each of the four types of identified gestures.

4.1.1 Directional Gestures

Based on the observations from Study 1, four Directional Gestures were identified: GD = {Up, Down, Left, Right} (Table 3.1). These Directional Gestures were most frequently expressed using the Finger-Pointing (FP) and Open-Hand (OH) configurations (see Figures 3.7a and 3.7b for exemplars of these hand configurations).

Figure 4.1 illustrates the percentage of participants who used either the FP or OH pose to express the Directional Gestures.

Figure 4.1: Most frequently observed hand poses for Directional Gestures (GD) for Study 1 (N = 17).

4.1.2 Orientation Gestures

Another category of gestures identified in Study 1 comprised the Orientation Gestures, GO = {< 45°, 90°, 180°} (Table 3.2). The < 45° gesture was frequently expressed with the Half Open-Hand (HOH) configuration. The 90° and 180° gestures were most frequently expressed using the Finger-Pointing (FP) and HOH hand configurations (see Figures 3.7b and 3.7c for exemplars of these hand configurations).

Figure 4.2 illustrates the percentage of participants who used the HOH pose to express the < 45° gesture and either the FP or HOH pose to express the 90° and 180° gestures.

Figure 4.2: Most frequently observed hand poses for Orientation Gestures (GO) for Study 1 (N = 17).

4.1.3 Manipulation Gestures

Five of the gestures observed in Study 1 were categorized as Manipulation Gestures, GM = {Install, Remove, PickUp, Place, Swap}.
Table 3.3 lists the most frequently observed hand poses for each of these gestures (see Figure 3.7 for exemplars of each of the identified hand configurations). Figure 4.3 shows the distribution of hand poses used to express the Manipulation Gestures.

Figure 4.3: Most frequently observed hand poses for Manipulation Gestures (GM) for Study 1 (N = 17).

4.1.4 Feedback Gestures

From the observations of human interactions in Study 1, two gestures were identified and categorized as Feedback Gestures, GF = {Confirm, Stop}. The Confirm gesture was most frequently expressed using a Thumbs-Up (TU) hand configuration (shown in Figure 3.7e), and the Stop gesture was most frequently expressed using the Open-Hand (OH) and Finger-Pointing (FP) hand configurations (shown in Figures 3.7a and 3.7b, respectively).

Figure 4.4 shows the distribution of hand poses used to express the Feedback Gestures.

Figure 4.4: Most frequently observed hand poses for Feedback Gestures (GF) for Study 1 (N = 17).

4.2 Study 2 and Study 3 Results: Analysing Recognition Rate and Recognition Confidence of Human and Robot Hand Gestures

Independent-samples t-tests were performed on the measures of Recognition Confidence (collected in Study 2) across the two commonly observed hand configurations for each human gesture. This analysis indicates whether participant confidence in recognizing the human gestures varied significantly across different hand configurations, as shown in Table 4.1.

For each robot gesture, a one-way ANOVA was applied to the measures of Recognition Confidence (from Study 3) across the three robot hand configurations: the two human-inspired hand configurations for that gesture and the Closed-Hand (CH) configuration (Table 4.2). Further, a Bonferroni post-hoc analysis was performed to determine whether participant confidence in recognizing robot gestures varied significantly across the three robot hand configurations. Exceptions to this analysis were gestures with only one commonly observed hand configuration; for those gestures, independent-samples t-tests were conducted across the observed human-inspired hand configuration and the CH configuration (Table 4.2).

The measures of Recognition Confidence and Recognition Rate from Study 2 and Study 3 are used together to evaluate how well the robot implementations of the same hand gestures and hand configurations/poses performed with respect to human-human gesture communication. (The results of participant confidence in recognizing human gestures compared to robot expressions of the same gestures are also provided in Appendix C.)

In addition, for participants who did not understand the intended meaning of a robot gesture, other common interpretations of the gestures were analysed to determine whether there were other unpredicted-but-accepted meanings of the gestures. In this work, a misinterpretation was deemed "common" if at least 15% of participants had the same misinterpretation of the gesture, or if the same misinterpretation was repeated across different gestures within a gesture category. See Figure 4.7 for exemplars of the common misinterpretations of Directional Gestures.

Comprehensive analyses and results are presented throughout the remainder of this chapter, followed by a discussion for each gesture type.
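Concretely, the comparisons reported in Tables 4.1 and 4.2 can be sketched as follows. The snippet is illustrative only and is not the exact software used for the thesis analyses: it assumes a long-format pandas DataFrame with hypothetical columns gesture, hand_pose, and confidence, checks the homogeneity-of-variances assumption with Levene's test, runs an independent-samples t-test when a gesture has two poses or a one-way ANOVA when it has three, and follows a significant ANOVA with Bonferroni-corrected pairwise comparisons (Welch-type tests can be substituted when variances are unequal, analogous to the Welch ANOVA and Games-Howell procedure used for the Right gesture). A small helper for the 95% margin of error shown as error bars on the Recognition Rate figures is included as well.

    from itertools import combinations

    import numpy as np
    import pandas as pd
    from scipy import stats

    def margin_of_error_95(correct: int, n: int) -> float:
        """Normal-approximation 95% margin of error for a recognition-rate proportion."""
        p = correct / n
        return 1.96 * np.sqrt(p * (1.0 - p) / n)

    def analyse_gesture(df: pd.DataFrame, gesture: str, alpha: float = 0.05):
        """Compare Recognition Confidence across hand poses for one gesture."""
        data = df[df["gesture"] == gesture]
        poses = list(data.groupby("hand_pose").groups)
        groups = [g["confidence"].to_numpy() for _, g in data.groupby("hand_pose")]

        # Check the homogeneity-of-variances assumption (when it fails, a Welch
        # ANOVA with a Games-Howell post hoc is the appropriate alternative).
        _, p_levene = stats.levene(*groups)
        print(f"{gesture}: Levene p = {p_levene:.3f}")

        if len(groups) == 2:                      # two poses: independent t-test
            t, p = stats.ttest_ind(*groups, equal_var=(p_levene > alpha))
            print(f"  t = {t:.2f}, p = {p:.3f}")
        else:                                     # three poses: one-way ANOVA
            f, p = stats.f_oneway(*groups)
            print(f"  F = {f:.2f}, p = {p:.3f}")
            if p < alpha:                         # Bonferroni-corrected pairwise tests
                n_pairs = len(groups) * (len(groups) - 1) // 2
                for (i, a), (j, b) in combinations(enumerate(groups), 2):
                    _, p_pair = stats.ttest_ind(a, b, equal_var=(p_levene > alpha))
                    print(f"  {poses[i]} vs {poses[j]}: p(adj) = {min(1.0, p_pair * n_pairs):.3f}")

For example, analyse_gesture(df, "Up") would reproduce the kind of comparison reported in Table 4.2 for the Up gesture, given the underlying response data, while margin_of_error_95(correct, n) gives the half-width of the interval plotted for a pose recognized correctly by correct of n respondents.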
Table 4.1: Measures of independent-samples t-tests on the Recognition Confidence from Study 2.

Directional Gestures, GD
Gesture, g ∈ GD    Hand Poses    t               p
Up                 FP & OH       t(86) = 2.34    < 0.05
Down               FP & OH       t(85) = 0.29    0.77
Left               FP & OH       t(51) = 1.01    0.32
Right              FP & OH       t(62) = 0.87    0.40

Orientation Gestures, GO
Gesture, g ∈ GO    Hand Poses    t               p
< 45°              HOH           N/A             N/A
90°                FP & HOH      t(89) = 1.88    0.06
180°               FP & HOH      t(82) = 3.36    < 0.01

Manipulative Gestures, GM
Gesture, g ∈ GM    Hand Poses    t               p
Install            FP & OH       t(75) = 2.15    < 0.05
Remove             OH & HOH      t(91) = 0.32    0.75
PickUp             OH            N/A             N/A
Place              FP            N/A             N/A
Swap               FP & VS       t(57) = -1.29   0.20

Feedback Gestures, GF
Gesture, g ∈ GF    Hand Poses    t               p
Confirm            TU            N/A             N/A
Stop               FP & OH       t(94) = -0.28   0.78

Significance levels: p < 0.1, p < 0.05, p < 0.01.

Table 4.2: Measures of one-way (or Welch) ANOVA or independent-samples t-test on the Recognition Confidence from Study 3.

Directional Gestures, GD
Gesture, g ∈ GD    Hand Poses      F or t                          p
Up                 FP, OH & CH     F(2, 40) = 4.50                 < 0.05
Down               FP, OH & CH     F(2, 36) = 3.65                 < 0.05
Left               FP, OH & CH     F(2, 31) = 0.53                 0.59
Right (a)          OH, FP & CH     Welch's F(2, 11.58) = 16.93     < 0.001

Orientation Gestures, GO
Gesture, g ∈ GO    Hand Poses      F or t                          p
< 45°              HOH & CH        t(40) = 1.49                    0.14
90°                FP, HOH & CH    F(2, 68) = 5.33                 < 0.01
180°               FP, HOH & CH    F(2, 72) = 5.14                 < 0.01

Manipulative Gestures, GM
Gesture, g ∈ GM    Hand Poses      F or t                          p
Install            FP, OH & CH     F(2, 41) = 5.40                 < 0.01
Remove             OH, HOH & CH    F(2, 56) = 2.77                 0.07
PickUp             OH & CH         t(46) = 1.99                    0.05
Place              FP & CH         t(72) = 0.39                    0.70
Swap               FP, VS & CH     F(2, 28) = 0.88                 0.42

Feedback Gestures, GF
Gesture, g ∈ GF    Hand Poses      F or t                          p
Confirm            TU              N/A                             N/A
Stop               OH & FP         t(60) = 0.24                    0.81

Significance levels: p < 0.1, p < 0.05, p < 0.01.
(a) In Study 3, the Recognition Confidence of the Right gesture fails the assumption of homogeneity of variances; therefore, a Welch ANOVA (rather than a one-way ANOVA) was performed.

4.2.1 Directional Gestures

The combined results of the human-human and human-robot gesture recognition studies (Study 2 and Study 3, respectively) for Directional Gestures (GD = {Up, Down, Left, Right}) are shown in Figures 4.5 and 4.6. Figure 4.5 illustrates the percentage of participants who correctly identified each Directional Gesture (Recognition Rate). Figure 4.6 highlights the mean rating of confidence of interpretation for each Directional Gesture (Recognition Confidence). Figure 4.7 depicts the rates of common misinterpretations of each Directional Gesture from Study 3.

4.2.1.1 Directional Gesture: Up

In Study 2, when humans expressed the Up gesture, both the FP and OH configurations were recognized accurately (82% and 96%, respectively). In Study 3, when the robot expressed the Up gesture, people recognized the gesture better in the OH configuration (72%) than in the FP (46%) or CH (40%) configurations (Figure 4.5).

Participant Recognition Confidence of the Up gesture was significantly affected by the human hand configuration (t(86) = 2.34, p < 0.05) (Table 4.1). Participants felt more confident in understanding the gesture when expressed with the OH configuration than the FP configuration (Figure 4.6). Recognition Confidence was also significantly affected by robot hand configuration (F(2, 40) = 4.50, p < 0.05) (Table 4.2). Participants recognized the robot's gesture significantly more confidently in the OH configuration than in the FP configuration (p < 0.05).
Statistical tests did not reveal a significant difference between the FP and CH configurations (p = 1.00) or the OH and CH configurations (p = 0.18) (Figure 4.6).

4.2.1.2 Directional Gesture: Down

When humans expressed the Down gesture, both the FP and OH configurations were recognized accurately (86% and 90%, respectively). When the robot expressed the Down gesture, people recognized the gesture better in the OH configuration (61%) than in the FP (46%) or CH (29%) configurations.

The results did not indicate any significant difference in the Recognition Confidence of the Down gesture expressed by a human using either the FP or OH configuration (t(85) = 0.29, p = 0.77); however, the results did reveal that Recognition Confidence was significantly affected by the robot hand configuration (F(2, 36) = 3.65, p < 0.05). Participants recognized the robot's gesture significantly more confidently in the OH configuration than in the CH configuration (p < 0.05). The results did not indicate a statistically significant difference between the FP and OH configurations (p = 0.74) or the FP and CH configurations (p = 0.41).

4.2.1.3 Directional Gesture: Left

Only 50% and 57% of the participants recognized the human Left gesture in the FP and OH configurations, respectively. When the robot expressed the Left gesture, people recognized the gesture better in the OH configuration (56%) than in the FP (43%) or CH (26%) configurations.

Human hand configuration had no statistically significant effect on participant Recognition Confidence of the Left gesture (t(51) = 1.01, p = 0.32). Likewise, the results did not reveal a statistically significant effect of the robot hand configuration (FP, OH, or CH) on the Recognition Confidence of the gesture expressed by the robot hand (F(2, 31) = 0.53, p = 0.59).

4.2.1.4 Directional Gesture: Right

When humans expressed the Right gesture, both the FP and OH configurations had moderate recognition rates (60% and 69%, respectively). When the robot expressed the Right gesture, people recognized the gesture better in the FP configuration (55%) than in the OH (36%) or CH (43%) configurations.

The Right gesture expressed by a human hand using either the FP or OH configuration did not show a significant difference in participant Recognition Confidence (t(62) = 0.87, p = 0.39). Recognition Confidence was significantly affected by the robot hand configuration (Welch's F(2, 11.58) = 16.93, p < 0.001). The Games-Howell post-hoc test revealed that participants recognized the robot gesture more confidently in the FP configuration than in the CH configuration (p < 0.001). Tests did not reveal a statistically significant difference between the FP and OH configurations (p = 0.12) or the OH and CH configurations (p = 0.82).

Figure 4.5: Human Recognition Rates for Directional Gestures (GD) for both Study 2 (Human) and Study 3 (Robot). The error bars indicate the margin of error for a 95% confidence interval.

Figure 4.6: Human Recognition Confidence for Directional Gestures (GD) for both Study 2 (Human) and Study 3 (Robot). The Right gesture failed the assumption of homogeneity of variances; therefore, the Games-Howell post-hoc test, instead of the Bonferroni post-hoc test, was performed for this gesture.

Figure 4.7: Common misinterpretations of Robot Directional Gestures, GD.

4.2.1.5 Discussion

The best and most confidently recognized human hand pose for Directional Gestures was the OH pose.
Similarly, the OH pose also often corresponded to the best and most confidently recognized robot hand pose. Comparing the approach of this work with [20], these results suggest that articulated fingers are not necessary for directional gestures, adding support to [15] in that referential gestures can be well recognized by non-anthropomorphic robotic hands. Furthermore, it appears that fingers might not be needed at all, as there was often no statistically significant difference between the OH and CH poses: a closed hand was just as effective as an open hand at communicating directionality. The exception to these results was the Right gesture (Figure 4.6), for which an alternative robot hand pose (FP) outperformed the OH configuration. This is believed to be because pointing gestures often anchor one referent (e.g., the car part) to another referent (e.g., the car door) [37], so the relative angle of the camera biased the perception of the gesture to relate these two referents in a rightward direction (i.e., from the car part, the only referent on the left, to the car door, the only referent on the right).

As shown in Figure 4.7, many participants misinterpreted the intended direction of robot Directional Gestures; this confusion could be because the robot arm moved at a much slower speed than a human arm when repeating the motion for the gesture three times (to be consistent with observations from Study 1). Also, Directional Gestures were commonly misinterpreted as an Install gesture, which could again be due to the relative angle of the camera showing the robot arm closer to the car door than it actually was. Additionally, more than half of the participants misinterpreted the Down gesture expressed with the CH pose as a PickUp gesture (Figure 4.7); this could be because there were other parts on the table next to the car door, and the motion of the gesture might have anchored participant perceptions to objects in the direction of motion (i.e., downward), resulting in a misinterpretation of the gesture as picking up those parts.

Based on these findings, the relationship between participant viewing angle (perspective) of the robot gesture and participant recognition rates for Directional Gestures deserves exploration in future work [28].

4.2.2 Orientation Gestures

Results of the Recognition Rate and Recognition Confidence analyses from both Studies 2 and 3 for Orientation Gestures (GO = {< 45°, 90°, 180°}) are shown in Figures 4.8 and 4.9, respectively. No common misinterpretations were observed for Orientation Gestures.

4.2.2.1 Orientation Gesture: < 45°

In Study 2, when humans expressed the < 45° gesture, the HOH configuration was recognized accurately (96%). When the robot expressed the < 45° gesture, people recognized the gesture better in the CH configuration (79%) than in the HOH (73%) configuration (Figure 4.8).

Robot hand configuration had no statistically significant effect on participant Recognition Confidence of the < 45° gesture (t(40) = 1.49, p = 0.14) (Figure 4.9 and Table 4.2).

4.2.2.2 Orientation Gesture: 90°

When humans expressed the 90° gesture, both the FP and HOH configurations were recognized accurately (85% and 98%, respectively).
When the robot expressed the 90° gesture, people recognized the gesture better in the HOH and CH configurations (87% and 85%, respectively) than in the FP configuration (79%).

Test results indicated a trend that the HOH configuration was more confidently recognized than the FP configuration for the 90° gesture expressed by a human, but this trend was only marginally significant (t(89) = 1.88, p = 0.06) (Table 4.1). However, test results indicated that the robot hand configuration had a statistically significant effect on how confidently participants recognized the gesture (F(2, 68) = 5.33, p < 0.01) (Table 4.2). People recognized the gesture significantly more confidently in the HOH configuration than in the FP configuration (p < 0.01). The results did not reveal a statistically significant difference between the FP and CH configurations (p = 0.30), or the HOH and CH configurations (p = 0.41).

Figure 4.8: Human Recognition Rates for Orientation Gestures (GO) for both Study 2 (Human) and Study 3 (Robot). The error bars indicate the margin of error for a 95% confidence interval.

Figure 4.9: Human Recognition Confidence for Orientation Gestures (GO) for both Study 2 (Human) and Study 3 (Robot).

4.2.2.3 Orientation Gesture: 180°

When humans expressed the 180° gesture, the HOH configuration was recognized accurately (92%); however, the FP configuration had a relatively lower recognition rate (78%). When the robot expressed the 180° gesture, all of the FP, HOH, and CH configurations were recognized accurately (82%, 87%, and 85%, respectively).

Participant Recognition Confidence of the 180° gesture was significantly affected by the human hand configuration (t(83) = 3.36, p < 0.01). Participants recognized the gesture significantly more confidently when it was expressed with the HOH configuration than with the FP configuration. Recognition Confidence was also significantly affected by the robot hand configuration (F(2, 72) = 5.14, p < 0.01). Participants recognized the robot gesture significantly more confidently in the HOH configuration than in the FP configuration (p < 0.01). The results did not reveal a statistically significant difference between the HOH and CH configurations (p = 0.21), or the FP and CH configurations (p = 0.56).

4.2.2.4 Discussion

Haddadi et al. [20] found that only 25% of people understood Orientation Gestures when expressed by a robotic manipulator with an un-actuated stuffed glove at the robot end-effector. In contrast, Orientation Gestures were found to be recognized very accurately in both the human (Study 2) and robot (Study 3) studies of this work (Figure 4.8). Haddadi et al. [20] utilized an Open-Hand pose for communicating orientation information; however, such hand poses were not observed in the human-human data collection (Section 3.1), so it is suspected that an Open-Hand pose might not be a natural configuration for this gesture. As shown in Figure 4.8, the robot Closed-Hand (CH) pose was also recognized well, in some cases better than the human-inspired hand poses (i.e., Finger-Pointing and Half Open-Hand).

In both the human and robot studies, the Recognition Rates of the 90° and 180° gestures were consistent with the Recognition Confidence of the associated gestures, and both gestures were best and most confidently recognized with the Half Open-Hand (HOH) pose (Figures 4.8 and 4.9).
Conversely, the robot < 45° gesture was best recognized with the Closed-Hand (CH) configuration (Figure 4.8); however, the participants who recognized the gesture correctly did so more confidently (though not significantly so) with the HOH configuration than with the CH configuration (Figure 4.9). It is suspected that the lower recognition of the HOH pose for the < 45° gesture is related to the angle of rotation, with smaller rotations having a lower recognition rate. For the < 45° gesture, participants appeared to be confused about the intention of the robot and sometimes did not notice the rotation of the hand at all. This could be because the Barrett Hand lacks an opposable thumb (discussed in Section 2.3), which could otherwise serve as a visual anchor or reference point for an observer [19, 2, 1]. Thus, common non-anthropomorphic robot manipulators, such as Baxter's 1D gripper [16] and KUKA's two-finger gripper [4], are expected to be effective in communicating orientation information, though human observers might not feel as confident and comfortable in their assessments of the gesture's meaning; in short, for gestures indicating small changes in orientation, human coworkers will have to "trust their gut".

4.2.3 Manipulation Gestures

The measures of Recognition Rate and Recognition Confidence from both Study 2 and Study 3 for Manipulation Gestures (GM = {Install, Remove, PickUp, Place, Swap}) are shown in Figures 4.10 and 4.11, respectively. Figure 4.12 displays the rates of common misinterpretations of each Manipulation Gesture.

4.2.3.1 Manipulation Gesture: Install

In Study 2, when humans expressed the Install gesture, the Open-Hand (OH) configuration had a perfect recognition rate (100%); however, the Finger-Pointing (FP) configuration had a low recognition rate (57%). When the robot expressed the Install gesture, people recognized the gesture better in the CH configuration (68%) than in the OH (52%) or FP (35%) configurations (Figure 4.10).

Recognition Confidence of the Install gesture was significantly affected by the human hand configuration (t(75) = 2.15, p < 0.05) (Table 4.1). Participants felt more confident recognizing the gesture when it was expressed with the OH configuration than with the FP configuration (Figure 4.11). Similarly, robot hand configuration significantly affected participant Recognition Confidence of the gesture (F(2, 41) = 5.40, p < 0.01) (Table 4.2). Participants recognized the robot's gesture significantly more confidently in the OH configuration than in the CH configuration (p < 0.01). No significant difference was observed between the FP and OH configurations (p = 0.46), or the FP and CH configurations (p = 0.65) (Figure 4.11).

Figure 4.10: Human Recognition Rates for Manipulation Gestures (GM) for both Study 2 (Human) and Study 3 (Robot). The error bars indicate the margin of error for a 95% confidence interval.

Figure 4.11: Human Recognition Confidence for Manipulation Gestures (GM) for both Study 2 (Human) and Study 3 (Robot).

Figure 4.12: Common misinterpretations of Robot Manipulation Gestures, GM.

4.2.3.2 Manipulation Gesture: Remove

When humans expressed the Remove gesture, both the OH and HOH configurations were recognized accurately (90% and 96%, respectively).
When the robot expressed the gesture, people recognized the gesture slightly better in the CH configuration (68%) than in the OH (61%) or HOH (65%) configurations.

The human hand configurations (OH and HOH) had no statistically significant effect on the Recognition Confidence of the Remove gesture (t(91) = 0.32, p = 0.75). Likewise, the results did not reveal a significant effect of the robot hand configuration (OH, HOH, or CH) on the Recognition Confidence of the gesture (F(2, 56) = 2.77, p = 0.07). The results did not reveal a significant difference between the OH and HOH configurations (p = 1.00) or the OH and CH configurations (p = 0.30); however, there was a trend that the gesture was more confidently recognized when expressed with the CH configuration than with the HOH configuration (p = 0.09).

4.2.3.3 Manipulation Gesture: PickUp

When humans expressed the PickUp gesture, the OH configuration was recognized accurately (90%). When the robot expressed the PickUp gesture, the OH configuration was recognized more accurately than the CH configuration (68% and 50%, respectively).

The results showed a trend that the OH configuration was more confidently recognized than the CH configuration when the gesture was expressed by the robot, but this trend was only marginally significant (t(46) = 1.99, p = 0.05).

4.2.3.4 Manipulation Gesture: Place

When humans expressed the Place gesture, the FP configuration was recognized accurately (95%). When the robot expressed the gesture, both the FP and CH configurations were recognized accurately (89% and 86%, respectively).

The Place gesture expressed by a robot hand using either the FP or CH configuration did not show a statistically significant difference in the Recognition Confidence measure of the gesture (t(72) = 0.39, p = 0.70).

4.2.3.5 Manipulation Gesture: Swap

When the Swap gesture was expressed by a human, only 57% and 61% of the participants recognized the gesture in the FP and V-Sign (VS) configurations, respectively. Similarly, when the robot expressed the gesture, only 54%, 16%, and 43% of the participants recognized the gesture in the FP, VS, and CH configurations, respectively.

The human hand configurations (FP and VS) had no statistically significant effect on participant Recognition Confidence of the Swap gesture (t(57) = -1.29, p = 0.20). Likewise, the results did not reveal a statistically significant effect of the robot hand configuration (FP, VS, or CH) on the Recognition Confidence of the gesture expressed by the robot hand (F(2, 28) = 0.88, p = 0.42).

4.2.3.6 Discussion

Most of the human hand gestures were accurately recognized by the participants for all of the selected hand poses; the exceptions to this finding were the Install gesture and the Swap gesture (Figure 4.10). The Install gesture was perfectly recognized (i.e., the Recognition Rate was 100%) when expressed with an Open-Hand (OH) configuration; however, it had a much lower recognition rate (57%) when expressed with a Finger-Pointing (FP) configuration. Some participants misinterpreted the Finger-Pointing as "poking or pressing on the part" being installed on the car door (Figure 4.12).
Overall, the Swap gesture had one of the lowest recognition rates of all human gestures when using either the FP or V-Sign (VS) hand configuration, with Recognition Rates of 51% and 61%, respectively (Figure 4.10). In Study 1, some participants indicated a preference for using two hands to execute a Swap gesture, stating that a one-handed Swap gesture was not as intuitive to them, which could explain the lower Recognition Rates of this one-handed Swap gesture.

As shown in Figure 4.10, no correlation was identified between the Recognition Rates of the human hand configurations and the imitated robot hand configurations. For example, the Swap gesture was best recognized in the VS hand configuration for the human case, whereas it was best recognized in the FP hand configuration for the robot case. This could have been partially due to the mechanical limitations of the robot hand (e.g., the VS pose did not look intuitive on the robot hand). In addition, two of the robot hand gestures, Install and Remove, were better recognized when expressed with an unposed, Closed-Hand (CH) configuration rather than a posed hand configuration, rejecting the assumption that human-inspired hand poses always outperform the unposed robot hand. As in the human case, approximately 16% of participants misinterpreted the robot Install gesture with the FP hand pose as "pressing on the part" (many thought the part was a button) (Figure 4.12). Furthermore, as shown in Figure 4.12, approximately 23% of participants misinterpreted the Install gesture with the OH pose as the robot indicating to "stop". For the Remove gesture, the Half Open-Hand (HOH) pose was commonly misinterpreted as "rotating the part", and the OH pose was again commonly misinterpreted as "pressing on the part/button". Collectively, these results add support to, and elaborate upon, related work on differences between human anthropomorphic and robot non-anthropomorphic nonverbal communication [15, 18].

Haddadi et al. [20] found that people had difficulty recognizing many of the manipulation gestures investigated in Studies 1–3. In [20], the stuffed glove at the end of the manipulator consistently presented an Open-Hand pose, which was not necessarily the best hand pose for expressing many of the manipulation gestures, as identified by [20] and supported by the results of this work (Figures 4.10 and 4.11).

Referring to Figures 4.10 and 4.11, better recognized hand poses also had higher Recognition Confidence for both human and robot gestures, with the exception of the robot Install gesture. Note that while both human and robot expressions of the Swap gesture had low Recognition Rates, those participants who recognized the gesture correctly also recognized it confidently.

4.2.4 Feedback Gestures

Observations of human interactions in Study 1 yielded two gestures that were identified and categorized as Feedback Gestures, GF = {Confirm, Stop}. Feedback Gestures differed from the other identified gesture categories in that they were symbolic gestures (discussed in Section 2.1) used to reinforce or interrupt the human coworker's movement rather than to direct the movement of a part. For this category, only the human-inspired hand configurations identified in Study 1 were implemented on the robot; the Closed-Hand (CH) configuration was not used as a baseline, as this work was primarily interested in investigating whether symbolic gestures could still deliver a clear communicative message when implemented on a non-anthropomorphic robotic hand.
The Confirm gesture was most frequently expressed using a Thumbs-Up (TU) hand configuration (shown in Figure 3.7e), and the Stop gesture was most frequently expressed using the Open-Hand (OH) and Finger-Pointing (FP) hand configurations (shown in Figures 3.7a and 3.7b, respectively).

The combined results of Study 2 and Study 3 for Feedback Gestures are shown in Figures 4.13 and 4.14; Figure 4.13 illustrates the measures of Recognition Rate, and Figure 4.14 illustrates the measures of Recognition Confidence for Feedback Gestures.

4.2.4.1 Feedback Gesture: Confirm

In Study 2, when humans expressed the Confirm gesture, the Thumbs-Up (TU) hand configuration was very accurately (98%) and confidently recognized; however, when the robot expressed the Confirm gesture, only 25% of participants recognized the gesture, and they did so with a below-average Recognition Confidence (see Figure 4.13 for the Recognition Rate and Figure 4.14 for the Recognition Confidence measures of this gesture).

Figure 4.13: Human Recognition Rates for Feedback Gestures (GF) for both Study 2 (Human) and Study 3 (Robot). The error bars indicate the margin of error for a 95% confidence interval.

Figure 4.14: Human Recognition Confidence for Feedback Gestures (GF) for both Study 2 (Human) and Study 3 (Robot).

4.2.4.2 Feedback Gesture: Stop

When humans expressed the Stop gesture, both the Finger-Pointing (FP) and Open-Hand (OH) configurations were recognized accurately (94% and 96%, respectively). Similarly, when the robot expressed the Stop gesture, both the FP and OH configurations had high Recognition Rates (75% and 72%, respectively), though these rates were lower than those of the human gestures (Figure 4.13).

For both the human-human and human-robot cases, hand configuration had no statistically significant effect on participant Recognition Confidence of the Stop gesture (t(94) = -0.28, p = 0.78 and t(60) = 0.24, p = 0.81, respectively) (Figure 4.14).

4.2.4.3 Discussion

The selected feedback gestures, Confirm and Stop, are symbolic gestures [1] and, as such, are strongly influenced by contextual factors [5, 26]. Discussed below are the results of human observations of these feedback gestures and the contextual factors that might contribute to perceptual differences.

The robot's Confirm gesture performed very poorly, which is suspected to be due to the mechanical limitations of the robot hand: the hand has no thumb, and the TU pose looked more like the robot was displaying an inappropriate "middle finger" (as repeatedly and humorously noted by participants) [41]. As discussed in Section 2.3, related non-anthropomorphic robot manipulators, such as Baxter's 1D gripper [16] and KUKA's two-finger gripper [4], are expected to be interpreted in a similar manner. It is therefore recommended that robotic manipulators used in collaborative environments have a level of anthropomorphism such that there is a clear opposable thumb, as the confirmatory information conveyed through a TU pose was shown to be one of the most important gestures for such a robot to communicate.

Recognition Confidence of the Stop gesture was consistent with the Recognition Rate of the gesture for both human and robot gestures (Figure 4.14).
These results add further evidence to the related work of [15], which reported that terminating gestures, such as "Stop" or "No", are well recognized on both anthropomorphic and non-anthropomorphic robotic hands. Overall, the findings of this work suggest that even a mechanically limited robotic hand can still express certain symbolic gestures [1, 5, 26], such as the Stop gesture [15]; however, a Thumbs-Up (TU) hand configuration should be displayed with caution, or not at all, by a hand without an opposable thumb, as it can be perceived as inappropriate [41].

4.3 Summary

An objective of this work was to investigate a collection of intuitive and human-recognizable hand gestures and accompanying hand poses that can be implemented on industrial, non-anthropomorphic robotic hands to nonverbally communicate with co-located human coworkers. The results indicated that most of the human gestures were well recognized (Recognition Rate greater than 90%) by the participants with at least one of the two selected hand poses observed in the human-human nonverbal scenario discussed in Study 1. Similarly, most of the robot gestures were relatively well recognized (Recognition Rate greater than 60%) by the participants with at least one of the three robot hand poses (the two human-inspired hand poses and the Closed-Hand (CH) pose).

However, the following gestures were exceptions to these results, yielding lower Recognition Rates:

1. both human and robot expressions of the Left and Right Directional Gestures, possibly due to the relative angle of the camera with which the gestures were recorded, as well as the relative locations of the robot, the car part, and the car door in the workspace;

2. both human and robot expressions of the Swap Manipulation Gesture (Figure 4.10), possibly because Swap is a complex gesture that participants indicated would be better performed with two hands, unlike the other identified gestures; and

3. robot expressions of the Confirm Feedback Gesture, which was likely due to the mechanical limitations of the robot hand and its lack of a thumb (Figure 4.13).

In the human-human study (Study 2, Section 3.2), participant Recognition Rates of human hand gestures were consistent with participant Recognition Confidence of the gestures. In the human-robot study (Study 3, Section 3.3), the best and most confidently recognized human hand poses typically corresponded to the best and most confidently recognized robot hand poses for Directional, Orientation, and Feedback Gestures, with the exceptions of the Right Directional Gesture and the < 45° Orientation Gesture (though the differences were not statistically significant). For Manipulation Gestures, robot hand poses imitated from human hand poses were not always better recognized than the non-posed (i.e., Closed-Hand (CH)) configuration; for example, the robot Remove gesture had the highest Recognition Rate and Recognition Confidence when expressed with the CH pose.
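To make the cross-study comparison easier to reuse, the best-recognized robot hand pose for each gesture, as reported in Section 4.2, can be collected into a small lookup structure. The sketch below is only a convenience recap of those results; the dictionary layout and identifier names are illustrative assumptions rather than an interface defined in this work.

    # Best-recognized robot hand pose per gesture, with the Study 3 Recognition
    # Rates reported in Section 4.2. Poses: OH = Open-Hand, HOH = Half Open-Hand,
    # FP = Finger-Pointing, CH = Closed-Hand, TU = Thumbs-Up.
    BEST_ROBOT_POSE = {
        # Directional Gestures, GD
        "Up": ("OH", 0.72), "Down": ("OH", 0.61),
        "Left": ("OH", 0.56), "Right": ("FP", 0.55),
        # Orientation Gestures, GO
        "<45deg": ("CH", 0.79), "90deg": ("HOH", 0.87), "180deg": ("HOH", 0.87),
        # Manipulation Gestures, GM
        "Install": ("CH", 0.68), "Remove": ("CH", 0.68), "PickUp": ("OH", 0.68),
        "Place": ("FP", 0.89), "Swap": ("FP", 0.54),
        # Feedback Gestures, GF (the TU Confirm pose was poorly recognized;
        # see Section 4.2.4)
        "Confirm": ("TU", 0.25), "Stop": ("FP", 0.75),
    }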
Together, these studies provide insights into how humans produce and perceive nonverbal communication when interacting with other human co-workers in assembly tasks, inform how robots should communicate with human co-workers in the same settings, and suggest how human co-workers might interpret these nonverbal signals from their robot counterparts.

The next chapter expands upon these results to provide a set of guidelines for the mechanical design of robotic hands.

Chapter 5
Guidelines for the Design of Expressive Robotic Hands

The results of this work, combined with experimenter observations in Studies 1–3, yielded insights and guidelines for the design of individual regions and features of a robotic hand. These region and feature considerations are illustrated in Figure 5.1 and described below in Section 5.1. The application of these principles to the design of real robot hands is presented in Section 5.2. Further steps to formalize these guidelines are proposed in Section 5.3.

Figure 5.1: Significant regions for consideration when designing a robotic hand, including the wrist, palm, fingers, pointer finger, and thumb.

5.1 Regions and Features of a Robotic Hand

The analysis of Study 1 suggested a specific set of common hand gestures that were represented by one or more hand configurations/poses (Section 4.1). The implementation and subsequent analysis of these gestures on a robotic hand provided insights on particular regions of the hand that impact human perceptions of the intended communicative meaning (Section 4.2). Definitions and considerations for each of the regions shown in Figure 5.1 are discussed in the following subsections.

5.1.1 Wrist

The wrist is the anchor point for the robot hand and connects to the base of the palm (Figure 5.1). The side of the hand opposite the wrist (i.e., the palm, extended fingers, or the pointer finger) enables the robot to produce hand gestures in the set of Directional Gestures, GD [37]. Furthermore, the results of this thesis indicate that there is benefit in the wrist providing or permitting some form of tilt movement (side-to-side) and/or twist movement (rotation about the forearm axis), as all of the selected gestures (with the exception of the Confirm gesture) involved some sort of movement of the wrist or forearm. Tilting of the wrist allows the robot to better produce hand gestures in the set of GD, as well as the Stop gesture (in the set of Feedback Gestures, GF; Figures 3.5b and 3.5c). Twisting of the wrist allows the robot to better produce hand gestures in the set of Orientation Gestures, GO.

5.1.2 Palm

The palm serves as the main region from which the other regions of the hand extend (Figure 5.1). The palm of a robotic hand should take one of two forms: planar or volumetric. A planar palm has two clear "sides" (the "back of the hand" and the "front of the hand"), which allow it to produce hand gestures with Open-Hand poses (e.g., Figure 3.7a); an example of a simple planar palm design might be a semi-circular disk like a ping-pong paddle. A volumetric palm has no clear directionality (similar to a balled-up fist), which allows it to produce hand gestures with Closed-Hand poses (e.g., Figure 3.8); an example of a simple volumetric palm design might be a sphere or cube.

5.1.3 Fingers

The fingers (pointer, middle, ring, and/or pinky) extend out from the palm on the side opposite the wrist (Figure 5.1). These fingers can be spread apart or closed (touching each other), and can be unarticulated or articulated at the proximal knuckle point.
While fingers that are spread apart might allow the robot to produce a V-Sign hand pose (Figure 3.7d), the results in Section 4.2.3.5 suggest that the V-Sign pose might not be as effective as other hand poses in human-robot communication; thus, the design choice of fingers spread apart versus touching is not essential for the gestures identified in this study. Based on the study results, unarticulated fingers are suggested to be posed in one of three ways: Open-Hand, Half Open-Hand, or Closed-Hand. Articulated fingers can permit transitional hand poses anywhere in the range between Open-Hand and Closed-Hand poses, and can be articulated either separately (decoupled) or together (coupled); however, based on insights from Study 1 (Section 4.1), the only recommended decoupling is the pointer finger, discussed below. For both unarticulated and articulated fingers, the selected hand pose dictates the effectiveness of hand gestures as perceived by the human observer. Because the fingers might naturally add "sides" to the palm, they will override any perceptions yielded by the palm alone; furthermore, the number and layout of the fingers might be aesthetically linked to the size and shape of the palm, so at least three fingers are suggested to establish clear "sides" of the palm.

5.1.4 Pointer Finger

An extended pointer finger separated from the other fingers in a Closed-Hand pose (either unarticulated or articulated) allows the robot to produce Finger-Pointing poses (Figure 3.7b) to communicate very specialized hand gestures (Figure 5.1). For example, as illustrated in Study 1, the Place gesture (in the set of Manipulation Gestures, GM; Figure 3.4f) was only expressed using the Finger-Pointing pose in human-human interactions (Section 4.1).

5.1.5 Thumb

The thumb often assists in one of two purposes: physical manipulation or social expressiveness; each of these purposes often dictates the location of the thumb on the robot hand (Figure 5.1). For effective physical manipulation, the thumb on a robot hand is often located near the wrist and on the "front" of the palm; this allows the robotic manipulator to apply forces from opposing directions between the fingers and thumb (e.g., for grasping an object). For effective social expressiveness, the results of this work suggest that the thumb should be near the wrist and to the side of the palm (i.e., as with a Thumbs-Up hand pose); this allows the robot to produce unique symbolic hand gestures, such as the Confirm gesture (in the set of Feedback Gestures, GF; Figure 3.5a), and adds a reference point to hand poses that improves the recognition of Orientation Gestures (GO), especially for small orientation changes (i.e., the < 45° gesture). Thus, to support both physical and social purposes, it is recommended that the thumb be able to roll from below the palm to the side of the palm.
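Before these guidelines are applied to specific hands in Section 5.2, the regional considerations above can be summarized as a simple checklist. The sketch below is a hypothetical encoding rather than a formal metric from this thesis: it represents a hand as a set of capability flags and reports, following the reasoning in Sections 5.1.1 through 5.1.5, which kinds of gestures the hand is likely to support.

    from dataclasses import dataclass

    @dataclass
    class HandDesign:
        """Capability flags mirroring the regions discussed in Section 5.1."""
        wrist_tilt: bool         # side-to-side wrist motion (or provided by the arm)
        wrist_twist: bool        # rotation about the forearm axis
        planar_palm: bool        # palm with clear "front" and "back" sides
        num_fingers: int
        decoupled_pointer: bool  # pointer finger can be posed separately
        thumb_side_roll: bool    # thumb can roll to the side of the palm

    def expressiveness_checklist(hand: HandDesign) -> dict:
        """Rough, qualitative predictions following the reasoning in Section 5.1."""
        return {
            "Directional Gestures (GD)":     hand.wrist_tilt,                            # Section 5.1.1
            "Orientation Gestures (GO)":     hand.wrist_twist,                           # Section 5.1.1
            "Open-/Closed-Hand poses":       hand.planar_palm or hand.num_fingers >= 3,  # Sections 5.1.2-5.1.3
            "Finger-Pointing poses (Place)": hand.decoupled_pointer,                     # Section 5.1.4
            "Thumbs-Up pose (Confirm)":      hand.thumb_side_roll,                       # Section 5.1.5
        }

    # Illustrative values loosely based on the Barrett Hand discussion in
    # Section 5.2.1, assuming the hand is mounted on an arm that provides
    # the tilt and twist motions.
    barrett_like = HandDesign(wrist_tilt=True, wrist_twist=True, planar_palm=True,
                              num_fingers=3, decoupled_pointer=True,
                              thumb_side_roll=False)
    print(expressiveness_checklist(barrett_like))

Such a checklist is only a first approximation of the evaluate, predict, and improve procedure that follows; the formal studies proposed in Section 5.3 would be needed to validate any of its predictions.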
5.2 Applications of Design Guidelines

This section reviews robotic hands that were used by, or are related to, this thesis work, and applies the design principles described above to evaluate (for the Barrett Hand; Section 5.2.1), predict (for the Seed Robotics RH4D Ares Hand; Section 5.2.2), and improve (for the Seed Robotics RH7D Eros Hand; Section 5.2.3) the social expressiveness of robot hands.

5.2.1 Barrett Hand

The implementation challenges and subsequent Study 3 experimental results using the Barrett Hand (Figure 5.2) revealed the foundational insights for the design guidelines proposed above. The hand can be evaluated in terms of the regions and features proposed above, which dictate its social expressiveness.

The wrist of the Barrett Hand is fixed; however, it is mounted to a forearm that can both tilt and twist at an elbow in the arm, enabling the fundamental movements for the production of Directional Gestures (Section 4.2.1) and Orientation Gestures (Section 4.2.2), respectively. The perception of a palm is formed by the mechanism coupling the wrist to the fingers, which forms two clear sides of the hand, classifying it as a planar palm. While the fingers dictate the effectiveness of Open-Hand versus Closed-Hand poses, this planar palm is effective at communicating Orientation Gestures, as evidenced by the positive results of Study 3 (Section 4.2.2). The fingers are separately articulated, enabling the robot to effectively produce hand configurations in the range between Open-Hand and Closed-Hand poses. In addition, the articulated pointer finger enables the Barrett Hand to produce specialized hand gestures, such as the Place gesture (Figure 3.4f). However, the lack of a thumb makes it impossible for the Barrett Hand to approximate symbolic hand gestures such as the Confirm gesture (Figure 3.5a) with the Thumbs-Up pose, which is crucial for affirmative communication.

Because the Barrett Hand is similar to, though not actually used as, the morphologies of robotic hands common in industry (see Section 2.3), it follows that such robot hands might not be sufficient for supporting fluent human-robot gestural interaction. More anthropomorphic robotic hands that closely resemble human anatomy might produce better approximations of human gestures; however, such hands are typically much more expensive and less effective in industrial applications. The next section describes an example of an inexpensive and robust robotic hand designed to address future industrial needs with respect to social expressiveness.

Figure 5.2: The Barrett Hand.

5.2.2 Seed Robotics RH4D Ares Hand

Seed Robotics (http://www.seedrobotics.com) designs and develops tendon-based robotic hands for advanced manipulation. Their designs are highly customizable using 3D-printed modular components, and the developed products tend to be significantly less expensive than other robotic manipulators on the market. Currently, their robot hands are too small for manipulation in industrial settings; however, the flexibility of the design is appealing for the purposes of this thesis, as the design can be adapted for social expressiveness. This section discusses their base-model hand, the RH4D Ares (Figure 5.3), and predicts how well people might interpret gestures produced by it. The next section presents a redesign of the hand informed by this thesis work.

The Ares robotic hand features four actuated degrees of freedom: two in the wrist, one in the coupled fingers, and one in an opposable thumb. The wrist enables both twist rotation (for Orientation Gestures) and forward/backward rotation; however, the forward/backward wrist rotation does not add much social expressiveness to the hand, as none of the gestures identified in Study 1 (Section 4.1) or implemented in Study 3 (Section 3.3) warranted this movement. Thus, as with the Barrett Hand (Section 5.2.1), the Ares hand must be mounted to a higher-DOF arm to support the range of motion necessary for Directional Gestures. The palm is formed by the space between the two fingers and the thumb.
The two fingers are visibly separate but actuated together (i.e., coupled); while this configuration can produce Closed-Hand poses and the less important V-Sign pose (Figure 3.7d), the Ares is unable to produce the Finger-Pointing pose (Figure 3.7b), which is beneficial for some Directional Gestures and the Place Manipulation Gesture. The articulated opposable thumb is ideally located for grasping objects; however, its range of motion is limited to supporting only Half Open-Hand or Closed-Hand poses, and its positioning suggests that any display of a Thumbs-Up pose would be deemed inappropriate. In summary, the Ares hand is effective for physical manipulation of small objects, but its social expressiveness is limited to gestures expressed using Half Open-Hand poses (e.g., Orientation Gestures) and Closed-Hand poses (e.g., some Manipulation Gestures).

Based on the above predictions, the Ares hand does not fully address the needs identified in this thesis for gestural communication in HRI. These predictions serve as hypotheses about how people might perceive hand gestures with the Ares hand, which could be formally tested in future work. For now, these predictions alone were enough for Seed Robotics, who subsequently worked with the thesis author to change the design of the Ares hand to better support social expressiveness (described in the next section).

Figure 5.3: The Seed Robotics RH4D Ares hand. (http://www.seedrobotics.com/rh4d-ares-hand.html)

5.2.3 Seed Robotics RH7D Eros Hand

To enhance the social expressiveness of the RH4D Ares hand (Section 5.2.2), the founders of Seed Robotics met with the thesis author to discuss the design considerations proposed in Section 5.1. This discussion informed the development of a new Seed Robotics hand, the RH7D Eros (Figure 5.4). This section presents the design characteristics that make the Eros one of the most socially expressive robotic hands available, as illustrated in Figure 5.5.

The Eros hand features seven actuated degrees of freedom: three in the wrist, two for the fingers, and two for the thumb. As with the Ares, the Eros wrist features both twist rotation (for Orientation Gestures) and forward/backward rotation; however, side-to-side tilting rotation has been added, and the forward/backward rotation has been moved to where the wrist and palm meet, adding strong support for Directional Gestures. A planar palm is very clearly formed between the recommended three fingers and the thumb. To maximize expressivity while minimizing actuation costs, two of the three fingers (the "non-pointer" fingers) are articulated together and one of the fingers (the pointer finger) is articulated separately; this articulated configuration supports the full range of Open-Hand to Closed-Hand poses (Figures 5.5a and 5.5b), and allows the hand to produce Finger-Pointing poses (Figure 5.5c) to support some Directional Gestures, the Place Manipulation Gesture, and the Stop Feedback Gesture. The thumb has been relocated from the front of the palm (Figure 5.3) to the side of the palm (Figure 5.4).
As with the Ares, the thumb can bend; however, the Eros adds a rolling motion enabling the thumb to transition between the front of the palm (for physical manipulation) and the side of the palm (for social expressiveness), supporting a strong Thumbs-Up pose for better Orientation Gestures and for the Confirm Feedback Gesture (Figure 5.5d).

Based on the experimental results (Section 4.2) and the proposed data-driven design guidelines (Section 5.1) of this thesis work, the Eros hand represents a significant improvement over both the Ares (Section 5.2.2) and the Barrett Hand (Section 5.2.1) in terms of social expressiveness; however, as with the Ares, these predicted improvements remain hypotheses to be investigated in future studies. The Eros is currently too small for industrial manipulation tasks, though its 3D-printable design suggests that it could be made larger and stronger for manufacturing applications such as picking and placing workpieces on assembly lines. In an ultimate combination of the physical and the social, the Eros hand has been used to perform human-robot handshakes, representing the synergy between anthropomorphism and engineering for a future in which humans and robots collaborate.

Figure 5.4: The Seed Robotics RH7D Eros hand. (http://www.seedrobotics.com/rh7d-eros-hand.html)

5.3 Next Steps

The design guidelines proposed in Section 5.1 are informed by human-human gestural communication (Section 4.1) and its subsequent implementation for human-robot interactions (Section 4.2); however, these guidelines come from a single robot hand (the Barrett Hand) within a particular scenario (collaborative industrial manufacturing), so the significance and impact of the principles are currently limited to these domains. Thus, further exploration of the space of robot hands, as well as their associated regions, features, and applications, is needed to formalize principles for the design of physically and socially effective robotic hands. The formalization of these guidelines will enable a researcher to quickly evaluate (Section 5.2.1), predict (Section 5.2.2), and improve (Section 5.2.3) a robot hand with respect to the requirements identified for both physical manipulation and social expressiveness in a target application domain.

The final chapter provides conclusions and future directions for this work.

Figure 5.5: A selection of configurations/poses expressed by the Seed Robotics RH7D Eros hand, including (a) Open-Hand, (b) Closed-Hand, (c) Finger-Pointing, and (d) Thumbs-Up. (http://www.seedrobotics.com/rh7d-eros-hand.html)

Chapter 6
Conclusions

In noisy industrial settings, spoken communication is unreliable and even impractical for human-robot coworkers. This work addressed nonverbal gestural expression as a means of reliable human-robot communication. The aims of this work were (1) to study the communicativeness of a common three-finger robotic hand in industrial scenarios, and (2) to investigate human recognition of robot hand gestures in a collaborative human-robot task. The results highlight the efficacy of using a common non-anthropomorphic robotic manipulator, the Barrett Hand, to communicate with human observers (e.g., coworkers) using hand gestures.

Humans generally recognize human hand gestures accurately and confidently in human-human interactions (discussed in Section 2.1); however, human recognition of robot hand gestures has not been adequately explored in human-robot interactions (Section 2.2).
Although typical industrial robotic grippers are non-anthropomorphic and have limited dexterity (Section 2.3), the results demonstrate that such devices are capable of expressing Directional, Orientation, Manipulation, and Feedback gestures (defined in Section 3.1) in a human-recognizable manner. Three studies (Sections 3.1–3.3) were performed to explore and inform the use of such robot hands for human-robot communication in collaborative settings, the results of which are presented in Sections 4.1 and 4.2.

According to the results presented in this work, most gestures are better and more confidently recognized when displayed with a posed robot hand rather than an unposed, closed hand; however, the hand poses used by humans when expressing a gesture are not necessarily ideal for a robot to use when expressing the same gesture. These results suggest principles and guidelines for the mechanical design of expressive robot hands and robot hand gestures in co-present human-robot interactions, including human-robot collaboration; these guidelines are presented in Section 5.1 and applied in Section 5.2.

An overview of the contributions of this work is summarized below in Section 6.1. A discussion of limitations and future work is provided in Section 6.2, followed by concluding remarks in Section 6.3.

6.1 Contributions

This work developed and evaluated a cardinal set of user-generated gestures applicable to industrial scenarios in which the robot must intuitively and effectively provide a set of instructions to a co-located person while collaborating on a shared task. The key contributions of this work are:

• a methodology for designing and implementing task-based communicative gestures to be expressed by a robot in HRI;

• a cardinal set of user-generated, task-based communicative hand gestures and accompanying hand poses for human-robot co-working tasks;

• an evaluation and validation of the identified gesture set with respect to human Recognition Rate and Recognition Confidence within a human-robot collaboration scenario; and

• a set of guidelines for the mechanical design of robot hands.

6.2 Limitations and Future Work

These thesis investigations revealed considerations and limitations in the methods utilized that could be addressed more exhaustively in related or future work.

In Study 1, when identifying the gestures that were utilized in the human-human collaborative assembly scenario, the Up-and-Down and Left-and-Right Directional Gestures (GD) were analyzed together based on the assumption that these gestures were symmetric; however, subsequent analyses determined that these directional gestures were best recognized with different hand poses. Thus, the assumption of symmetry in the analysis of Study 1 might not hold and requires further investigation. Informed by this observation, Study 2 and Study 3 both treated the Up-and-Down and Left-and-Right gestures separately.

In this work, the Closed-Hand (CH) pose was utilised as a baseline for analysing participant Recognition Rates and Recognition Confidence of robot hand gestures. An extension to this work would be to have a person also gesture with the CH pose to serve as a baseline for the human-human study (Study 2), and to evaluate how well people perceive the human CH gesture compared to the robot CH gesture (as with the other gestures in Section 4.2).
The objective of this work is to gain preliminary insights into the transfer of natural human hand gestures to non-anthropomorphic robot hand gestures; however, restrictions on participants in Study 1 (described in its methodology, Section 3.1) might have resulted in human hand gestures that were less natural than what would be observed in an actual assembly scenario. Future work would investigate human hand gestures and speech utterances produced in natural collaborative industrial settings, and model these gestures for non-anthropomorphic robot hands common in these settings.

Sections 5.2.2 and 5.2.3 applied the design guidelines outlined in Section 5.1 to predict how people would recognize robot gestures produced by the Seed Robotics Ares and Eros hands, respectively. These predictions serve as hypotheses to be tested in formal studies. Future work would employ the same procedure performed in Study 3 (Section 3.3), implementing the same hand gestures and configurations/poses on both the Ares and Eros hands. Participant Recognition Rates and Recognition Confidences for each of these robotic hands could be compared to the Barrett Hand, as well as the human-human production of the same gestures, as in Section 4.2. The results of such a study would provide further insights and support for the data-driven design of expressive robotic hands.

In future work, the implemented robot hand gestures could be compared to other communication modalities, such as teaching pendants, touch interfaces, or speech (even though speech might not be an option in the target domains), with respect to common metrics in human-robot collaboration, such as human response time and overall task performance. As noted in the results (Section 4.2), the camera angle used in the study videos might have impacted participant Recognition Rates and Recognition Confidence; further studies will investigate how observer perspective influences the clarity of gestural communication [14]. Finally, the gesture communication system will be integrated into a decision-making mechanism to enable the robot to predict and select the most appropriate communicative action to maximize interpretability by a human co-worker.

6.3 Concluding Remarks

Collaborative robots are transforming the way in which people work in industrial settings, and will continue to disrupt manufacturing for years to come. As such, it is important for robots to understand how to effectively communicate with their human co-workers. This thesis provides the groundwork for these collaborative robots, upon which further work can be built.

Bibliography

[1] Michael Argyle. Bodily Communication. International Universities Press, Inc., United Kingdom, 1988.
[2] Michael Argyle and Robert A. Hinde. Non-verbal communication in human social interaction. In Non-verbal communication, page 443. Cambridge University Press, Oxford, England, 1972.
[3] Paolo Barattini, C. Morand, and N. M. Robertson. A proposed gesture set for the control of industrial collaborative robots. In IEEE International Conference on Robots and Human Interactive Communications (Ro-MAN '12), pages 132–137, Paris, France, September 2012.
[4] Rainer Bischoff and Erwin Prassler. KUKA youBot - a mobile manipulator for research and education. In IEEE International Conference on Robotics and Automation (ICRA '11), pages 1–4, Shanghai, China, May 2011.
[5] Pio Enrico Bitti and Isabella Poggi. Symbolic nonverbal behavior: Talking through gestures. In Robert S. Feldman and Bernard Rimé, editors, Fundamentals of Nonverbal Behavior, pages 433–456. Cambridge University Press, New York, USA, 1991.
[6] Samuel Bouchard. How to choose the right robotic gripper for your application. May 2014.
[7] A. J. Brammer and C. Laroche. Noise and communication: a three-year update. Noise Health, 14(61):281–286, 2012.
[8] Cynthia Breazeal, Cory D. Kidd, Andrea Lockerd Thomaz, Guy Hoffman, and Matt Berlin. Effects of nonverbal communication on efficiency and robustness in human-robot teamwork. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '05), pages 708–713, Edmonton, AB, Canada, August 2005.
[9] Demeng Che and Wenzeng Zhang. GCUA humanoid robotic hand with tendon mechanisms and its upper limb. International Journal of Social Robotics, 3(4):395–404, November 2011.
[10] Israel Cohen, Jacob Benesty, and Sharon Gannot. Speech Processing in Modern Communication: Challenges and Perspectives. Springer Science+Business Media, Technion City, Israel, December 2009.
[11] Momotaz Begum, Crystal Chao, Jinhan Lee, and Andrea L. Thomaz. Simon plays Simon says: The timing of turn-taking in an imitation game. In The 20th IEEE International Symposium on Robot and Human Interactive Communication (2011), pages 235–240, 2011.
[12] Kerstin Dautenhahn. Socially intelligent robots: dimensions of human-robot interaction. Philos Trans R Soc Lond B Biol Sci, 362:679–704, April 2007.
[13] Bella M. DePaulo and Howard S. Friedman. Nonverbal communication. In Daniel T. Gilbert, Susan T. Fiske, and Gardner Lindzey, editors, The Handbook of Social Psychology, pages 3–40. McGraw-Hill, New York, USA, 4th edition, 1998.
[14] Sahba El-Shawa, Noah Kraemer, Sara Sheikholeslami, Ross Mead, and Elizabeth A. Croft. "Is this the real life? Is this just fantasy?": Human proxemic preferences for recognizing robot gestures in physical reality and virtual reality. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '17) (in review), Vancouver, BC, Canada, September 2017.
[15] Tobias Ende, Sami Haddadin, Sven Parusel, Tilo Wüsthoff, Marc Hassenzahl, and Alin Albu-Schäffer. A human-centered approach to robot gesture based communication within collaborative working processes. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '11), pages 3367–3374, San Francisco, CA, September 2011.
[16] Conor Fitzgerald. Developing Baxter. In IEEE International Conference on Technologies for Practical Robot Applications (TePRA '13), pages 1–6, Woburn, MA, USA, April 2013.
[17] Manuel Giuliani, Claus Lenz, Thomas Müller, Markus Rickert, and Alois Knoll. Design principles for safety in human-robot interaction. International Journal of Social Robotics, 2(3):253–274, 2010.
[18] Brian T. Gleeson, Karon E. MacLean, Amir Haddadi, Elizabeth A. Croft, and Javier A. Alcazar. Gestures for industry: Intuitive human-robot communication from human observation. In 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI '13), pages 349–356, Tokyo, Japan, March 2013.
[19] Jean Ann Graham and Michael Argyle. A cross-cultural study of the communication of extra-verbal meaning by gesture. International Journal of Psychology, 10:57–67, 1975.
[20] Amir Haddadi, Elizabeth A. Croft, Brian T. Gleeson, Karon E. MacLean, and Javier A. Alcazar. Analysis of task-based gestures in human-robot interaction. In IEEE International Conference on Robotics and Automation (ICRA '13), Karlsruhe, Baden-Württemberg, Germany, May 2013.
[21] Sami Haddadin, Alin Albu-Schäffer, Alessandro De Luca, and Gerd Hirzinger. Collision detection and reaction: A contribution to safe physical human-robot interaction. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '08), pages 3356–3363, September 2008.
[22] Sami Haddadin, Michael Suppa, Stefan Fuchs, Tim Bodenmüller, Alin Albu-Schäffer, and Gerd Hirzinger. Towards the robotic co-worker. Robotics Research, pages 261–282, 2011.
[23] Simon Harrison. The production line as a context for low metaphoricity. In Metaphor in Specialist Discourse, volume 4, pages 131–161. John Benjamins Publishing Company, 2015.
[24] Justin W. Hart, Sara Sheikholeslami, and Elizabeth A. Croft. Developing robot assistants with communicative cues for safe, fluent HRI. In J. Scholz, H. Abbass, and D. Reid, editors, Foundations of Trusted Autonomy. Springer, Berlin, Germany, 2016 (pre-print).
[25] Clint Heyer. Human-robot interaction and future industrial robotics applications. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '10), pages 4749–4754, October 2010.
[26] Robert M. Krauss, Yihsiu Chen, and Purnima Chawla. Nonverbal behavior and nonverbal communication: What do conversational hand gestures tell us? Advances in Experimental Social Psychology, 28:389–450, 1996.
[27] J. Krüger, Terje K. Lien, and Alexander W. Verl. Cooperation of human and machines in assembly lines. CIRP Annals - Manufacturing Technology, 58(2):628–646, 2009.
[28] Ross Mead and Maja J. Matarić. Perceptual models of human-robot proxemics. Experimental Robotics, Springer Tracts in Advanced Robotics, 109:261–276, 2016.
[29] Gareth J. Monkman, Stefan Hesse, Ralf Steinmann, and Henrik Schunk. Robot Grippers. John Wiley & Sons, Weinheim, Germany, 2007.
[30] AJung Moon, Chris A. C. Parker, Elizabeth A. Croft, and Machiel H. F. Van der Loos. Did you see it hesitate? Empirically grounded design of hesitation trajectories for collaborative robots. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '11), pages 1994–1999, San Francisco, CA, USA, September 2011.
[31] AJung Moon, Chris A. C. Parker, Elizabeth A. Croft, and H. F. Machiel Van der Loos. Design and impact of hesitation gestures during human-robot resource conflicts. Journal of Human-Robot Interaction, 2:18–40, 2013.
[32] Bruno Siciliano and Oussama Khatib. Springer Handbook of Robotics. Springer Science+Business Media, Berlin/Heidelberg, Germany, 2008.
[33] Laurel D. Riek, Tal-Chen Rabinowitch, Paul Bremner, Anthony G. Pipe, Mike Fraser, and Peter Robinson. Cooperative gestures: Effective signaling for humanoid robots. In The 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI '10), pages 61–68, Osaka, Japan, March 2010.
[34] Margaret Gwendoline Riseborough. Physiographic gestures as decoding facilitators: Three experiments exploring a neglected facet of communication. Journal of Nonverbal Behavior, 5:172–183, 1981.
[35] William T. Rogers. The contribution of kinesic illustrators towards the comprehension of verbal behaviour within utterances. Human Communication Research, 5:54–62, 1978.
[36] Alison Sander and Meldon Wolfgang. The rise of robotics. Technical report, 2014.
[37] Allison Sauppé and Bilge Mutlu. Robot deictics: How gesture and context shape referential communication. In Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction (HRI '14), pages 342–349, New York, NY, USA, 2014. ACM.
[38] Alan C. Schultz and Michael A. Goodrich. Human-robot interaction: A survey. Foundations and Trends in Human-Computer Interaction, 1(3):203–275, 2007.
[39] Sara Sheikholeslami, AJung Moon, and Elizabeth A. Croft. Cooperative gestures for industry: Exploring the efficacy of robot hand configurations in expression of instructional gestures for human-robot interaction. International Journal of Robotics Research.
[40] Sara Sheikholeslami, AJung Moon, and Elizabeth A. Croft. Exploring the effect of robot hand configurations in directional gestures for human-robot interaction. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '15), Hamburg, Germany, September–October 2015.
[41] Joel Sherzer. Verbal and nonverbal deixis: the pointed lip gesture among the San Blas Cuna. Language in Society, 2:117–131, 1973.
[42] Elaine Short, Justin W. Hart, Michelle Vu, and Brian Scassellati. No fair!! An interaction with a cheating robot. In 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI '10), pages 219–226, Osaka, Japan, March 2010.
[43] Kevin Tai, Abdul-Rahman El-Sayed, Mohammadali Shahriari, Mohammad Biglarbegian, and Shohel Mahmud. State of the art robotic grippers and applications. Robotics, 5(2), 2016.
[44] Germano Veiga and Ricardo Araújo. Programming by demonstration in the coworker scenario for SMEs. Industrial Robot: An International Journal, 36(1):73–83, January 2009.

Appendix A
Study 1 Instructions

Study 1, presented in Section 3.1, had two phases:

A.1 Study1-Phase1 Instructions

In Phase 1 of Study 1 (Section 3.1), participants were asked to use hand gestures to instruct a human confederate, referred to as the "worker", to assemble six car door parts on a car door. Figure A.1 shows the proper location and orientation of each of the six parts.

As shown in Figure A.2, the participant stood in front of the car door (at a distance of 2 ft), and the experimenter stood to the right of the car door (at a distance of 1 ft). The car door parts were placed on a table between the experimenter and the human volunteer. This setup allowed the experimenter and the human volunteer to easily access the car door as well as the car door parts.

To provoke a wider range of natural and intuitive gestures in each round of the experiment, the worker would intentionally, and as naturally as possible, make mistakes when assembling the parts on the car door. While assembling each part, the worker would:

• Part 1:
– Hold part 1 below and to the left of its final location on the door, and slightly tilted to the right; and
– Rotate part 1 more than needed (to have it tilted to the left).
• Part 2:
– Take part 5 instead of part 2 from the table; and
– Attach part 2 with a wrong orientation (90° clockwise).
• Part 3:
– Attach part 3 to a wrong spot on the car door; and
– Attach part 3 to its correct spot on the car door but with a wrong orientation (180°).
• Part 4:
– Hold part 4 below its final location on the door and slightly tilted to the left; and
– Rotate part 4 more than needed (to have it tilted to the right).
• Part 5:
– Attach part 5 with a wrong orientation (90° counter-clockwise); and
– Rotate part 5 180° clockwise (i.e., reattach part 5 in a 90° clockwise orientation with respect to its correct orientation).
• Part 6:
– Hold part 6 above and to the left of its final location on the door; and
– Attach part 6 with a wrong orientation (180°).
Figure A.1: Proper location and orientation of each of the six parts on the car door.

Figure A.2: Experimental setup for human-participants pilot experiment (Study1-Phase1).

A.2 Study1-Phase2 Instructions

In this phase, a new picture of the assembled vehicle door, containing changes in the orientation or location of three of the six parts now assembled on the vehicle door, was given to the participants (Figure A.3). Participants were asked to direct the worker to rearrange the parts on the door to achieve the new assembly arrangement.

Similar to Study 1, Phase 1, the vehicle door was in front of the human volunteer (at a distance of 2 ft), and the experimenter stood to the right of the vehicle door (at a distance of 1 ft), facing towards the human volunteer (Figure A.4).

Figure A.3: Highlighted changes in the orientation or location of three of the six parts assembled on the vehicle door.

Figure A.4: Experimental setup for human-participants pilot experiment (Study1-Phase2).

Appendix B
Advertisements, Online Surveys, and Consent Forms

This appendix outlines the details of the online surveys used for Studies 2 and 3. Consent forms and advertisement materials used for the studies are also presented in this appendix. This appendix is divided into two sections: Section B.1 presents the consent form and the online survey used for Study 2; and Section B.2 presents the consent form and the online survey used for Study 3.

B.1 Study 2 Advertisements, Online Surveys, and Consent Forms

In Study 2, two versions of the same online survey were used, each containing a different pseudo-random order of video clips, each of a person exhibiting one of the hand gestures identified in Study 1 to direct a worker in an assembly task analogous to Study 1 (Chapter 3, Section 3.1). Both versions of the survey used a single consent form, presented in Figure B.1. The study was advertised via online media tools, including Twitter, Facebook, and the Collaborative Advanced Robotics and Intelligent Systems (CARIS) Laboratory website, and through distribution of advertisements to university students. The advertisement materials are presented in Figure B.2 and Figure B.3.

Each survey contained 14 pages, each page containing a video and the same three survey questions discussed in Study 2 (Chapter 3, Section 3.2). A sample page is shown in Figure B.4.
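As a rough illustration of how two such pseudo-randomized survey versions can be generated reproducibly, the short Python sketch below uses a seeded shuffle over the 14 clips; the clip filenames and seed values are hypothetical and are not the materials or tooling used in the actual studies.

import random

# Hypothetical identifiers for the 14 gesture video clips shown in the survey.
CLIPS = ["gesture_clip_%02d.mp4" % i for i in range(1, 15)]

def make_survey_version(seed):
    """Return one pseudo-random ordering of the clips, reproducible via the seed."""
    ordering = list(CLIPS)
    random.Random(seed).shuffle(ordering)  # seeded shuffle, so the order can be regenerated
    return ordering

# Two survey versions with different (assumed) seeds, as in the two-version design.
version_a = make_survey_version(seed=1)
version_b = make_survey_version(seed=2)
assert sorted(version_a) == sorted(version_b) == sorted(CLIPS)  # same clips, different order

Fixing the seed makes each version reproducible, so the same clip ordering can be regenerated later when matching responses to stimuli.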
Figure B.1: Screen capture of the consent form used for the human-human interaction online surveys conducted in Study 2 (Chapter 3, Section 3.2).
Figure B.2: Contents of the online advertisement used to recruit participants for Study 2. The study was advertised on the CARIS Laboratory website. Links to this advertisement were distributed via other online media tools, including Twitter and Facebook.

Figure B.3: Contents of the paper advertisement used to recruit participants for Study 2. The advertisement was distributed to university students.

Figure B.4: An example of one of the 14 pages of the Study 2 online survey. All pages of the survey contained the same questions in the same order; however, the video content of each page was randomly selected.

B.2 Study 3 Advertisements, Online Surveys, and Consent Forms

In Study 3, two versions of the same online survey were used, each containing a different pseudo-random order of video clips, each of the robot exhibiting one of the hand gestures identified in Study 1 to direct a worker in an assembly task analogous to Study 1 (Chapter 3, Section 3.1). Both versions of the survey used a single consent form.
This consent form is presented in Figure B.5. The study was advertised via online media tools, including Twitter, Facebook, and the Collaborative Advanced Robotics and Intelligent Systems (CARIS) Laboratory website, and through distribution of advertisements to university students. The advertisement materials are presented in Figure B.6 and Figure B.7.

Each survey contained 14 pages, each page containing a video and the same three survey questions discussed in Study 3 (Chapter 3, Section 3.3). A sample page is shown in Figure B.8.
Figure B.5: Screen capture of the consent form used for the human-robot interaction online surveys conducted in Study 3 (Chapter 3, Section 3.3).

Figure B.6: Contents of the online advertisement used to recruit participants for Study 3. The study was advertised on the CARIS Laboratory website. Links to this advertisement were distributed via other online media tools, including Twitter and Facebook.
Figure B.7: Contents of the paper advertisement used to recruit participants for Study 3. The advertisement was distributed to university students.

Figure B.8: An example of one of the 14 pages of the Study 3 online survey. All pages of the survey contained the same questions in the same order; however, the video content of each page was randomly selected.

Appendix C
Participants' confidence in recognizing human gestures compared to robot expressions of the same gestures

The following section provides the results of participant confidence in recognizing human gestures compared to robot expressions of the same gestures. While this analysis is beyond the scope and objectives of this thesis, this section is included for completeness.

Independent-samples t-tests were applied to measures of Recognition Confidence across the robot and human expressions of each hand configuration for all gestures (Table C.1, and Figures C.1 for the Directional Gestures, C.2 for the Orientational Gestures, C.3 for the Manipulation Gestures, and C.4 for the Feedback Gestures).

Most of the gestures were recognized with higher Recognition Confidence when performed by a person than when performed by the robot. The exceptions to these results are:

1. the Finger-Pointing (FP) configuration of the Right Directional Gesture (t(43.99) = 1.97, p = 0.06) (Figure C.1),
2. both the FP and Half Open-Hand (HOH) configurations of the 180° Orientational Gesture (t(57) = −0.03, p = 0.98 and t(74) = 0.40, p = 0.69, respectively) (Figure C.2), and
3. both the FP and Open-Hand (OH) configurations of the Install Manipulation Gesture (t(36) = 0.14, p = 0.89 and t(62) = 0.36, p = 0.72, respectively) (Figure C.3),

though these differences were not statistically significant.

Gestures that were recognized with significantly higher confidence when performed by a person than when performed by the robot include:

1. both the FP and OH configurations of the Up Directional Gesture (t(50) = −3.36, p < 0.01 and t(65) = −2.04, p < 0.05, respectively),
2. the FP configuration of the 90° Orientational Gesture (t(65) = −2.91, p < 0.01),
3. both the OH and HOH configurations of the Remove Manipulation Gesture (t(62) = −4.21, p < 0.001 and t(63) = −5.02, p < 0.001, respectively),
4. the OH configuration of the PickUp Manipulation Gesture (t(116) = −2.00, p < 0.05),
5. the V-Sign (VS) configuration of the Swap Manipulation Gesture (t(34) = −3.12, p < 0.01),
6. the Thumbs-Up (TU) configuration of the Confirm Feedback Gesture (t(116) = −11.17, p < 0.001), and
7. both the FP and OH configurations of the Stop Feedback Gesture (t(66) = −2.43, p < 0.05 and t(66.21) = −2.21, p < 0.05, respectively).
Table C.1: Results of independent-samples t-tests on the measures of Recognition Confidence across the robot and human expressions of each hand configuration for all gestures. Note that the FP configuration of the Right Directional Gesture and the OH configuration of the Stop Feedback Gesture failed the assumption of equality of variances; therefore, the reported results for these two gestures do not assume equal variances.

Directional Gestures, GD
  Gesture, g ∈ GD   Hand Pose   t                  p
  Up                FP          t(50) = −3.36      < 0.01
  Up                OH          t(65) = −2.04      < 0.05
  Down              FP          t(54) = −1.22      0.23
  Down              OH          t(60) = −0.27      0.78
  Left              FP          t(34) = −0.60      0.55
  Left              OH          t(41) = −1.57      0.12
  Right             FP          t(43.99) = 1.97    0.06
  Right             OH          t(42) = −1.86      0.07

Orientational Gestures, GO
  Gesture, g ∈ GO   Hand Pose   t                  p
  < 45°             HOH         t(115) = −1.50     0.14
  90°               FP          t(65) = −2.91      < 0.01
  90°               HOH         t(71) = −0.42      0.67
  180°              FP          t(57) = −0.03      0.98
  180°              HOH         t(74) = 0.40       0.69

Manipulation Gestures, GM
  Gesture, g ∈ GM   Hand Pose   t                  p
  Install           FP          t(36) = 0.14       0.89
  Install           OH          t(62) = 0.36       0.72
  Remove            OH          t(62) = −4.21      < 0.001
  Remove            HOH         t(63) = −5.02      < 0.001
  PickUp            OH          t(116) = −2.00     < 0.05
  Place             FP          t(118) = −1.31     0.19
  Swap              FP          t(40) = −0.71      0.48
  Swap              VS          t(34) = −3.12      < 0.01

Feedback Gestures, GF
  Gesture, g ∈ GF   Hand Pose   t                  p
  Confirm           TU          t(116) = −11.17    < 0.001
  Stop              FP          t(66) = −2.43      < 0.05
  Stop              OH          t(66.21) = −2.21   < 0.05

Figure C.1: Measures of Recognition Confidence across the robot and human expressions of each hand configuration for Directional Gestures, GD.

Figure C.2: Measures of Recognition Confidence across the robot and human expressions of each hand configuration for Orientational Gestures, GO.

Figure C.3: Measures of Recognition Confidence across the robot and human expressions of each hand configuration for Manipulation Gestures, GM.

Figure C.4: Measures of Recognition Confidence across the robot and human expressions of each hand configuration for Feedback Gestures, GF.
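To reproduce the style of analysis summarized in Table C.1, each gesture and hand-pose pair can be compared with an independent-samples t-test, switching to Welch's unequal-variance form when Levene's test rejects equality of variances (as was required for the Right/FP and Stop/OH cases). The minimal Python sketch below assumes the per-participant Recognition Confidence ratings for the human and robot videos are available as two numeric arrays; the variable names and example ratings are hypothetical, and SciPy is used here only as one convenient implementation, not necessarily the software used in this work.

import numpy as np
from scipy import stats

def compare_confidence(human_ratings, robot_ratings, alpha=0.05):
    """Independent-samples t-test of Recognition Confidence (robot vs. human).

    Uses Student's t-test when Levene's test does not reject equal variances,
    and Welch's t-test (equal_var=False) otherwise.
    """
    human = np.asarray(human_ratings, dtype=float)
    robot = np.asarray(robot_ratings, dtype=float)

    # Levene's test for equality of variances.
    _, p_levene = stats.levene(human, robot)
    equal_var = p_levene >= alpha

    # Order (robot, human): a negative t then means the human expression was
    # rated with higher confidence, matching the sign convention in Table C.1.
    t_stat, p_value = stats.ttest_ind(robot, human, equal_var=equal_var)
    return t_stat, p_value, equal_var

# Hypothetical 7-point confidence ratings for one gesture/hand-pose pair.
human_conf = [6, 7, 5, 6, 7, 6, 5, 7, 6, 6]
robot_conf = [4, 5, 5, 3, 6, 4, 5, 4, 5, 4]
t, p, equal_variances = compare_confidence(human_conf, robot_conf)
print("t = %.2f, p = %.3f, equal variances assumed: %s" % (t, p, equal_variances))

Under the Welch form, the degrees of freedom follow the Welch-Satterthwaite approximation and are generally non-integer, which is why values such as t(43.99) and t(66.21) appear in Table C.1.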
