Design and Evaluation of Nonverbal Motion Cues for Human-Robot Spatial Interaction  by  Nicholas James Hetherington  B.A.Sc., Queen’s University at Kingston, 2017  A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF  MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES (Mechanical Engineering)  THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)  September 2020   © Nicholas James Hetherington, 2020   ii  The following individuals certify that they have read, and recommend to the Faculty of Graduate and Postdoctoral Studies for acceptance, a thesis entitled:  Design and Evaluation of Nonverbal Motion Cues for Human-Robot Spatial Interaction  submitted by Nicholas James Hetherington in partial fulfillment of the requirements for the degree of Master of Applied Science in Mechanical Engineering  Examining Committee: Dr. H.F. Machiel Van der Loos, Mechanical Engineering, University of British Columbia Supervisor Dr. Elizabeth A. Croft, Mechanical and Aerospace Engineering, Electrical and Computer Systems Engineering, Monash University Supervisory Committee Member Dr. Karon E. MacLean, Computer Science, University of British Columbia Supervisory Committee Member Dr. Ian M. Mitchell, Computer Science, University of British Columbia Supervisory Committee Member             iii  Abstract Mobile robots have recently appeared in public spaces such as shopping malls, airports, and urban sidewalks. These robots are designed with human-aware motion planning capabilities, but they are not designed to communicate with pedestrians. Pedestrians encounter these robots without any training or prior understanding of the robots’ behaviour, which can cause discomfort, confusion, and delay social acceptance. The two studies described in this thesis evaluate robot communication cues designed for human-robot spatial interactions in public spaces. By communicating robot behaviour to pedestrians, these cues aim to increase the social acceptability of mobile robots. Both studies use videos of the robot cues and online surveys to collect data from human participants. Study 1 evaluates two different modalities for communicating a robot’s movement to pedestrians: flashing lights and light projection. Previous reviewed literature had not directly compared these two modalities of motion legibility cues. Study 1 also compares using these cues to communicate path information or goal information, which are contributing factors to legible robot motion. Previous reviewed literature had not compared path and goal information for motion legibility cues using visual modalities. Results show that light projection is a more socially acceptable modality than flashing lights, and that these cues are most socially acceptable when they communicate both path and goal information. Study 2 evaluates different communication cues used by a robot to yield to a pedestrian at a doorway. The experiment compared decelerating, retreating, and rotating motions. These motions had not previously been directly compared in this context. Results show that a robot retreating behaviour was the most socially acceptable cue. The results of this work help guide the development of mobile robots for public spaces. iv  Lay Summary For most people, walking in public spaces is simple and easy. But what if there were a robot in the scene? Self-driving robots are appearing in public spaces for things like delivery and security. 
These robots can cause discomfort and confusion for pedestrians, which prevents social acceptance. This research explores ways to design these robots for social acceptability. In one study, a robot uses a light projector or flashing lights to communicate its movement to pedestrians. The results show the light projector is more socially acceptable. Another study explores an interaction between a robot and pedestrians at a doorway. The robot uses different types of body language to tell the pedestrian they should go first. The results show that the robot’s backing away from the door is the most socially acceptable behaviour. This research will help guide the design of self-driving robots that operate in public spaces.   v  Preface I, with support from my supervisors, identified and designed the research program described in this thesis. Study 1 (Chapter 3) was independent work done by me. The UBC Ethics ID for Study 1 is H10-00503-A070.  Study 2 (Chapter 4) was a collaboration between me, Ryan Lee, and Marlene Schlorf. Ryan Lee was a volunteer during the final year of his B.A.Sc. in Mechanical Engineering, Mechatronics option, at the University of British Columbia. Marlene Schlorf was a visiting researcher (remote) from the Technical University of Munich during her M.Sc. in Ergonomics - Human Factors Engineering. I conceived of the idea for Study 2 and closely supervised the prototyping and pilot testing work. Ryan Lee and I conducted a literature review together. Ryan also programmed the robot yielding cues and collected the pilot testing data. All three people designed the survey instructions and questions; I constructed the survey in the online survey tool and managed the online data collection. Marlene Schlorf conducted the data analysis. The writing in Chapter 4 is entirely my own. The UBC Ethics ID for Study 2 is H10-00503-A071.   vi  Table of Contents Abstract .......................................................................................................................................... iii Lay Summary ................................................................................................................................. iv Preface............................................................................................................................................. v Table of Contents ........................................................................................................................... vi List of Tables .................................................................................................................................. x List of Figures ............................................................................................................................... xii List of Supplementary Materials ................................................................................................... xv Glossary ....................................................................................................................................... xvi Acknowledgements ..................................................................................................................... xvii  Introduction ................................................................................................................... 1 1.1 Thesis Outline ............................................................................................................. 
4  Background and Motivation .......................................................................................... 5 2.1 Explicit Robot Motion Legibility Cues....................................................................... 5 2.1.1 Modalities for Explicit Motion Legibility Cues...................................................... 6 2.1.2 Motion Legibility Factors ....................................................................................... 9 2.2 Implicit Robot Yielding Cues ................................................................................... 10 2.3 Research Objectives .................................................................................................. 13 2.3.1 Explicit Motion Legibility Cues ........................................................................... 14 2.3.2 Implicit Robot Yielding Cues ............................................................................... 14  Study 1 – Robot Motion Legibility Cues .................................................................... 15 3.1 Methods and Materials .............................................................................................. 16 3.1.1 Robot Research Platform ...................................................................................... 16 vii  3.1.2 Design of Motion Legibility Cues ........................................................................ 17 3.1.3 Experimental Scenario .......................................................................................... 20 3.1.4 Survey Design and Experimental Procedure ........................................................ 22 3.1.4.1 Social Acceptability Likert Scale and Legibility Likert Items ..................... 22 3.1.4.2 Follow-Up Questions .................................................................................... 24 3.1.5 Participants ............................................................................................................ 26 3.1.6 Data Analysis ........................................................................................................ 28 3.2 Results ....................................................................................................................... 30 3.2.1 Social Acceptability of Motion Legibility Cues ................................................... 30 3.2.1.1 H1.1: Arrows are a more socially acceptable cue type than lights. .............. 33 3.2.1.1.1 Preference for Type of Motion Legibility Cues ...................................... 34 3.2.1.2 H1.2: Path&goal mode is a more socially acceptable cue mode than either path or goal mode.......................................................................................................... 35 3.2.1.2.1 Preference for Mode of Motion Legibility Cues ..................................... 37 3.2.2 Comprehension of Motion Legibility Cue Modes ................................................ 38 3.2.2.1 Identifiability of Motion Legibility Cue Modes ........................................... 42 3.2.3 Social Acceptability of Motion Legibility Cues Compared to No Cues .............. 43 3.2.4 Effect of Different Robot Movement Scenarios on Motion Legibility Cues........ 44 3.3 Discussion ................................................................................................................. 45 3.3.1 Limitations ............................................................................................................ 
46 3.3.1.1 Data Analysis Limitations............................................................................. 48 3.3.1.2 Practical Limitations ..................................................................................... 48   viii   Study 2 – Robot Yielding Cues ................................................................................... 49 4.1 Methods and Materials .............................................................................................. 51 4.1.1 Design of Robot Yielding Cues ............................................................................ 51 4.1.2 Experimental Scenario .......................................................................................... 54 4.1.3 Survey Design and Experimental Procedure ........................................................ 56 4.1.3.1 Cue Comprehensibility and Trustworthiness Likert Scales .......................... 58 4.1.4 Participants ............................................................................................................ 59 4.1.5 Data Analysis ........................................................................................................ 62 4.2 Results ....................................................................................................................... 63 4.2.1 H2.1 The Nudge cue is the most comprehensible yielding cue. ........................... 63 4.2.2 H2.2 The Nudge cue is the most trustworthy yielding cue. .................................. 64 4.2.3 Correlation of Comprehensibility and Trustworthiness in Robot Yielding Cues . 65 4.2.4 Comfort, Likeability, and Social Compatibility of Robot Yielding Cues ............ 66 4.2.5 Interpretation of Robot Yielding Cues .................................................................. 67 4.3 Discussion ................................................................................................................. 68 4.3.1 Limitations ............................................................................................................ 70  Conclusion................................................................................................................... 71 5.1 Study 1 – Explicit Motion Legibility Cues ............................................................... 71 5.2 Study 2 – Implicit Yielding Cues.............................................................................. 73 5.3 Future Work .............................................................................................................. 74 Bibliography ..................................................................................................................................77   ix  Appendices .....................................................................................................................................81 Appendix A Implementation Details ............................................................................................ 81     A.1 Robot Hardware ................................................................................................................. 81     A.2 Robot Software .................................................................................................................. 85         Software Implementation for Study 1 – Robot Motion Legibility Cues .............................. 86         Software Implementation for Study 2 – Robot Yielding Cues ............................................. 
90 Appendix B Data Analysis Details ............................................................................................... 91     B.1 Data Analysis Details for Study 1 – Robot Motion Legibility Cues .................................. 91     B.2 Data Analysis Details for Study 2 – Robot Yielding Cues ................................................ 95 Appendix C Surveys, Consent Letters, and Advertisements ...................................................... 100     C.1 Survey, Consent Letter, and Advertisement for Study 1 – Motion Legibility Cues ........ 101     C.2 Survey, Consent Letter, and Advertisement for Study 2 - Robot Yielding Cues ............ 114   x  List of Tables Table 3.1: Results of the ANOVA on social acceptability of robot motion legibility cues. ........ 31 Table 3.2: Results of the ART ANOVA on Statement 1.3: path comprehension. ....................... 38 Table 3.3: Results of the ART ANOVA on Statement 1.4: goal mode comprehension............... 38 Table 4.1: Cue parameters tested in the in-person pilot tests. ...................................................... 53 Table A.1: Parts list for the auxiliary components added to the mobile base. .............................. 82 Table B.1: Mauchly test for sphericity of Social Acceptability Score. ........................................ 91 Table B.2: Shapiro-Wilks test for normality of Social Acceptability Score................................. 91 Table B.3: Levene test for homogeneity of variance of Social Acceptability Score. ................... 91 Table B.4: Main effect of cue type on Social Acceptability Score. .............................................. 92 Table B.5: Main effect of cue mode on Social Acceptability Score.. ........................................... 92 Table B.6: Cue type and cue mode interaction on Social Acceptability Score. ........................... 92 Table B.7: Cue type and cue mode interaction on Social Acceptability Score.. .......................... 92 Table B.8: Cue type and scenario interaction on Social Acceptability Score. ............................. 92 Table B.9: Cue type and scenario interaction on Social Acceptability Score. ............................. 92 Table B.10: Three-way interaction on Social Acceptability Score. Comparing cue types........... 93 Table B.11: Three-way interaction on Social Acceptability Score. Comparing cue modes......... 93 Table B.12: Main effect of cue mode on path comprehension (Statement 1.3). .......................... 94 Table B.13: Main effect of cue mode on goal comprehension (Statement 1.4). .......................... 94 Table B.14: Cue mode identifiability (Question 3). ..................................................................... 94 Table B.15: Normality tests for the comprehensibility data. ........................................................ 95 Table B.36: Normality tests for the trustworthiness data. ............................................................ 95 Table B.37: Sphericity tests for the comprehensibility data. ........................................................ 96 xi  Table B.38: Sphericity tests for the trustworthiness data. ............................................................ 96 Table B.39: Pairwise comparisons for the comprehensibility data. .............................................. 97 Table B.40: Pairwise comparisons for the trustworthiness data. .................................................. 
98 Table B.41: Correlation between comprehensibility and trustworthiness of each yielding cue. .. 99   xii  List of Figures Figure 2.1: The unexplored human-robot spatial head-on interaction. ......................................... 16 Figure 3.2: Diagram of the robot’s two motion legibility cue types: arrows and lights. .............. 19 Figure 3.3: The robot’s motion legibility cues.............................................................................. 19 Figure 3.4: Image of the experimentation space. .......................................................................... 20 Figure 3.5: Age distribution of participants in the motion legibility cues study. ......................... 27 Figure 3.6: Gender distribution of participants in the motion legibility cues study. .................... 27 Figure 3.7: Results of the ANOVA on social acceptability score.. .............................................. 31 Figure 3.8: Results of the ANOVA on social acceptability score. ............................................... 32 Figure 3.9: Results of the ANOVA on social acceptability score. ............................................... 32 Figure 3.10: Boxplots for the follow-up question on cue type preference ................................... 34 Figure 3.11: Boxplots for the follow-up questions on cue mode preference ................................ 37 Figure 3.12: Violin plots showing the main effect of cue mode comprehension. ........................ 39 Figure 3.13: Violin plots showing the effect of cue type and scenario on comprehension.. ........ 41 Figure 3.14: Violin plots showing the effect of cue type and scenario on identifiability. ............ 42 Figure 3.15: Results of the ANOVA on social acceptability score. ............................................. 43 Figure 4.1: Diagram of the interaction explored in Study 2. ........................................................ 49 Figure 4.2: The robot’s five different yielding cues ..................................................................... 52 Figure 4.3: Diagram of the simulated human-robot spatial interaction in Study 2....................... 54 Figure 4.4: The videos used a zooming animation to simulate the viewer walking. .................... 55 Figure 4.5 : Age distribution of participants in the robot yielding cues study.............................. 60 Figure 4.6: Gender distribution of participants in the robot yielding cues study. ........................ 60 Figure 4.7: Responses to the question “I have previous experience with robots” ........................ 61 xiii  Figure 4.8: Comprehensibility results for the robot yielding cues. ............................................... 63 Figure 4.9: Trustworthiness results for the robot yielding cues. ................................................... 64 Figure 4.10: Violin plots showing the responses to Statements 3.10 - 3.12. ................................ 67 Figure A.1: Schematic of the auxiliary components added to the mobile base. ........................... 81 Figure A.2: Annotated image showing the auxiliary components added to the robot. ................. 82 Figure A.3: Relevant sections of the PowerBot data sheet.. ......................................................... 83 Figure A.4: Diagram of the projection area of the light projector used in Study 1. ..................... 84 Figure A.5: High-level software diagram illustrating ROS control of the PowerBot. ................. 
85 Figure A.6: Graph of the ROS network when the motion legibility cues are used. ..................... 87 Figure A.7: Graph of the ROS network showing the “Wizard of Oz” set up. .............................. 88 Figure A.8: General diagram of the CommBot node. ................................................................... 89 Figure C.1: CAPTCHA and consent letter. ................................................................................ 101 Figure C.2: Demographics questions. ......................................................................................... 102 Figure C.3: Robotics experience demographic question.. .......................................................... 103 Figure C.4: Instructions. ............................................................................................................. 104 Figure C.5: Video page with attention check question. .............................................................. 105 Figure C.6: Question 1 – Likert statements. Repeated with Figure C.5. .................................... 106 Figure C.7:  Question 2 – cue type preference. ........................................................................... 107 Figure C.8: Question 3 – cue mode clarity.. ............................................................................... 108 Figure C.9: Question 4 – cue mode preference.. ......................................................................... 109 Figure C.10: Last page. A unique ID was inserted as shown. .................................................... 110 Figure C.11: Page 1 of the consent letter for Study 1. ................................................................ 111 Figure C.12: Page 2 of the consent letter for Study 1. ................................................................ 112 xiv  Figure C.13: Study 1 recruitment advertisement on Amazon Mechanical Turk. ....................... 113 Figure C.14: CAPTCHA and consent letter. .............................................................................. 114 Figure C.15: Instructions. ........................................................................................................... 115 Figure C.16: Familiarization page. ............................................................................................. 116 Figure C.17: Video page with Question 1 – attention check and Question 2 – interpretation.. . 117 Figure C.18: Question 3 – Likert statements. ............................................................................. 118 Figure C.19: Demographic questions. ........................................................................................ 119 Figure C.20: Last page. ............................................................................................................... 120 Figure C.21: Page 1 of the consent letter for Study 2. ................................................................ 121 Figure C.22: Page 2 of the consent letter for Study 2. ................................................................ 122 Figure C.23: Study 2 recruitment advertisement on Amazon Mechanical Turk. ....................... 123    xv  List of Supplementary Materials 1. Videos of the robot used for data collection in Study 1 1.1. Arrows_path _straight.mov 1.2. Arrows_path _turn.mov 1.3. Arrows_goal_straight.mov 1.4. Arrows_goal_turn.mov 1.5. Arrows_path&goal_straight.mov 1.6. Arrows_path&goal_turn.mov 1.7. Lights_goal_straight.mov 1.8. Lights_goal_turn.mov 1.9. Lights_path _straight.mov 1.10. 
Lights_path _turn.mov 1.11. Lights_path&goal _straight.mov 1.12. Lights_path&goal _turn.mov 1.13. None_turn.mov 1.14. None_straight.mov  2. Videos of the robot used for data collection in Study 2 2.1. Stop.mp4 2.2. Decelerate.mp4 2.3. Retreat.mp4 2.4. Tilt.mp4 2.5. Nudge.mp4 2.6. Familiarization.mp4  3. Code used for Study 1 and Study 2    xvi  Glossary ANOVA Analysis of variance ART  Aligned rank transform HRI  Human-robot interaction HRSI  Human-robot spatial interaction LED  Light emitting diode LiDAR Light Detection and Ranging ROS  Robot Operating System UBC  University of British Columbia   xvii  Acknowledgements I would like to thank my supervisors, Dr. Elizabeth Croft and Dr. Machiel Van der Loos. Elizabeth, thank you for your engaging supervision despite our being oceans apart. Mike, thank you for creating such a welcoming environment in the CARIS Lab. and for supporting me in so many different avenues. Thank you both for asking the hard questions and for challenging me to look at my research from multiple perspectives.  I owe thanks to the members of the CARIS Lab. for their support and guidance throughout my degree. Thank you to Wesley Chan and Camilo Perez for their guidance and technical support, especially early in my prototyping work. Thank you to Mahsa Khalili and Leia Shum for their astute feedback on my presentations and writing. Tim Wan provided feedback on my thesis. Ryan Lee volunteered countless hours to contribute to Study 2 in this thesis, as well as early mixed-reality prototyping work that was never put to use. Marlene Schlorf visited CARIS remotely to help design the evaluation methods for Study 2. Katherine Williams helped design in-person evaluation methods for Study 1, which were cancelled due to COVID-19. Tiger Zuo and Osman Baalbaki volunteered their time to help with early prototyping of the cues in Study 1.  Lastly, thank you to my friends and family for their ongoing love and support. Thank you in particular to my father, who claims the record for the most typos found in my thesis.  I received financial support from the Natural Sciences and Engineering Council of Canada and the Government of British Columbia.  Cha Gheill.  1   Introduction Since their introduction to assembly lines in the 1960s, robots have typically been used in highly controlled environments. In the last half century, manufacturing and assembly tasks have relied on robot manipulators functioning in work cells separated from human operators. The mining and agriculture industries have deployed autonomous vehicles to move material in harsh, semi-structured environments with remote human operators monitoring their activity under distinct safety protocols.  Warehousing operations such as Amazon have embraced the use of mobile robots transporting modularized pallets, called pods, in lights-out facilities where workers are kept safely away from the robot traffic.  More recently, mobile robots are increasingly appearing in less-structured public spaces such as airports, shopping malls, and urban sidewalks. The International Federation of Robotics predicted a 323% increase in the use of mobile robots for logistics from 2018 to 2021 [1] and McKinsey and Co. predicted that mobile robots will complete 80% of last-mile deliveries in the future [2]. These robots provide economic and social benefits to their owners and end users but can be a disruption to others in society when not designed appropriately. 
This thesis focuses on the design of mobile robots for human-robot spatial interaction (HRSI) in public spaces. The “mobile robots” in this thesis are comprised of a wheeled base with a variety of sensors for obstacle detection and motion planning, but no anthropomorphic features. Unlike automated road vehicles, these mobile robots operate in public spaces with pedestrians. I focussed on these nonanthropomorphic mobile robots because they are at the forefront of robotics development and are primed to benefit from human-robot interaction research.  2  For most pedestrians, human-human interaction while walking in public spaces is natural and fluid.  However, this interaction relies on an understanding of how others move [3]. Pedestrians inherently engage in reciprocal collision avoidance, in which they assume that other pedestrians will cooperate in mutual avoidance. This is the basis for the popular Optimal Reciprocal Collision Avoidance algorithm for multiple robots [4]. Furthermore, natural crowd movement relies on pedestrians’ subtle body language cues [5]. Mobile robots are new, unfamiliar agents in public spaces that do not easily fit into existing public interactions. Mobile robots for deliveries and logistics are designed for the tasks at either end of their journey through a public space, such as interacting with restaurant owners and customers, or picking up and dropping off airport luggage. These robots are designed to move safely and efficiently through the environment, but often do not communicate with pedestrians along the way. As a result, more and more pedestrians are encountering spatial interactions with these robots, without any training or prior understanding of their behaviour.  Most research in HRSI contexts focuses on human-aware navigation and motion planning. Many research works design robot motion that adheres to social conventions in public spaces. For example, Chen et al. proposed Socially Acceptable Collision Avoidance through Deep Reinforcement Learning as a method for “inducing socially aware behaviours in a reinforcement learning framework” [6]. Trautman et al. proposed Interacting Gaussian Processes as “the first algorithm that explicitly models human cooperative collision avoidance for navigation in dense human crowds” [7]. In their survey, Kruse et al. presented three goals of human-aware navigation systems: human comfort, robot naturalness, and robot sociability [8]. However, most works in human-aware navigation do not design for robot legibility. That is, the pedestrians do not 3  necessarily understand the robot’s behaviour, which can lead to discomfort [9] and jeopardize safety [10]. In addition to human awareness, legible robot behaviour is a key component of HRSI [11]. Lasota et al. wrote that “human agents’ ability to predict the actions and movements of a robot is as essential as the ability of a robot to predict the behaviour of humans.” [12]. There are a variety of definitions for the legibility of robot behaviour in the literature. This thesis uses the broad definition from Lichtenthäler et al. that “robot behaviour is legible, if a human can infer the next actions, goals and intentions of the robot with high accuracy and confidence” [11]. Robot behaviour legibility cues can help achieve Kruse et al.’s three goals of comfort, naturalness, and sociability.  
This thesis asks the question, “How should mobile robots communicate their behaviour to pedestrians in public spaces?” The contributions of this thesis, which is comprised of two studies, extend the body of work in robot behavioural legibility cues for HRSI. This thesis develops two types of robot behaviour legibility cues. The first are motion legibility cues, which communicate a mobile robot’s planned motion to pedestrians. The second are yielding cues, which communicate that a mobile robot is yielding to a pedestrian in a spatial interaction. The robot-to-human cues in this research are designed to communicate aspects of the robot’s motion to pedestrians. This motion comprises the entirety of the spatial interaction and should therefore be communicated clearly.    4  1.1 Thesis Outline Chapter 2 reviews the literature to inform the development of robot behaviour legibility cues. The review distinguishes between explicit motion legibility cues and implicit robot yielding cues and identifies gaps in the existing literature. Chapter 3 presents Study 1 on the development of flashing lights and light projection as explicit motion legibility cues. Study 1 investigates the social acceptability of these two modalities and compares the cues’ design for path-predictability and goal-predictability. Chapter 4 presents Study 2 on the development of robot yielding cues for the interaction between a pedestrian and a mobile robot at a doorway. Study 2 evaluates five different yielding cues using components of social acceptability. Both Study 1 and Study 2 conducted online surveys using videos of the robot’s cues. Chapter 5 presents the conclusions of this thesis, their implications for the field of HRSI, and suggests some directions for future work.  5   Background and Motivation Chapter 1 presented a general introduction to trends in robotics and defined the human-robot spatial interaction (HRSI) with pedestrians that is the context for this thesis. This chapter reviews the literature to inform the development of robot behaviour legibility cues. These cues are motivated and defined in Chapter 1.  Lasota et al. differentiate between implicit and explicit communication cues designed to increase the legibility of mobile robot behaviour. With explicit cues “the robot directly communicates its planned actions and motions through visual and auditory cues”. With implicit cues robots “convey intent through subtle cues embedded in the ways they perform their motions” [12]. Section 2.1 reviews the literature on explicit motion legibility cues for mobile robots, which are the topic of Study 1 (Chapter 3). Section 2.2 reviews the literature on implicit robot cues for yielding to pedestrians, which are the topic of Study 2 (Chapter 4). Section 2.3 specifies the research objectives for each study.  2.1 Explicit Robot Motion Legibility Cues This section reviews the mobile robot literature on explicit motion legibility cues, which robots use to communicate their planned actions and motions to human interactors. Section 2.1.1 reviews different modalities used for explicit motion legibility cues. Section 2.1.2 reviews the literature on path-predictability and goal-predictability, which are important characteristics of motion legibility cues.   6  2.1.1 Modalities for Explicit Motion Legibility Cues In short, sound has been used as a holistic behavioural legibility cue, but not as an effective motion legibility cue. 
The most popular modalities for explicit motion legibility cues are visual, including anthropomorphic gaze, display screens, flashing lights, and light projection. Most studies on explicit motion legibility cues with mobile robots involve a head-on or path-crossing walking interaction involving different robot cue conditions. Most evaluate the cues with a survey after each trial. Some analyze the subject’s walking trajectory to quantitatively analyze reaction time, hesitation, and cooperation in the spatial interaction.  St. Clair and Mataric used verbal feedback to coordinate a human and mobile robot in a collaborative task [13]. Thomas et al. used beeping and coloured LEDs to indicate a state change in their mobile robot task planner for HRSI at doorways [14]. These studies show sound is a good modality for infrequently communicating high-level task information. Motion legibility cues, however, require frequent communication of lower level motion information and are therefore difficult to design using sound.  Anthropomorphic gaze has been studied as a communication cue for humanoid robots in collaborative tasks (e.g. Moon et al. [15]), and for mobile robots with anthropomorphic heads. Fischer et al. designed two gazing methods for an omnidirectional service robot with a face [16]. One method is an explicit motion legibility cue in which the robot orients its face along its planned path. In the other, the robot orients its face towards the pedestrian to show attention, despite not moving directly towards the pedestrian. Results showed that participants were more at ease with the attention cue than the motion legibility cue. 7  May et al. compared anthropomorphic gaze to flashing lights as motion legibility cues in a head-on interaction between a walking human and a mobile robot [17]. In their experiments the robot moved towards the human and then left, right, or continued straight while either turning its head or flashing one of two lights to communicate its destination. They found the flashing lights to be more communicative and make the participants more comfortable. The literature shows that robot gaze is an effective communication tool for human-robot task collaboration but does not show gaze to be an effective motion legibility cue for mobile robots.  Shrestha et al. compared arrows on a display screen to flashing lights as motion legibility cues in a head-on interaction between a walking human and mobile robot [18]. The flashing lights indicated the robot would move to the left or right to avoid the human, and the arrows indicated the human should move to one side or the other. In two other conditions these cues were combined with the sound of a motor vehicle turning signal. Subjects rated the flashing lights cue higher than the display screen cue in terms of comfort, naturalness, performance, and predictability. The addition of sound had no positive effect on either cue.  Multiple works have investigated robot motion legibility cues using light projection systems. After their work described above, Shrestha et al. developed a motion legibility cue using a light projector [19]. The robot projected a solid red arrow onto the ground in front of its base to indicate its intended direction of travel. The authors compared a no-cue control condition to the projection cue, as well as to the projection cue with the sound of a motor vehicle turning signal. Each condition was tested in three path-crossing human-robot interactions: head-on, perpendicular, and at 45 degrees. 
Survey and video analysis results showed the projection cue 8  increased the robot’s legibility and subjects’ comfort compared to no communication. As in their previous study, the addition of sound had no positive effect on the projection system cue. Chadalavada et al. designed a projection-based motion legibility cue for a robotic forklift [20]. The robot projected both its planned path and a boundary representing the space it would occupy. The robot and the subjects moved towards each other, and the subjects were asked to continue straight until they wanted to step aside. Subjects rated the projection cue significantly higher than the no-cue control condition in terms of communication, reliability, predictability, transparency, and situational awareness. Watanabe et al. designed a projection-based motion legibility cue for a robotic power wheelchair and evaluated it in a head-on interaction with a walking pedestrian and a passenger sitting in the wheelchair [21]. The projector displayed the wheelchair’s planned path for both the pedestrian and the passenger. Both the pedestrians and the passengers found the projection cue to increase their comfort and the robot’s motion legibility. Motion tracking showed the pedestrians’ walking trajectories were smoother when seeing the projection cue.  In summary, the reviewed literature showed that flashing lights and projection systems are communicative motion legibility cues. The other methods reviewed above - sound, robot gaze, and display screens - have not been shown to be effective motion legibility cues. The results from Shrestha et al. [18] and May et al. [17] show that flashing lights are an effective motion legibility cue. Flashing lights are also the most common explicit robot behaviour legibility cue used in industry. LED strips are featured on the urban delivery robots Postmates Serve (Postmates Inc., San Francisco, CA, USA) and Amazon Scout (Amazon.com Inc., Seattle, WA, USA) and on the Otto warehouse transport robots (Clearpath Robotics Inc., Kitchener, ON, Canada) among others. 9  The results from Chadalavada et al. [20], Shrestha et al. [19], and Watanabe et al. [21] show that projection systems can be used as effective motion legibility cues. However, the literature reviewed here has not directly compared flashing lights to projection systems as a motion legibility cue. Flashing lights and projection systems have different capabilities. Both can use different colours and frequencies to communicate. Flashing lights are limited to on/off states in their fixed positions, but projection systems can create many different shapes in different positions. Furthermore, flashing lights exist as turning signals on motor vehicles. However, projection systems do not have a familiar equivalent in society. These differences create an interesting and worthwhile comparison of two promising types of communication cues.  2.1.2 Motion Legibility Factors Multiple works in human-robot spatial interaction differentiate between two mobile robot motion legibility factors: path-predictability, in which the human understands the robot’s next immediate movement, and goal-predictability, in which the human understands the robot’s next intermediate destination in the world. The following works use different terminology, but the terms are equivalent to path-predictability and goal-predictability. In the context of manipulated robot arms, Dragan et al. 
define “legible” robot motion as that which allows the observer to infer the robot’s goal pose from observing its trajectory, and “predictable” robot motion as that which allows the observer to infer the robot’s trajectory once they know the robot’s goal pose [22]. Dragan et al.’s “legibility” is equivalent to goal-predictability and their “predictability” is equivalent to path-predictability. Zhang et al. designed a method for computing the “explicability” and “predictability” of a robot’s task plan [23]. Zhang et al.’s “explicability” is equivalent to goal-predictability and their “predictability” is equivalent to path-predictability.

Dragan et al. distinguish between path- and goal-predictability as “fundamentally different and often contradictory properties of motion”. Lichtenthäler and Kirsch analyzed a path-crossing human-robot spatial interaction to test Dragan et al.’s claim in the context of mobile robotics [24]. They tested path-predictability by asking participants to predict the robot’s future direction, and goal-predictability by asking participants to predict the robot’s goal pose. Lichtenthäler and Kirsch use the term “goal-predictability”, but use “trajectory-predictability” rather than “path-predictability”. They found a correlation between path- and goal-predictability, which differs from Dragan et al.’s findings.

2.2 Implicit Robot Yielding Cues
The previous section reviewed methods for robots to explicitly communicate their planned motion to pedestrians. A commonly considered human-robot spatial interaction in public spaces occurs at a doorway or similar bottleneck in a structured environment. In this situation, typically, only one agent can proceed through the narrow space, but pedestrians may be uncertain whether they or the robot should go first. This section reviews movements a mobile robot can use to communicate its yielding to pedestrians in this context.

Multiple works investigate human-robot head-on interactions in hallways or corridors in which the robot must pass beside a human in a relatively narrow space. In a proxemics study, Lauckner et al. established minimal frontal and lateral distances for a mobile robot approaching a person in a corridor [25]. Participants teleoperated the mobile robot and drove it towards themselves until they felt uncomfortable. Dondrup et al. showed that lower mobile robot velocities within the pedestrians’ personal space resulted in less disruption to their movement [26]. The corridor context in these studies is slightly different from the narrower doorway context described above, but the results can inform the development of implicit robot yielding cues with appropriate human-robot proxemics and robot velocities.

A similar and common context in the autonomous road vehicle (AV) literature is yielding to a pedestrian at a crosswalk. Ackermann et al. showed that smooth and early deceleration decreased the time pedestrians took to decide whether an autonomous vehicle was yielding to them [27]. In this context the AV literature often focuses on ensuring the AV will not yield to the pedestrian (e.g. Gupta et al. [28]). Similarly, Thomas et al. developed a mobile robot with “assertive” behaviour, which uses high acceleration to show pedestrians that the robot will not yield to them at a doorway in a head-on interaction [14]. These works demonstrate that robot deceleration can be used as a yielding cue for mobile robots.
Retreating or diverting is a common yielding cue for mobile robots in the literature. Kaiser et al. compared communication cues for a robot to yield to a pedestrian at a bottleneck in two scenarios: one where the robot and the human approached the bottleneck side by side, and one where they approached from opposite directions [29]. In each scenario they compared two cues to a control condition in which the robot proceeded through the bottleneck without yielding. One yielding cue was for the robot to simply stop before the bottleneck; the other was for the robot to stop and move to the side. In both scenarios they found both cues to increase the legibility of the robot’s behaviour when compared to the control, and that subjects were more resolute in their perception of the robot in the move aside condition than in the stop condition. In addition to their 12  assertive motion planner, Thomas et al. developed a yielding cue in which the robot moved aside before the doorway [14]. In a spatial interaction between a human and a mobile robot looking at the same object, Akita et al. developed a joint utility approach to divert the robot to a position optimal for both parties. They showed that their approach resulted in fewer collisions than a human-agnostic approach in which the robot prioritizes its own position [30]. Retreating has also been used as an implicit robot yielding cue in literature on human-robot collaboration with articulated robot arms. Moon et al. first showed that an articulated robot arm can use human-inspired hesitation to communicate yielding in a collaborative picking task [31]. In a collaborative pick and place task, Reinhardt et al. showed that a stop-then-retreat behaviour increased trust compared to a dominant (no stop) behaviour [32].  Moon et al.’s research on handovers from a robot to humans suggests gaze is an effective robot yielding cue [15]. In this work, a semi-humanoid robot uses gaze to communicate it is the human’s turn to receive an object from the robot. The results of their study showed that participants took less time to reach for the object when the robot used the gazing cue.  This section has reviewed literature on implicit robot behaviour cues in head-on and side-by-side human-robot spatial interactions, both with mobile robots and robot manipulators. The literature suggests robot deceleration, human-robot proxemics, robot retreating or diversion, and robot gaze are all effective implicit yielding cues. However, the literature does not directly compare robot deceleration, robot retreating, and robot gaze as implicit yielding cues. This comparison is interesting because these categories of cues involve different aspects of the robot’s behaviour. Decelerating and retreating are linear motions, whereas robot gaze involves rotation. 13  Furthermore, no studies reviewed here have investigated an interaction in which both agents approach a doorway from the sides, instead of head-on. Figure 2.1 illustrates this unexplored interaction.   Figure 2.1: The unexplored human-robot spatial head-on interaction in which both agents wish to enter a doorway on the side of a corridor. The dotted lines represent the agents’ desired paths.  2.3 Research Objectives Chapter 1 motivated the design of robot behaviour legibility cues for human-robot spatial interaction with mobile robots. This thesis explores two types of robot behaviour legibility cues: explicit motion legibility cues and implicit yielding cues. 
The sections below identify specific research objectives for each type of cue. Study 1 (Chapter 3) explores explicit motion legibility cues. Study 2 (Chapter 4) explores implicit robot yielding cues.  14  2.3.1 Explicit Motion Legibility Cues The studies reviewed in Section 2.1.1 showed that flashing lights and light projection are effective modalities for explicit motion legibility cues. However, the reviewed literature has not directly compared flashing lights and light projection as motion legibility cues. The first research objective is to make the direct comparison between flashing lights and light projection as modalities for explicit robot motion legibility cues.  The studies reviewed in Section 2.1.2 all differentiate between motion legibility factors in the context of implicit robot motion legibility cues. This review did not identify any studies that investigated explicit motion legibility cues for mobile robots with the differentiation of path- and goal-predictability. Recall that implicit cues use the robot’s motion to communicate and explicit cues use other modalities. Furthermore, the literature disagrees about whether path- and goal-predictability are contradictory factors of robot motion legibility. The second research objective is to explore these two factors in the context of explicit motion legibility cues for mobile robots.  2.3.2 Implicit Robot Yielding Cues The studies reviewed in Section 2.2 have shown robot deceleration, robot retreating, and robot gaze to be effective robot yielding cues. However, the literature reviewed does not directly compare these three approaches. The third research objective is to directly compare deceleration, retreating, and gaze as implicit yielding cues for mobile robots. The fourth research objective is to compare these yielding cues in a head-on human-robot spatial interaction with a doorway to the side of both agents, as shown in Figure 2.1.   15   Study 1 – Robot Motion Legibility Cues This chapter presents a study on visual cues a mobile robot can use in pedestrian spaces. These cues explicitly communicate the robot’s motion to pedestrians to increase the legibility of the robot’s behaviour. The robot described in Section 3.1.1 served as the platform for prototyping two different cue types: (1) arrows projected onto the ground in front of the robot, and (2) flashing lights mounted on the robot. Each cue type operated in one of the three following cue modes: (1) path mode, (2) goal mode, or (3) path&goal mode. Section 3.1.2 describes these cue modes. I conducted a user study to test the following hypotheses: • H1.1: Arrows are a more socially acceptable cue type than lights. • H1.2: Path&goal mode is a more socially acceptable cue mode than either path mode or goal mode. In addition to testing these hypotheses, this study explored three other research questions: 1. Do participants comprehend the intended information from the different cue modes? 2. Does the robot’s movement scenario affect the social acceptability of different cue types or cue modes? 3. Is the robot more socially acceptable with motion legibility cues than without? I used videos of the robot moving and displaying its motion legibility cues in an online survey to collect data from 229 participants. The rest of this chapter presents the methods and materials, the results of the data analysis, and a discussion thereof.  
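Before the methods are presented in detail, the short sketch below enumerates the resulting experimental conditions: two cue types crossed with three cue modes, plus a no-cue control, in each of the two robot movement scenarios. It is purely illustrative, using the chapter's terminology rather than anything from the study materials.

```python
from itertools import product

# Experimental factors in Study 1 (terminology from the chapter introduction).
CUE_TYPES = ["arrows", "lights"]            # projected arrows vs. flashing lights
CUE_MODES = ["path", "goal", "path&goal"]   # what information the cue conveys
SCENARIOS = ["straight", "turn"]            # the robot's movement scenario

def study1_conditions():
    """Enumerate the cue conditions, plus a no-cue control per scenario."""
    conditions = []
    for scenario in SCENARIOS:
        conditions.append({"type": None, "mode": None, "scenario": scenario})  # control
        for cue_type, cue_mode in product(CUE_TYPES, CUE_MODES):
            conditions.append({"type": cue_type, "mode": cue_mode, "scenario": scenario})
    return conditions

if __name__ == "__main__":
    conditions = study1_conditions()
    print(len(conditions), "video conditions")  # 2 x (1 + 2*3) = 14
    for condition in conditions:
        print(condition)
```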
3.1 Methods and Materials

3.1.1 Robot Research Platform
This section describes the robot research platform used in this study, as well as in Study 2 (Chapter 4). Figure 3.1 shows the mobile robot used in this study, a differential-drive PowerBot (Adept Mobile Robots, Amherst, NH, USA). A light projector and two 3 W LEDs generated the motion legibility cues in this study. The mobile base is 83 cm long, 63 cm wide, and 49 cm tall. With the added tower, the robot is 173 cm tall.

Figure 3.1: The mobile robot research platform used in Study 1 and Study 2.

I used the Robot Operating System (ROS) (Open Source Robotics Foundation, Mountain View, CA, USA) middleware to interface with the PowerBot’s motion controller and to implement the communication cues for both studies in this thesis. In this study, I used the open-source ROS Navigation Stack for motion planning and navigation, and to animate the motion legibility cues described below. Appendix A contains more implementation details.

3.1.2 Design of Motion Legibility Cues
This section presents the design and functionality of the motion legibility cues used in this study, illustrated in Figure 3.2 and Figure 3.3. I prototyped two cue types: green arrows projected onto the ground in front of the robot, and orange flashing lights mounted on the robot. Each cue type operates in one of three cue modes: path mode, goal mode, or path&goal mode. The cues are animated using data from the robot’s navigation system. In path mode the robot references a point Pp on its planned path 1 m ahead to communicate its next immediate movement. In goal mode, the robot references its next two waypoints to communicate its intention to move to a future position in the environment. In goal mode the cues only animate if the robot is within 1.5 m of its next waypoint and facing less than 45° away from it. Path&goal mode is a sequential combination of the two: the cue operates in goal mode if the robot is within 1.5 m of its next waypoint and facing less than 45° away from it; otherwise the cue operates in path mode. I designed path mode for path-predictability and goal mode for goal-predictability, the two motion legibility factors described in Section 2.1.2.

The lights cue type flashed an orange LED on one side of the robot or the other. In path mode, the angle θp from the robot to the point Pp on its path determined which light to flash; the cue treated values of θp less than 20° as straight, in which case neither light flashed. In goal mode, the angle θG between the next two waypoints in the robot’s frame determined which light to flash. The arrows cue type projected a green arrow onto the ground in front of the robot. In path mode, a solid arrow was drawn from the robot to point Pp. In goal mode, a flashing dashed arrow was drawn at angle θG. The solid arrow was designed to communicate immediately forthcoming motion; the flashing dashed arrow was designed to communicate future motion. The arrows were a fixed length of 30 cm and bounded to the projection area shown in Figure A.4.

In path mode the lights cue type used a higher flashing frequency to represent a sharper angle of the next immediate movement: the lights flashed at a frequency proportional to θp, at 0.1 Hz per degree. In goal mode both cue types used a higher flashing frequency to represent closer proximity to the next waypoint: both flashed at a frequency proportional to the distance d from the robot to the next waypoint, at 5 Hz per metre.
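As a concrete illustration of the logic just described, the sketch below reproduces the mode-selection and flash-frequency rules in Python. It is a minimal reconstruction from the description above, not the thesis code (which is summarized in Appendix A); the function names, the parameter names, and the sign convention for angles (positive to the robot's left) are assumptions.

```python
# Parameters from Section 3.1.2; names are illustrative, not from the thesis code.
GOAL_DIST_THRESH = 1.5     # m: goal mode engages within this distance of the next waypoint
GOAL_ANGLE_THRESH = 45.0   # deg: ...and when facing less than this far away from it
STRAIGHT_DEADBAND = 20.0   # deg: path angles below this are treated as "straight"
HZ_PER_DEG = 0.1           # path mode: flash frequency per degree of path angle
HZ_PER_M = 5.0             # goal mode: flash frequency per metre to the next waypoint
F_MIN, F_MAX = 0.5, 5.0    # Hz: visible flashing range

def clamp_freq(f):
    return max(F_MIN, min(F_MAX, f))

def select_mode(cue_mode, dist_to_waypoint, angle_to_waypoint_deg):
    """Resolve path&goal mode into whichever sub-mode currently applies."""
    if cue_mode == "path":
        return "path"
    near_goal = (dist_to_waypoint < GOAL_DIST_THRESH
                 and abs(angle_to_waypoint_deg) < GOAL_ANGLE_THRESH)
    if cue_mode == "goal":
        return "goal" if near_goal else None      # goal cues only animate near a waypoint
    if cue_mode == "path&goal":
        return "goal" if near_goal else "path"
    raise ValueError(cue_mode)

def lights_command(active_mode, theta_p_deg, theta_g_deg, dist_to_waypoint):
    """Which LED to flash (left/right/None) and at what frequency, for the lights cue."""
    if active_mode == "path":
        if abs(theta_p_deg) < STRAIGHT_DEADBAND:
            return None, 0.0                      # treated as straight: neither light flashes
        side = "left" if theta_p_deg > 0 else "right"
        return side, clamp_freq(HZ_PER_DEG * abs(theta_p_deg))
    if active_mode == "goal":
        side = "left" if theta_g_deg > 0 else "right"
        return side, clamp_freq(HZ_PER_M * dist_to_waypoint)
    return None, 0.0
```

For example, in path&goal mode at 2 m from the next waypoint with θp = 30°, the robot is outside the goal-mode window, so the cue falls back to path mode and the left light flashes at 3 Hz.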
All flashing frequencies were limited to 0.5-5 Hz in order to be visible. I set these and other parameters through a  series of informal pilot studies.  I implemented the motion legibility cues in software using ROS and used the open-source ROS Navigation Stack for high-level control of the robot. The Navigation Stack moved the robot to the waypoints used by the cues in goal mode. The global path planned by the Navigation Stack animated the cues in path mode. Appendix A contains more implementation details.  19   Figure 3.2: Diagram of the robot’s two motion legibility cue types: arrows and lights. The robot uses its planned path to animate cues in path mode, and its next two waypoints to animate cues in goal mode.   Figure 3.3: The robot’s motion legibility cues: (1) lights indicating a left turn; (2) arrows in goal mode indicating a future left turn; (3) arrows in path mode indicating the next immediate movement will be straight.   20  3.1.3 Experimental Scenario To facilitate data collection, videos captured the robot’s motion legibility cues as it moved through the environment shown in Figure 3.4. The indoor environment contained borders to create a visual passageway and an intersection. It also contained an obstacle directly in front of the robot’s starting position, which forced the robot to move either left or right before reaching the intersection. The floor panels were part of the room.  Figure 3.4: Image of the experimentation space used to capture video of the robot’s motion legibility cues. The robot first moved around the obstacle to the junction between positions A, B, and C, then to either A or C. Each floor tile measures 2’ x 2’.   21  In each video the robot first moved around the obstacle to the junction in the middle of the environment, then either straight ahead or to its left, which created two different movement scenarios. When it reached the junction, the robot was stationary for 5 seconds before moving to its next waypoint. During these 5 seconds the final waypoint, or the path thereto, animated the robot’s motion legibility cues. This pause gave the viewer time to see the updated cue without the robot moving. A “Wizard of Oz”* approach allowed for consistent movement across the videos: the velocity control data were recorded for both movement scenarios and used to move the robot during video capture. The ROS Navigation Stack and cue animation programs still ran online during each video to animate the cues. Implementation details for this behaviour are described in Appendix A.  This study captured videos of each cue type (lights, arrows) operating in each cue mode (path, goal, path&goal) in each of the two movement scenarios. As control conditions, this study also captured videos of each movement scenario with no motion legibility cues, for a total of 14 videos. The videos are enclosed as supplementary material with this thesis. The files are named in the following pattern: “type_mode_scenario.mov”.    * “The term Wizard of Oz … [describes] a methodology wherein an experimenter (the ‘Wizard’), in a laboratory setting, simulates the behavior of a theoretical intelligent computer application (often by going into another room and intercepting all communications between participant and system).” [50] 22  3.1.4 Survey Design and Experimental Procedure I designed a digital survey using Qualtrics software (Provo, UT, USA) to collect data online using the videos described above. 
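The "Wizard of Oz" playback used for video capture, described above, replays pre-recorded base velocity commands so that the robot moves identically across takes while the Navigation Stack and cue programs run live. A minimal rospy sketch of such a record-and-replay node is shown below; the topic name, playback rate, and file format are assumptions rather than details from Appendix A.

```python
#!/usr/bin/env python
"""Record and replay base velocity commands ("Wizard of Oz" playback sketch)."""
import pickle
import sys

import rospy
from geometry_msgs.msg import Twist

CMD_TOPIC = "/cmd_vel"   # assumed velocity-command topic for the mobile base
RATE_HZ = 20             # assumed playback rate

def record(outfile):
    """Save every velocity command published while the robot is driven normally."""
    samples = []
    def cb(msg):
        samples.append((rospy.get_time(), msg.linear.x, msg.angular.z))
    rospy.Subscriber(CMD_TOPIC, Twist, cb)
    rospy.loginfo("Recording %s; Ctrl-C to stop.", CMD_TOPIC)
    rospy.spin()
    with open(outfile, "wb") as f:
        pickle.dump(samples, f)

def replay(infile):
    """Re-publish the recorded commands so each video take moves identically."""
    with open(infile, "rb") as f:
        samples = pickle.load(f)
    pub = rospy.Publisher(CMD_TOPIC, Twist, queue_size=1)
    rate = rospy.Rate(RATE_HZ)
    start_wall, start_rec = rospy.get_time(), samples[0][0]
    for t, vx, wz in samples:
        # Wait until the same relative time as in the recording.
        while not rospy.is_shutdown() and rospy.get_time() - start_wall < t - start_rec:
            rate.sleep()
        msg = Twist()
        msg.linear.x = vx
        msg.angular.z = wz
        pub.publish(msg)

if __name__ == "__main__":
    mode, path = sys.argv[1], sys.argv[2]   # e.g. "record run_turn.pkl"
    rospy.init_node("woz_velocity_playback")
    record(path) if mode == "record" else replay(path)
```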
After giving consent and answering demographic questions, participants saw instructions explaining the situational context of the videos. The instructions included the image in Figure 3.4. The position B label increased the number of possible final destinations to avoid participants learning where the robot would move. Participants then viewed a control video with no motion legibility cues in both movement scenarios (left or straight). Participants were then grouped by movement scenario. Each group was shown 6 videos in randomized order – one for each cue type (lights, arrows) operating in each cue mode (path, goal, path&goal).  3.1.4.1 Social Acceptability Likert Scale and Legibility Likert Items After each video, participants responded to the following statements on a 5-point Likert scale: 1.1. The robot's communication to me was clear.          1.2. The robot moved as I expected. 1.3. The robot's communication showed me its next movement. 1.4. As the robot approached the junction in the middle of the scene, its final destination was clear. 1.5. The robot's overall behaviour was reasonable. 1.6. The robot’s communication would be socially compatible in a pedestrian’s environment. 1.7. The robot's communication made me feel comfortable. 1.8. I liked the robot.   23  I proposed a Social Acceptability Scale comprised of the Likert-type Statements 1.1, 1.2, and 1.5-1.8. The sub-constructs of social acceptability are clarity (1.1), met-expectations (1.2), reasonable behaviour (1.5), social compatibility (1.6), comfort (1.7), and likeability (1.8). The clarity and met-expectations items are contributing factors to legible robot behaviour. Some studies reviewed in Section 2.1 used the word “legibility” directly in their survey (e.g. Shrestha et al. [19]). I chose to use Statements 1.1 and 1.2 to capture legibility in simpler terms. Comfort and sociability are two of Kruse et al.’s goals of human-aware navigation [8]. Statement 1.7 assesses comfort directly, which is common in the studies reviewed in Section 2.1 (e.g. May et al. [17]). Reasonable behaviour (1.5) and likeability (1.8) are sub-constructs of sociability. Statement 1.6 directly assesses sociability in more explicit terms. I conducted informal pilot tests with 8 participants to verify that their understanding of the statements matched my own.  I calculated the Social Acceptability Score as the mean response to the statements described in Section 3.1.4.1. Joshi et al. support taking the mean of a set of Likert-type items to create interval data [33]. I analyzed the internal reliability of the Social Acceptability Scale with Cronbach’s α [34] and McDonald’s ωT [35] for each combination of cue type, cue mode, and robot movement scenario. The results showed a minimum α of 0.86 and a minimum ωT of 0.90, which are above the traditional minimum of 0.7.  Separate from the Social Acceptability Scale, I designed Statements 1.3 and 1.4 to directly assess comprehension of path-predictability and goal-predictability cue modes. Path- and goal-predictability are two factors of motion legibility. With cues in path mode or path&goal mode, a 24  positive response to Statement 1.3 indicates participants comprehended the intended path information. With cues in goal mode or path&goal mode, a positive response to Statement 1.4 indicates participants comprehended the intended goal information.  3.1.4.2 Follow-Up Questions At the end of the survey, participants answered follow-up questions:  2. 
The robot used either orange lights or green arrows to communicate its motion to you. Did you prefer when the robot used the orange lights or the green arrows? Move the slider to indicate your preference.    25  Questions 3 and 4 were repeated once for each cue type – lights and arrows. In Question 3, participants responded to the statements on a 5-point Likert scale from “Strongly Disagree” to “Strongly Agree”. 3. The robot used the orange lights or the green arrows to communicate different things: either its next movement, its final destination, or both. When the robot was using the < “orange lights” or “green arrows” >*, to what extent do you agree with the following statements? 3.1. It was clear when the robot was communicating its next movement. 3.2. It was clear when the robot was communicating its final destination. 3.3. It was clear when the robot was communicating both its next movement and its final destination. 4. When the robot was using the < “orange lights” or “green arrows” >*, did you prefer when it communicated its next movement, its final destination, or both? Move the slider to indicate your preference.  In Questions 2 and 4, participants were forced to move the slider from its default middle position but were able to place it back in the middle.      * In one repetition “orange lights” was inserted into the question text. In the other, “green arrows” was inserted. 26  3.1.5 Participants The survey was conducted online using the Qualtrics survey tool. Participants were recruited using Amazon Mechanical Turk (Amazon.com Inc., Seattle, WA, USA) for a reward of US$ 2.50. Some participants were also recruited using Facebook and Reddit. Section C.1 includes the survey used in this study, along with copies of the Amazon Mechanical Turk advertisement and consent letter. The survey also included an attention check question after each video, which asked participants on which side of the screen the robot finished and thereby excluded participants who answered incorrectly. The excluded participants were not paid.  A total of 289 participants responded to the online survey. This study recruited 255 participants using Amazon Mechanical Turk, and 34 were recruited elsewhere. Of the 289 participants, 60 were excluded due to incorrect responses to the attention check question described above, leaving 229 participants. Of the 229 remaining participants, 65 said they had experience with robotics, but none were excluded because none had experience with robots similar to the one used in this study.  Figure 3.5 and Figure 3.6 illustrate the distribution of the participants’ self-reported ages and genders, respectively.   27   Figure 3.5: Age distribution of participants in the motion legibility cues study.   Figure 3.6: Gender distribution of participants in the motion legibility cues study. 28  3.1.6 Data Analysis I analyzed the Social Acceptability Score data to test hypotheses H1.1 and H1.2 and to answer the research question, “Is the robot more socially acceptable with motion legibility cues than without?” This analysis also helps answer the research question, “Does the robot’s movement scenario affect the social acceptability of different cue types or cue modes?”  I analyzed the Social Acceptability Score data with a 2x3x2 mixed-model analysis of variance (ANOVA) test. The within-subjects factors were cue type (arrows, lights) and cue mode (path, goal, path&goal), and the between-subjects factor was robot movement scenario (turn, straight). Joshi et al. 
support taking the mean of a set of Likert-type items to create interval data and then performing parametric tests [33].  The assumptions of a mixed-model ANOVA test are multivariate normality, homogeneity of variance, homogeneity of variance-covariance, and sphericity [36]. While the Social Acceptability Score data violated all four assumptions, the test is either robust to violations thereof or the results were correctable. The test is robust to violations of multivariate normality if there are more than 30 subjects in each combination of factor levels [37], which is the case for the data in this study. The test is robust to violations of both homogeneity of variance and homogeneity of variance-covariance if the ratio between the largest and smallest group sizes is less than 1.5 [38], which is the case for these data. I corrected the p values for the violation of sphericity using the Greenhouse-Geisser method, which is more conservative than the alternative Huynh-Feldt method [39]. Section B.1 includes the results of the assumption testing. 29  The cue mode comprehension Statements 1.3 and 1.4 and follow-up Question 3 on cue mode identifiability all used individual Likert-type statements, so the data were on an ordinal scale. I used the nonparametric Aligned Rank Transform (ART) ANOVA test to analyze the data from each of these three questions [40]. Each of the three ART ANOVAs used a 2x3x2 mixed-model with within-subjects factors of cue type (arrows, lights) and cue mode (path, goal, path&goal) and a between-subjects factor of robot movement scenario (turn, straight).  The no cue control condition does not apply to the cue mode factor, so the four tests described above excluded the control data. I used a 3x2 mixed ANOVA on Social Acceptability Score to analyze the control data. The within-subjects factor was cue type (arrows, lights, none) and the between-subjects factor was robot movement scenario (turn, straight). This analysis helped answer the research question, “Is the robot more socially acceptable with motion legibility cues than without?” Because I performed two ANOVAs on the Social Acceptability Score data, I used the Bonferroni method to adjust the significance level to 𝛼 = .025 [41].  To test the hypotheses and answer the research questions, planned pairwise comparisons followed each test described above. I did not perform post-hoc analyses for the effects on Social Acceptability Score with p values lower than .025.  I also corrected the p values in the pairwise comparisons using the Bonferroni method. The design of the follow-up questions on preferences for cue type (Question 2) and cue mode (Question 4) did not allow for statistical analysis, so this chapter presents summary statistics instead. Preference is a component of social acceptability, so these questions help test H1.1 and H1.2 respectively. I did not analyze the demographics data, beyond reporting them above.  30  3.2 Results This section presents the results of the analyses described above. I first present the results pertaining to hypotheses H1.1 and H1.2, then the results pertaining to the research questions. Section B.1 contains tables with complete results for the ANOVAs and the pairwise comparisons. This section uses ‘X’ to indicate interaction effects.  3.2.1 Social Acceptability of Motion Legibility Cues I first present the results of the ANOVA on social acceptability score, which I used to test both hypotheses. Table 3.1 summarizes the results. 
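As a concrete reference for how the scores feeding these tests are produced, the scale scoring and reliability check described in Sections 3.1.4.1 and 3.1.6 reduce to a few lines. This is a sketch with hypothetical column names, not the analysis code used for the thesis; McDonald's ωT requires a factor model and is omitted.

    import pandas as pd

    SCALE_ITEMS = ["s1_1", "s1_2", "s1_5", "s1_6", "s1_7", "s1_8"]  # hypothetical names

    def social_acceptability_score(responses: pd.DataFrame) -> pd.Series:
        """Mean of the six Likert items (1-5); one score per video response."""
        return responses[SCALE_ITEMS].mean(axis=1)

    def cronbach_alpha(items: pd.DataFrame) -> float:
        """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total)."""
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1).sum()
        total_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances / total_variance)

    # Reliability was checked for every cue type x cue mode x scenario cell, e.g.:
    # for keys, cell in responses.groupby(["cue_type", "cue_mode", "scenario"]):
    #     print(keys, round(cronbach_alpha(cell[SCALE_ITEMS]), 2))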
The ANOVA detected significant main effects of cue type, cue mode, and scenario on social acceptability score. There were also significant two-way interactions between cue type and cue mode and between cue type and scenario, as well as a significant three-way interaction between all factors. I did not analyze the cue mode X scenario interaction because it was not significant at the α = .025 level and its effect size of ηp2 = 0.02 is very small. Figure 3.7, Figure 3.8, and Figure 3.9 illustrate the main effects, two-way, and three-way interactions, respectively. The rest of this section is organized by hypothesis.   31  Table 3.1: Results of the ANOVA on social acceptability of robot motion legibility cues. Partial eta-squared (ηp2) effect sizes of 0.01, 0.06, and 0.14 are considered small, medium, and large, respectively [42]. Effect Num. DOF Denom. DOF F Statistic p value ηp2 Scenario 1 227 37.9 < .001 0.14 Type 1 227 120.6 < .001 0.35 Scenario X Type 1 227 62.1 < .001 0.21 Mode 2 395 8.4 .001 0.04 Scenario X Mode 2 395 3.5 .038 0.02 Type X Mode 2 434 15.6 < .001 0.06 Scenario X Type X Mode 2 434 32.1 < .001 0.12    Figure 3.7: Results of the ANOVA on social acceptability score, showing the significant main effects of cue type and cue mode. The confidence intervals round each mean are within-subjects 95% intervals calculated using the Cousineau-Morey-O’Brien method [43]. The brackets show statistically significant differences identified with pairwise comparisons using the following symbols: +, p < .1; *, p < .05; ** p < .01; ***, p < .001. Subsequent plots in this chapter use the same confidence intervals and significance levels. 32   Figure 3.8: Results of the ANOVA on social acceptability score, showing the significant cue type X cue mode and cue type X scenario interactions.  Figure 3.9: Results of the ANOVA on social acceptability score, showing the significant 3-way interaction between cue type, cue mode, and scenario. 33  3.2.1.1 H1.1: Arrows are a more socially acceptable cue type than lights. The ANOVA detected a significant main effect of cue type on social acceptability (ηp2 = 0.35). A pairwise comparison revealed that arrows were significantly more socially acceptable than lights (MD = 0.6, p < .001, r = 0.59)*. Figure 3.7 illustrate these results. These results support H1.1, but the main effect should not be considered in isolation as there were significant two-way interactions between cue type and cue mode (ηp2 = 0.06), and cue type and scenario (ηp2 = 0.21), as well as a significant three-way interaction between cue type, cue mode, and scenario (ηp2 = 0.12).  Pairwise comparisons revealed that H1.1 is supported in the two-way interactions. Arrows were significantly more socially acceptable than lights in all cue modes, as well as in both robot movement scenarios. Figure 3.8 illustrates the two-way interactions.  In the three-way interactions, H1.1 is supported in four of six pairwise comparisons between arrows and lights. In the straight scenario, arrows are significantly more socially acceptable than lights in all cue modes. In the turn scenario, arrows are significantly more socially acceptable than lights in path mode (MD = 0.8, p < .001, r = 0.33). In the turn scenario, arrows are slightly less socially acceptable than lights in goal mode and path&goal mode. Figure 3.9 illustrates the three-way interactions.    * “MD” is the absolute mean difference in the pairwise comparison. Pearson’s r is an effect size measure for the t-test used in the pairwise comparisons. 
0.1, 0.3, and 0.5 are considered small, medium, and large effect sizes, respectively [42]. 34  3.2.1.1.1 Preference for Type of Motion Legibility Cues This section presents the results of follow-up Question 2 on cue type preference described in Section 3.1.4. The summary statistics in Figure 3.10 show a strong preference for arrows over lights as a cue type. This result supports H1.1.   Figure 3.10: Boxplots for the follow-up question on cue type preference, divided by scenario. The thick vertical lines show the median, which is 5 in the straight scenario. The boxes span the interquartile range (IQR) and the whiskers span 1.5 x the IQR. Points show outliers beyond 1.5 x the IQR.   35  3.2.1.2 H1.2: Path&goal mode is a more socially acceptable cue mode than either path or goal mode. The ANOVA detected a significant main effect of cue mode on social acceptability (ηp2 = 0.04). Pairwise comparisons revealed that path&goal mode was significantly more socially acceptable than both path mode (MD = 0.1, p = < .05, r = 0.12) and goal mode (MD = 0.2, p < .001, r = 0.19). Path mode and goal mode were not significantly different. Figure 3.7 illustrates these results. These results support H1.2, but the main effect should not be considered in isolation as there were significant two- and three-way interactions involving the cue mode factor.  In the two-way interactions, pairwise comparisons revealed there are no significant differences that disprove H1.2. Neither path mode nor goal mode is ever significantly more socially acceptable than path&goal mode. In the significant cue mode X cue type interaction (ηp2 = 0.06), path&goal mode is significantly more socially acceptable than goal mode with the arrows cue type (MD = 0.2, p < .001, r = 0.13). In the cue mode X cue type interaction, path&goal mode is significantly more socially acceptable than path mode in the lights cue type (MD = 0.3, p < .001, r = 0.16). There were no significant differences between path&goal mode and goal mode in the lights cue type, or between path&goal mode and path mode in the arrows cue type. Figure 3.8 illustrates the two-way interactions.    36  In the significant three-way interaction between cue type, cue mode, and scenario (ηp2 = 0.12), pairwise comparisons revealed that H1.2 is supported only in the turn scenario with the lights cue type: path&goal mode is significantly more socially acceptable than path mode (MD = 0.7, p < .001, r = 0.28) and goal mode (MD = 0.3, p = .004, r = 0.11). In the turn scenario with the arrows cue type, H1.2 is disproven: path mode is significantly more socially acceptable than both goal mode (MD = 0.5, p < .001, r = 0.19) and path&goal mode (MD = 0.2, p = .007, r = 0.1). In the turn scenario with the arrows cue type, path&goal mode is significantly more socially acceptable than goal mode (MD = 0.2, p = .02, r = 0.09). In the straight scenario with the arrows cue type, path&goal mode is significantly more socially acceptable than goal mode (MD = 0.2, p = .02, r = 0.09), but not significantly different from path mode. In the straight scenario with the lights cue type, there were no significant differences in social acceptability between the cue modes. Figure 3.9 illustrates the cue type X cue mode X scenario interaction.  These results do not provide statistical support for H1.2, that path&goal mode is more socially acceptable than path mode or goal mode. The results do, however, favour path&goal mode over path mode or goal mode. 
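The pairwise comparisons reported throughout this section follow the same pattern: a paired t-test, a Bonferroni-adjusted p value, and an r effect size. A minimal sketch is given below; the score arrays are hypothetical, and the t-to-r conversion shown is one standard choice rather than necessarily the exact computation used for the thesis.

    import numpy as np
    from scipy import stats

    def paired_comparison(a, b, n_comparisons):
        """Paired t-test with a Bonferroni-adjusted p value and an r effect size."""
        a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
        t, p = stats.ttest_rel(a, b)
        df = len(a) - 1
        p_adjusted = min(p * n_comparisons, 1.0)   # Bonferroni correction
        r = np.sqrt(t ** 2 / (t ** 2 + df))        # t-to-r effect size conversion
        mean_difference = abs(np.mean(a - b))      # reported as "MD"
        return mean_difference, p_adjusted, r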
Subsequent sections show that the preference for and comprehension of cue modes support H1.2, but that the identifiability (Question 3) of cue modes does not.  H1.2 does not compare path mode and goal mode, but it is worth mentioning results from the cue mode X cue type interaction: with the arrows cue type, path mode was significantly more socially acceptable than goal mode (MD = 0.3, p < .001, r = 0.15); with the lights cue type, goal mode was significantly more socially acceptable than path mode (MD = 0.2, p = .02, r = 0.09).   37  3.2.1.2.1 Preference for Mode of Motion Legibility Cues This section presents the results of the follow-up Question 4 on cue mode preference described in Section 3.1.4. The summary statistics in Figure 3.11 show a preference for path&goal mode, which supports H1.2.   Figure 3.11: Boxplots for the follow-up questions on cue mode preference, divided by robot movement scenario. The thick vertical lines show the median. The boxes span the interquartile range (IQR) and the whiskers span 1.5 x the IQR.   38  3.2.2 Comprehension of Motion Legibility Cue Modes This section presents the results of the Aligned Rank Transform (ART) ANOVAs on the Likert responses to the cue mode comprehension Statements 1.3 and 1.4. The results help answer the research question, “Do participants comprehend the intended information from the different cue modes?” The tables below summarize the results.  Table 3.2: Results of the ART ANOVA on Statement 1.3: path comprehension. Effect Num. DOF Denom. DOF F Statistic p Value ηp2 Type 1 227 194.6 < .001 0.46 Mode 2 454 34.4 < .001 0.13 Scenario 1 227 80.5 < .001 0.26 Type X Mode 2 454 15.5 < .001 0.06 Type X Scenario 1 227 114.6 < .001 0.34 Mode X Scenario 2 454 10.0 < .001 0.04 Type X Mode X Scenario 2 454 29.3 < .001 0.11  Table 3.3: Results of the ART ANOVA on Statement 1.4: goal mode comprehension. Effect Num. DOF Denom. DOF F Statistic p value ηp2 Type 1 227 149.0 < .001 0.40 Mode 2 454 25.6 < .001 0.10 Scenario 1 227 90.0 < .001 0.28 Type X Mode 2 454 23.3 < .001 0.09 Type X Scenario 1 227 84.3 < .001 0.27 Mode X Scenario 2 454 25.0 < .001 0.10 Type X Mode X Scenario 2 454 27.8 < .001 0.11   39  Both ART ANOVAs detected significant main effects of cue mode on the comprehension statements. In the main effect of cue mode on the path comprehension statement (ηp2 = 0.13), path mode ranked higher than goal mode (p < .001). Path&goal mode also ranked higher than goal mode (p < .001) in the path comprehension statement. In the main effect of cue mode on the goal comprehension statement (ηp2 = 0.1), goal mode ranked higher than path mode (p < .001). Path&goal mode also ranked higher than path mode (p < .001) in the goal comprehension statement. These main effect results show that participants comprehended the intended information from each cue mode. There were, however, significant two-way and three-way interactions between cue mode, cue type, and robot movement scenario. Figure 3.12. illustrates the main effects.  Figure 3.12: Violin plots showing the main effect of cue mode on the two comprehension statements. The widths represent the proportions of each response. Black points show the means of the responses on a 5-point Likert scale to help visually distinguish the violins. 40  In the significant three-way interactions (ηp2 = 0.11 for both tests), 20 of 24 comparisons show comprehension of cue modes. Two show miscomprehension and two show a lack of comprehension. 
In the turn scenario, two trends show miscomprehension of cue modes: in the path comprehension statement with the lights cue type, goal mode ranked higher than path mode; in the goal comprehension statement with the arrows cue type, path mode ranked higher than goal mode. In both statements with the arrows cue type in the straight scenario, the goal mode and path mode scores were indistinguishable. This result shows a lack of comprehension of cue modes with the arrows cue type in the straight scenario.  Figure 3.13 illustrates the effect of cue type and robot movement scenario on cue mode comprehension.  In summary, the main effects suggest that the different cue modes are comprehensible, but the interaction effects show that the cue modes are not comprehensible in some combinations of cue type and scenario. The trends involving path&goal mode all showed comprehension, which support H1.2.   41    Figure 3.13: Violin plots showing the effect of cue type and robot movement scenario on the two cue mode comprehension statements. Black points show the means of the responses on a 5-point Likert scale to help visually distinguish the violins.   42  3.2.2.1 Identifiability of Motion Legibility Cue Modes This section presents the results of the Aligned Rank Transform (ART) ANOVA on follow-up Question 3 on the identifiability of different cue modes. This question is related to the comprehensibility of the different cue modes discussed above. The ART ANOVA detected a significant three-way interaction between cue mode, cue type, and scenario (p < .001, ηp2 = 0.11). Figure 3.14 illustrates the results.  Figure 3.14: Violin plots showing the effect of cue type and robot movement scenario on the identifiability of different cue modes. Black points show the means of the responses on a 5-point Likert scale to help visually distinguish the violins. All trends but one show that path mode is the most identifiable and path&goal mode is the least identifiable. In the straight scenario with the lights cue type, goal mode is the least identifiable.    43  3.2.3 Social Acceptability of Motion Legibility Cues Compared to No Cues This section presents the social acceptability results when including the control conditions of no communication cue, which I use to answer the research question, “Is the robot more socially acceptable with motion legibility cues than without?” The ANOVA on social acceptability score detected a significant two-way interaction between cue type and scenario (F(2,407) = 26.5, p < .001, ηp2  = 0.11). Planned pairwise comparisons revealed that in the turn scenario, the none control condition was significantly less socially acceptable than both the arrows cue type (MD = 0.9, p < .001, r = 0.45) and the lights cue (MD = 0.7, p < .001, r = 0.38). In the straight scenario, the arrows cue type was significantly more socially acceptable than the none condition (MD = 1, p < .001, r = 0.47), but the lights cue type scored only slightly higher than the none condition.  Figure 3.15 illustrates these results.  Figure 3.15: Results of the ANOVA on social acceptability score comparing motion legibility cues to no cues, showing the significant cue type X scenario interaction.  
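As an implementation note on the within-subject error bars shown in Figure 3.7 and the subsequent mean plots (including Figure 3.15 above), intervals of this kind can be computed roughly as follows. This is a generic sketch of the Cousineau-Morey approach; the exact Cousineau-Morey-O'Brien variant cited in [43] may differ in detail.

    import numpy as np
    from scipy import stats

    def within_subject_ci(scores, confidence=0.95):
        """Cousineau-Morey within-subject confidence intervals.

        scores: array of shape (n_subjects, n_conditions) for one within-subject factor.
        Returns the per-condition means and the half-widths of their intervals.
        """
        scores = np.asarray(scores, dtype=float)
        n_subjects, n_conditions = scores.shape
        # Cousineau normalization: remove each subject's overall level.
        normalized = scores - scores.mean(axis=1, keepdims=True) + scores.mean()
        # Morey correction for the variance bias introduced by the normalization.
        correction = n_conditions / (n_conditions - 1)
        sem = np.sqrt(correction * normalized.var(axis=0, ddof=1) / n_subjects)
        t_critical = stats.t.ppf(0.5 + confidence / 2, df=n_subjects - 1)
        return scores.mean(axis=0), t_critical * sem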
44  3.2.4 Effect of Different Robot Movement Scenarios on Motion Legibility Cues This study asked the research question, “Does the robot’s movement scenario affect the social acceptability of different cue types or cue modes?” The previous sections show that the robot’s movement scenario has a significant effect on the social acceptability of the lights cue type, but not on that of the arrows cue type. In the significant two-way interaction between cue type and  robot movement scenario interaction shown in Figure 3.8 (ηp2  = 0.21), pairwise comparisons revealed that lights were significantly more socially acceptable in the turn scenario than in the straight scenario (MD = 0.9, p < .001, r = 0.43).   Figure 3.15 also shows that the lights cue type was very similar to the no cue control condition in the straight scenario. Figure 3.14 shows that with the lights cue type, goal mode and path&goal mode were significantly more identifiable in the turn scenario than the straight scenario. Lastly, Figure 3.10 illustrates that the preference for the lights cue type is stronger in the turn scenario.   45  3.3 Discussion In summary, the main effects show support for hypotheses H1.1 and H1.2. However, three-way interaction effects do not show statistically significant support for the hypotheses in every pairwise comparison. Despite the lack of statistically rigorous support, the results show more support for the arrows cue type than the lights cue type, and more support for the path&goal cue mode than path mode or goal mode.  The results show that the lights cue type scores better in goal mode or path&goal mode, whereas the arrows cue type scores better in path mode or path&goal mode. Path&goal mode communicates both path and goal information, so it follows that the arrows cue type best communicates path information, and the lights cue type best communicates goal information. We also see that the lights cue type is best suited to the turn scenario, whereas the arrows cue type is well-suited to both turn and straight scenarios.   An interesting result is that path&goal mode was the most preferred, but the least identifiable cue mode. It is possible that participants thought they wanted more information when asked, but the added complexity made path&goal mode more difficult to identify.  In the turn scenario the robot’s body language is somewhat communicative. In the straight scenario the communication comes strictly from the cues, which may create some uncertainty. The lights in goal mode remain off in the straight scenario. This functionality may allow the uncertainty to decrease the social acceptability. The preferred mode (goal or path&goal) and scenario (turn) for the lights cue type is likely also influenced by motor vehicle turning signals, 46  which operate in goal mode and only activate in turning scenarios. The arrows cue type, conversely, does not have a familiar equivalent in society, which may have prevented expectations for its operation and allowed the arrows to be socially acceptable in both scenarios. Pedestrians communicate path information with their body language, rather than goal information, so it is expected that path mode and path&goal mode were socially acceptable with arrows, the cue type that was socially acceptable in both scenarios.  Section 3.2.3 shows that the lights in goal mode were not significantly more socially acceptable than the no cue condition in the straight scenario. The goal mode only activates when the robot is close to and facing its goal. 
In the straight scenario, this functionality resulted in the lights not flashing at all in goal mode.  Lastly, I treated the robot’s movement scenario as an independent variable in this experiment, but robots encounter multiple movement scenarios in public spaces. Designers will likely choose one type and one mode for motion legibility cues but will have to design for multiple scenarios. The lights cue type used in this experiment could be improved for the straight scenario.  3.3.1 Limitations Experimentation with videos and an online survey is a major limitation of this study. The videos were not entirely representative of an in-person interaction with the robot. Measures such as comfort and social compatibility are likely to be different in in-person interactions. While I would have preferred to use in-person trials instead of an online survey with videos, there is evidence that experimentation with videos produce meaningful results. Woods et al. showed video results 47  were equivalent to in-person results in a study about which direction was the most appropriate for a robot to approach a human [44]. Furthermore, Lichtenthäler et al. used videos to test navigation algorithms in a human-robot path crossing scenario [11].  The data from Amazon Mechanical Turk respondents are less trustworthy than data collected from in-person trials. The participants were anonymous and therefore had little stake in providing honest and well-considered responses. The survey did use an attention check question after each video, but this method did not check the quality of the responses used for analysis. Furthermore, the participants were likely from a variety of cultures. Different cultures may interpret communication cues in different ways. All the researchers and pilot test participants involved in the design of the robot communication cues lived and worked in western cultures. Lastly, the survey did not check participants’ grasp of the English language, so some wording may have been misinterpreted.  Cue mode, cue type, and robot movement scenario are confounding factors. The cue modes are implemented differently for each cue type. In addition to the basic differences between LEDs and light projectors, there are several confounding differences between the two cue types. The flashing lights were orange and the projected arrows were green. I used orange flashing lights to align with the colour of motor vehicle turning signals. I used green arrows to signify that the robot was moving according to the arrow. Despite these intentional design decisions, the difference in colour was a confound. Second, the flashing lights were binary and the arrows were continuous. Furthermore, the lights remained off in the straight scenario. This design choice was meant to 48  mimic the behaviour of motor vehicle turning signals, but it may have negatively impacted the social acceptability of the lights when compared to the arrows.  3.3.1.1 Data Analysis Limitations Follow-up Question 3 on mode identifiability and Question 4 on mode preference were somewhat ambiguous. In the mode preference question, the “Both” label in the middle of the slider was meant to represent path&goal mode, but subjects may have interpreted it as a neutral preference for both path mode and goal mode. In the mode identifiability question, we cannot know whether subjects actually identified the different cue modes while watching each video.   
3.3.1.2 Practical Limitations The first practical limitation is that the light projector used for the arrows cue type may perform differently in outdoor and more crowded environments. Sunlight affects the visibility of the projected images, which may also be occluded or distorted by obstacles in the projection area.  Second, the goal information depends on the robot’s environment. I conducted this experiment in a corridor-like environment containing three obvious options for the robot’s goal. Goal mode and path&goal mode may be less understandable in more open environments with less spatial context.  This study, the first of two in this thesis, has investigated explicit motion legibility cues for human-robot spatial interactions. In the next chapter, Study 2 investigates implicit robot yielding cues for a head-on human-robot spatial interaction at a doorway. 49   Study 2 – Robot Yielding Cues This study was a collaboration among me, Ryan Lee, and Marlene Schlorf. Please refer to the preface to this thesis for individual contribution details. This chapter uses “we” to refer to all three collaborators.  This chapter presents a study on implicit movement cues a mobile robot can use to communicate its yielding to a pedestrian at a doorway. Figure 4.1 illustrates the human-robot spatial interaction (HRSI) for which we designed these robot yielding cues.  Figure 4.1: Diagram of the interaction explored in this study, in which a mobile robot and a human wish to enter the same doorway. The dotted lines represent their desired paths. The yielding cues communicate that the robot will let the human enter the doorway first. This figure is the same as Figure 2.1.   50  We used the robot described in Section 3.1.1 to prototype five different robot yielding cues: (1) Stop, (2) Decelerate, (3) Retreat, (4) Tilt, and (5) Nudge. We conducted a user study to test two hypotheses: • H2.1: The Nudge cue is the most comprehensible yielding cue. • H2.2: The Nudge cue is the most trustworthy yielding cue. Section 4.1.1 describes the cues in detail and explains why we chose these hypotheses.  In addition to the abovementioned hypotheses, we also explored the cues’ social compatibility and likeability, and participants’ comfort during the interaction. These five measures are all components of social acceptability, one of the major outcome measures in Study 1 (Chapter 3). We used videos of the robot demonstrating its yielding cues in an online survey to collect data from 102 participants. The rest of this chapter presents the methods and materials, the results of the data analysis, and a discussion thereof.  51  4.1 Methods and Materials 4.1.1 Design of Robot Yielding Cues This section presents the design and functionality of the implicit robot yielding cues used in this study: (1) Stop, (2) Decelerate, (3) Retreat, (4) Tilt, and (5) Nudge. We prototyped these yielding cues for the interaction shown in Figure 4.1. In each cue, the robot moves towards the doorway with a linear speed of v before performing a movement to communicate that the pedestrian should enter first. The paragraphs below describe the yielding cues in detail, including specific motion parameters for each. We conducted a pilot study to set the values for these parameters; the pilot study is described at the end of this section.  In the Stop cue the robot stops abruptly at the edge of the doorway. The Stop cue is a minimally communicative cue used as a control condition in this study. 
Many of the studies reviewed in Section 2.2 use a similar stopping cue as a control condition (e.g. Kaiser et al. [29], Moon et al. [31]). In the Decelerate cue the robot starts to decelerate at a distance of Xd before coming to a stop at the edge of the doorway. In the Retreat cue the robot stops abruptly at the edge of the doorway, then retreats a distance of Xr and stops. The literature reviewed in Section 2.2 shows that retreating cues (e.g. Reinhardt et al. [32]) and decelerating cues (e.g. Ackermann et al. [27]) are effective robot yielding cues, but they have not been directly compared.

In the Nudge cue the robot stops abruptly at the edge of the doorway and then makes two rotations, or "nudges", towards the doorway at an angle of θn with an angular speed of ω. The Nudge cue rotates back to its approach angle between each "nudge" and finishes facing the pedestrian. The Nudge cue is evocative of a person waving their hand or nudging their head to indicate someone else should go through a doorway first.

In the Tilt cue the robot stops abruptly at the edge of the doorway, then turns away from the doorway and stops at an angle of θt. The Tilt cue is evocative of a person turning away from a doorway to indicate someone else should go through first. It is also similar to a diversion cue, which the literature in Section 2.2 found to be an effective robot yielding cue. Both the Nudge and Tilt cues are analogous to robot gaze, despite this mobile robot not having an anthropomorphic head. Research described in Section 2.1 shows robot gaze is an effective robot behavioural legibility cue and has potential as a robot yielding cue [15]. We selected the two hypotheses listed above to test whether gaze can be used as a yielding cue in a mobile robotics context. The literature reviewed in Section 2.2 has not evaluated robot gaze as a yielding cue with mobile robots, nor has it directly compared robot deceleration, robot retreating, and robot gaze as yielding cues. Figure 4.2 illustrates the cues' behaviour.

Figure 4.2: The robot's five different yielding cues from the perspective of the pedestrian, with the doorway to the robot's right. The images show the point at which the robot starts the Decelerate cue, the point to which the robot reverses in the Retreat cue, and the rotation maximums for the Tilt and Nudge cues.

We conducted in-person pilot tests with five participants to set the distance and rotation angle parameters for each cue. We used the Nudge cue to test three different values for the linear speed v and three different values for the angular speed ω. We then used the middle of these three values of v and ω to test parameters specific to individual cues. We tested three values for each of the following parameters: the distance at which to start the Decelerate cue (Xd), the distance to retreat in the Retreat cue (Xr), the rotation angles for the Tilt and Nudge cues (θt and θn), and the number of nudges in the Nudge cue (Nn). After demonstrating all three values for each parameter, we asked participants which value made the cue the most comprehensible and which value made them feel most comfortable. Table 4.1 shows the tested parameter values, as well as the results.

Table 4.1: Cue parameters tested in the in-person pilot tests.
Cue Parameter Units Low Value Middle Value High Value Result All v m/s 0.25 0.7 1 0.7 All ω °/s 15 60 100 60 Decelerate Xd m 0.5 1 1.5 1 Retreat Xr m 0.1 0.5 1 0.1 Tilt θt degrees 15 40 90 40 Nudge θn degrees 15 55 90 55 Nudge Nn - 1 2 3 2  We implemented the robot yielding cues in software using ROS. The cues were implemented as simple movements that could be executed by the robot in the experiment scenario described in Section 4.1.2. The cues are for demonstration only; the robot does not respond to the physical environment or the presence of pedestrians. Appendix A describes details of this implementation.   54  4.1.2 Experimental Scenario To facilitate data collection, we captured videos of the robot yielding cues in a simulation of the interaction shown in Figure 4.1. Figure 4.3 shows an instructional diagram presented to participants in the online survey described in Section 4.1.3. The videos were recorded from the perspective of the pedestrian.  Figure 4.3: Diagram of the simulated human-robot spatial interaction in this study. The floor tiles measure 2’x2’.   55  To simulate the viewer walking towards the doorway, we applied the zooming animation shown in Figure 4.4 to each video.   Figure 4.4: The videos used a zooming animation to simulate the viewer walking towards the doorway at the same time as the robot. The time since the start of the video is shown in each image.  We considered three video styles other than the zooming animation. We experimented with a moving red dot or stick figure to represent a pedestrian walking towards the doorway. We conducted an online pilot test with 29 participants to compare these three animation styles to a static video without animation. We presented videos of the Nudge cue to participants with the four different animation styles, and asked questions about: (1) the immersion and (2) the naturalness of the videos, (3) participants’ ability to concentrate on and (4) interpret the yielding cue, and (5) their overall preference. The zooming animation style was rated better in all except the concentration question, in which the static video was slightly preferred.  The videos are enclosed with this thesis as supplementary material. 56  4.1.3 Survey Design and Experimental Procedure We designed a digital survey using Qualtrics (Provo, UT, USA) to collect data using the videos described above. After giving consent, participants read instructions explaining the situational context of the videos. The instructions included the diagram in Figure 4.3. We did not state that all the cues were designed to communicate yielding. Thus, participants had to interpret each cue for themselves. Participants first viewed a familiarization video of the robot entering the doorway without making a cue. The next five videos were presented in randomized order to show each robot yielding cue: Stop, Decelerate, Retreat, Tilt, and Nudge.  After each video, the survey first asked participants an attention check question: 1. Did the robot rotate during the video, either to its left or right? a. Yes b. No  The survey then asked participants about their interpretation of the yielding cue: 2. According to the robot’s movement cue, should the robot or you (the viewer) enter the doorway first? a. Robot b. Me   57  Participants then responded to the following statements on a 7-point Likert scale from “Very Strongly Disagree” to “Very Strongly Agree”: 3.1. I was confident in deciding who should enter the doorway first. 3.2. The robot’s movement cue was misleading for me. 3.3. 
I quickly understood the robot's movement cue.  3.4. The robot's movement cue was sufficient for me to decide who should go through the door first. 3.5. It is difficult to understand what the robot does. 3.6. I trust the robot. 3.7. I can rely on the robot. 3.8. The robot is deceptive. 3.9. I am wary of the robot. 3.10. The robot’s movement cue would be socially compatible in a pedestrian’s environment. 3.11. The robot's movement cue made me feel comfortable. 3.12. I liked the robot.    58  4.1.3.1 Cue Comprehensibility and Trustworthiness Likert Scales We proposed two Likert scales from the statements in Question 3. We proposed a Cue Comprehensibility Scale from Statements 3.2 - 3.5, and a Cue Trustworthiness Scale from Statements 3.6-3.9. Each scale has two statements with positive valence and two with negative valence. We calculated the score for each scale as the mean response to the statements on a 7-point Likert scale. Joshi et al. support taking the mean of a set of Likert-type items to create interval data [33]. The comprehensibility items are adapted from the Trust in Automation Questionnaire in [32] and the Human-Computer Trust Scale in [45]. We pilot tested each scale with 7 participants to verify that their understanding of the statements matched our own.  We analyzed the internal reliability of both scales for each yielding cue condition using Cronbach’s α [34]. We found a minimum α of 0.88 for the Cue Comprehensibility Scale and 0.75 for the Cue Trustworthiness Scale. The internal reliability of both scales is above the traditional minimum of 0.7. We also analyzed the discriminatory power of both scales. We found a minimum discriminatory power of 0.66 for the Cue Comprehensibility Scale and 0.47 for the Cue Trustworthiness Scale. The discriminatory power of both scales is above the traditional minimum of 0.3.  Statements 3.10 - 3.12 measure social compatibility, comfort, and likeability directly as Likert-type items. Section 3.1.4.1 explains how these are sub-constructs of social acceptability.   59  4.1.4 Participants We conducted the survey online and recruited participants using Amazon Mechanical Turk for a reward of US $2.50. Section C.2 includes copies of the Amazon Mechanical Turk advertisement, consent letter, and survey used in this study. At the end of the survey, participants responded to demographic questions about their gender and age. They also rated their “previous experience with robots” on the same 7-point Likert scale used in Question 3 and described any experience in a text entry box.  A total of 128 participants responded to the online survey. Of the 128 responses, 25 were excluded due to incorrect responses to the attention check question described above, and one was excluded due to a self-reported misinterpretation of the instructions, leaving 102 responses to analyze. The participants excluded for failing the attention check question were not paid; the one excluded for misinterpreting the instructions was paid. Figures 4.5 and 4.6 illustrate the distribution of the participants’ self-reported ages and genders, respectively. Figure 4.7 illustrates the distribution of the participants’ self-reported experience with robots. No participants were excluded for their experience because none had experience with robots similar to the one used in this study.  60   Figure 4.5 : Age distribution of participants in the robot yielding cues study. Participants placed themselves into one of these age ranges.  
Figure 4.6: Gender distribution of participants in the robot yielding cues study. There was a fourth “Non-binary” option with no responses. 61    Figure 4.7: Responses to the question “I have previous experience with robots” on a 7-point Likert scale from “Very Strongly Disagree” (VSD) to “Very Strongly Agree” (VSA).    62  4.1.5 Data Analysis We performed repeated measures ANOVA tests on the comprehensibility and trustworthiness data. Joshi et al. support taking the mean of a set of Likert-type items to create interval data, and then using parametric tests [33]. Because we had a large sample size (102), we could ignore the assumption of normality for the repeated measures ANOVA tests [37]. Neither the comprehensibility data nor the trustworthiness data met the assumption of sphericity, so we used the Greenhouse-Geisser correction [39]. Appendix B includes the results of the assumption testing. We also analyzed the Pearson correlations [46] between the trustworthiness and the comprehensibility of each yielding cue.  We performed exploratory Friedman’s ANOVA tests on each of statements 3.10 (social compatibility), 3.11 (comfort), and 3.12 (likeability). We used the non-parametric Friedman’s ANOVA because these data are on ordinal scales, whereas the comprehensibility and trustworthiness data are on interval scales. The within-subjects factor for each test was yielding cue (Stop, Decelerate, Retreat, Tilt, Nudge).  Lastly, we performed Bonferroni-corrected [41] post-hoc tests on the significant effects detected by the ANOVA tests. We analyzed Question 2 in conjunction with Statement 3.1 to show how correctly and confidently participants interpreted the cues as yielding or not.    63  4.2 Results This section presents the results of the data analysis described above. Appendix B provides tables summarizing and expanding on the details presented here.  4.2.1 H2.1 The Nudge cue is the most comprehensible yielding cue. The repeated measures ANOVA on comprehensibility detected a significant main effect of cue condition with a medium effect size (F(3, 326) = 8.8, p < .001, ηp2 = 0.08). Post-hoc tests did not show support for H2.1, that the Nudge cue is the most comprehensible yielding cue. The Nudge cue was only significantly more comprehensible than the Stop cue (MD = 0.6, p = .01). The Retreat cue was significantly more comprehensible than the Stop cue (MD = 0.8, p < .001), the Decelerate cue (MD = 0.7, p < .001), and the Tilt cue (MD = 0.5, p = .029). There were no significant differences in comprehensibility between other combinations of cues. Figure 4.8 illustrates the comprehensibility results.  Figure 4.8: Comprehensibility results for the robot yielding cues. 95% confidence intervals are shown around each mean. 64  4.2.2 H2.2 The Nudge cue is the most trustworthy yielding cue. The repeated measures ANOVA on trustworthiness detected a significant main effect of cue condition with a medium effect size (F(3, 292) = 3.8, p = .012, ηp2 = 0.04). Post-hoc tests did not show support for H2.2, that the Nudge cue is the most trustworthy yielding cue. The Nudge cue was not significantly different in trust from any other yielding cue. The Retreat cue was significantly more trustworthy than both the Stop cue (MD = 0.34, p = .002) and the Decelerate cue (MD = 0.34, p = .003). There were no significant differences in trustworthiness between other combinations of cues. Figure 4.9 illustrates the trustworthiness results.   Figure 4.9: Trustworthiness results for the robot yielding cues. 
95% confidence intervals are shown around each mean.

4.2.3 Correlation of Comprehensibility and Trustworthiness in Robot Yielding Cues
The Pearson correlation analysis revealed strong correlations between the comprehensibility and the trustworthiness of each cue. The correlation between these two measures was strong for each robot yielding cue: the Stop cue (r = 0.58, p < .01), the Decelerate cue (r = 0.61, p < .01), the Retreat cue (r = 0.71, p < .01), the Tilt cue (r = 0.66, p < .01), and the Nudge cue (r = 0.74, p < .01).

4.2.4 Comfort, Likeability, and Social Compatibility of Robot Yielding Cues
Friedman's ANOVA tests revealed significant main effects of yielding cue on comfort (χ2(4) = 27.2, p < .001), likeability (χ2(4) = 27.2, p < .001), and social compatibility (χ2(4) = 23.0, p < .001). The Retreat cue was significantly more comfortable than the Stop cue (p < .001), the Decelerate cue (p = .015), and the Tilt cue (p = .004). The Retreat cue was significantly more likeable than the Stop cue (p = .005) and the Tilt cue (p = .049). The Retreat cue was significantly more socially compatible than the Stop cue (p = .019), the Decelerate cue (p = .035), and the Tilt cue (p = .003). There were no other significant differences in comfort, likeability, or social compatibility between yielding cues. Figure 4.10 illustrates the results.

Figure 4.10: Violin plots showing the responses to Statements 3.10 - 3.12. Black points show the means of the responses on the 7-point Likert scale to help visually distinguish the violins.

4.2.5 Interpretation of Robot Yielding Cues
Question 2 asked participants whether they interpreted the robot's cue as yielding or not. Statement 3.1 then assessed their confidence in making that interpretation. Participants correctly interpreted the cues as yielding at least 76% of the time (Stop and Nudge). The Retreat cue was correctly interpreted the most often (86%). The Stop cue had the lowest confidence scores, and the Retreat and Nudge cues had the highest confidence scores. With all cues, participants were slightly less confident when deciding the robot was not yielding to them. Figure 4.11 illustrates these results.

Figure 4.11: Interpretation of robot yielding cues and decision-making confidence results. Bar heights show the interpretation percentages. The bar colours and numerical annotations show the mean confidence scores.

4.3 Discussion
The results do not statistically support the hypotheses that the Nudge cue is the most comprehensible and most trustworthy robot yielding cue. However, the Nudge cue was rated slightly higher than the Stop, Decelerate, and Tilt cues in all five measures of social acceptability. Figure 4.11 shows that the interpretations of the Nudge cue were the most polarized: it was interpreted incorrectly 24% of the time, but these interpretations were made with the second-highest confidence. The Nudge cue was the most complex cue, as it involved multiple rotations. It is possible that familiarizing participants more with the Nudge cue would produce more significant results. The Nudge cue mimicked a person waving their hand or nudging their head to indicate someone else should go through a doorway first. It is possible the mobile robot we used was too dissimilar from a human to create this imagery. The simple addition of a passive anthropomorphic head could help participants better understand the Nudge cue.
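Because the Retreat cue figures prominently in the rest of this discussion, a rough sketch of how its open-loop motion (Section 4.1.1) might be commanded through ROS is included here for concreteness. This is not the study's implementation: the topic name, the timing of the pause, and the choice to reuse the approach speed while reversing are assumptions; only v = 0.7 m/s and Xr = 0.1 m come from Table 4.1.

    import rospy
    from geometry_msgs.msg import Twist

    V_APPROACH = 0.7   # m/s, linear speed v from Table 4.1
    X_RETREAT = 0.1    # m, retreat distance Xr from Table 4.1
    RATE_HZ = 10

    def drive(pub, rate, velocity, duration):
        """Publish a constant forward (or reverse) velocity for a fixed time, then stop."""
        command = Twist()
        command.linear.x = velocity
        for _ in range(int(duration * RATE_HZ)):
            pub.publish(command)
            rate.sleep()
        pub.publish(Twist())                         # zero velocity: abrupt stop

    def retreat_cue(pub, approach_distance):
        """Approach the doorway, stop at its edge, then back away by X_RETREAT."""
        rate = rospy.Rate(RATE_HZ)
        drive(pub, rate, V_APPROACH, approach_distance / V_APPROACH)
        rospy.sleep(0.5)                             # brief pause at the doorway edge (assumed)
        drive(pub, rate, -V_APPROACH, X_RETREAT / V_APPROACH)

    if __name__ == "__main__":
        rospy.init_node("retreat_cue_demo")
        cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)  # topic name assumed
        rospy.sleep(1.0)                             # let the publisher connect
        retreat_cue(cmd_pub, approach_distance=3.0)  # approach distance is illustrative

A deployed version would close the loop on odometry rather than timing, and would be gated on perception of the pedestrian, neither of which was part of the demonstration cues.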
These results imply the Retreat cue is more socially acceptable than the other robot yielding cues. The Retreat cue was rated higher than the Nudge cue in all five social acceptability measures, but not significantly so. The interpretation results in Figure 4.11 show that the Retreat cue was correctly interpreted more often (86%) than the Nudge cue (76%). The Retreat cue was rated significantly higher than the Stop and Decelerate cues in all five social acceptability measures, and significantly higher than the Tilt cue in all measures except comprehensibility. The success of the Retreat cue is consistent with the literature discussed in Chapter 2. Moon et al. [31] also found that a robot retreating cue was more effective than a simple stopping behaviour in a turn-taking interaction with a robot manipulator.

The Decelerate cue scored very similarly to the Stop cue in all social acceptability measures, even though we expected greater differences between these two cues. The Decelerate cue scored slightly higher than the Stop cue in comprehensibility and comfort, but the two cues were indistinguishable in trustworthiness, likeability, and social compatibility. Figure 4.11 does show that the Decelerate cue was more often interpreted correctly and with higher confidence than the Stop cue, but we did not statistically test these data. We pilot tested different distances from the doorway at which to start the deceleration (Xd), but we did not test different deceleration rates. We speculate that the Decelerate cue was visually hard to distinguish from the Stop cue. Dondrup et al. showed that lower mobile robot velocities within a pedestrian's personal space resulted in less disruption to the pedestrian's movement [26]. It is possible a higher deceleration rate and a longer period of lower velocity would have made the Stop and Decelerate cues more distinguishable in this study.

In the Retreat cue the robot retreated a distance of 10 cm from the edge of the doorway, which was set through the pilot testing described in Section 4.1.1. This is much smaller than the 45 cm that Lauckner et al. found to be a minimum comfortable human-robot distance [25]. Lauckner et al. used a head-on human-robot interaction in a corridor to identify this distance, but without a doorway to the side. It is possible that the space and context afforded by the doorway in this study made participants feel more comfortable.

The literature in Section 2.2 shows retreating to be an effective robot yielding cue in contexts different from the one explored in this study (e.g. [31], [32]). This study shows that retreating can be used as a robot yielding cue in the previously unexplored human-robot spatial interaction at a doorway on the side of a corridor. Section 2.2 also identifies the unexplored comparison between deceleration and retreating cues. The results of Study 2 imply that retreating cues are more socially acceptable than deceleration cues. While the results were not statistically significant, this study also implies that the gaze-like Nudge cue is more socially acceptable than deceleration or stopping cues.

4.3.1 Limitations
As discussed in Section 3.3.1, experimentation with video capture and an online survey is a major limitation. We presume that participants would be less comfortable walking through the doorway in front of a real robot. Furthermore, the videos were all less than 12 seconds in duration. Longer videos would better simulate an in-person interaction.
Like in Study 1, data from anonymous online participants are less trustworthy than from in-person experiments. As discussed in Section 4.1.1, the robot yielding cues are human-agnostic and entirely open loop. They do not respond to the presence of humans or changes in the environment. This limits the generalizability of the robot yielding cues in their current implementation but does not limit the results of this study. Lastly, the experimental scenario shown in Figure 4.3 contained floor tiles bounded by black lines. The doorway was aligned with one of these black lines, which may have created an additional unintended visual boundary.  This study, the second of two in this thesis, has investigated the design of implicit robot yielding cues for a head-on human-robot spatial interaction at a doorway. The next chapter concludes the thesis and suggests directions for future work. 71   Conclusion This thesis asked the question, “How should mobile robots communicate their behaviour to pedestrians in public spaces?” Through two studies, this research explored questions for two types of robot behaviour legibility cues: explicit motion legibility cues (Study 1) and implicit yielding cues (Study 2). Literature review and pilot testing of robotic cues led to the specific exploratory questions and experimental design for each study. Both studies made novel comparisons between existing methods and developed novel designs. Both studies also proposed Likert scales for use in future research. The sections below discuss the findings from each study. The last section discusses open questions and proposes directions for future work.  5.1 Study 1 – Explicit Motion Legibility Cues Study 1 investigated the design of explicit motion legibility cues, which communicate a mobile robot’s motion to pedestrians. Study 1 asked, “What modality should these cues use?” and “What information should they communicate?” Results show that, overall, projected arrows are a more socially acceptable visual modality than flashing lights. The results also show that designing these cues for both path-predictability and goal-predictability is more socially acceptable than only one or the other. This result implies Lichtenthäler and Kirsch are correct that path- and goal-predictability are not contradictory in a mobile robotics context [24].  These general results are somewhat nuanced. They show that the flashing lights are most communicative when the robot is turning, rather than moving straight. The projected arrows are communicative in both scenarios. The results also show that the lights modality is better at 72  communicating goal information than path information. Conversely, the projected arrows are better at communicating path information than goal information.  The proposed Social Acceptability Scale described in Section 3.1.4.1 is also a contribution of Study 1. The scale has high internal reliability and detected some statistically significant effects of the cues on social acceptability. This scale could be used to assess other robot legibility cues in future research. However, this scale should be rigorously validated before being used for future research.  The results of Study 1 add to the body of  research on explicit motion legibility cues. Previous reviewed literature had not directly compared flashing lights to projected arrows as robot communication cues. Furthermore, previous reviewed literature had not differentiated the design of these cues for path- and goal-predictability. 
The results also demonstrate the value of designing for robot legibility using explicit visual communication.    73  5.2 Study 2 – Implicit Yielding Cues Study 2 asked, “What motions should mobile robots use as a yielding cue: decelerating, retreating, or rotating?” Study 2 investigated the design of implicit robot yielding cues, which communicate that a mobile robot is yielding to a pedestrian at a doorway. Online survey results show that a retreating motion is the most socially acceptable of the five proposed yielding cues. The Nudge cue was a novel design. While the results were not statistically significant, the Nudge cue was rated higher than two other cues, as well as the control condition.  The Cue Comprehensibility Scale and Cue Trustworthiness Scale described in Section 4.1.3.1 are also contributions of Study 2. Both scales have high internal reliability and discriminatory power. They both detected statistically significant effects of the yielding cues on the scales’ respective measures. The scales also revealed a strong correlation between trustworthiness and comprehensibility of implicit robot yielding cues. This correlation underlines a major motivation for this thesis: robots should be designed for comprehensibility in order to engender trust. Kauppinen et al. validated the relationship between trust and comprehension in the Human Computer Trust Rating Scale, which was tested on air traffic control systems [47]. The results of Study 2 demonstrate this relationship in human-robot spatial interaction. However, these scales should be rigorously validated before being used for future research.  These results add to the body of research on implicit robot yielding cues. Previous reviewed literature had not directly compared decelerating, retreating, and rotating robot motions in this context. The results demonstrate the effectiveness of a simple retreating behaviour and its impact on the robot’s social acceptability. 74  The results of both studies may not generalize well. Study 1 involved confounding factors. In both studies, the cues were dependent on the structured environment around the robot. These interdependencies limit the conclusions one can draw from the results. The following section presents open questions and unexplored avenues that stem from the findings in Study 1 and Study 2. Many of these open questions are challenges to the generalizability of the results in this thesis.  5.3 Future Work The main findings in this thesis would be more compelling if they were verified with richer evaluation methods. The online surveys used in both studies collected only subjective-quantitative data using videos of the cues. In-person interactions could provoke different reactions to the robot behaviour legibility cues and could utilize richer data collection methods. Multiple related works collected objective-quantitative data in conjunction with subjective-quantitative survey results. To evaluate their robot’s behaviour legibility cues, some other studies collected participants’ reaction or decision-making time (e.g. Ackermann et al. [27] and Dragan et al. [22]), and some analyzed subjects’ walking trajectories (e.g. Watanabe et al. [21]). The COVID-19 pandemic interrupted plans to conduct in-person evaluations of the cues in this thesis.  This thesis has shown that explicit motion legibility cues should be designed for both path-predictability and goal-predictability, but this result comes from comparing only flashing lights and projected arrows. 
Do these results hold for other visual modalities?  This research has evaluated the robot legibility cues in only the chosen corridor-like environments. The motion legibility cues were evaluated in a wide corridor with a four-way 75  intersection. The yielding cues were evaluated in a one-sided corridor with a doorway. Both sets of cues are somewhat dependent on the structured environment around the robot. Would the motion legibility cues be comprehensible in a more visually cluttered or more open environment? Could the yielding cues generalize to a different yielding scenario? Furthermore, the videos simulated only short interactions with participants. Would longer durations or repeated interactions change the cues’ social acceptability? In a head-on human-robot interaction in a hallway, Fernandez et al. showed that participants needed to see the robot demonstrate its flashing lights cue first, otherwise they didn’t understand the cue during the spatial interaction [48].  Neither in this thesis, nor in the literature reviewed in Chapter 2, have robot behaviour legibility cues been investigated for communicating to multiple pedestrians. The videos used in this thesis simulated interactions with a single pedestrian, the viewer. How would pedestrians perceive the cues if there were other pedestrians in the scenario? Would the robot need to specify to which pedestrian it was communicating? To answer this question and others, the cues in this thesis should be tested on a human-aware robot. The cues communicate to observing pedestrians, but they do not utilize pedestrian detection or behaviour prediction, so the communication is strictly unidirectional from the robot to the human. With the addition of human-awareness, the motion legibility cues could be tested on a human-aware motion planner.  What is the effect of a robot’s appearance on its behaviour legibility cues? The robot used in this thesis had a “lab-like” appearance with multiple cluttered cords and visually obvious sensors. Would a more aesthetically pleasing form factor change the cues’ social acceptability? Would a 76  more visually imposing robot need to use different yielding cues than would a more innocent looking robot?  Although the two studies in this thesis are somewhat disparate, it would be interesting to combine their results. How could explicit motion legibility cues be used in conjunction with implicit yielding cues? 77  Bibliography [1] International Federation of Robotics, “Executive Summary World Robotics 2018 Service Robots,” 2018. [Online]. Available: https://ifr.org/downloads/press2018/Executive_Summary_WR_Service_Robots_2018.pdf. [2] M. Joerss, F. Neuhaus, and J. Schröder, “How customer demands are reshaping last-mile delivery,” 2016. Accessed: Jul. 31, 2020. [Online]. Available: https://www.mckinsey.com/industries/travel-transport-and-logistics/our-insights/how-customer-demands-are-reshaping-last-mile-delivery. [3] D. Helbing, P. Molnár, I. J. Farkas, and K. Bolay, “Self-organizing pedestrian movement,” Environment and Planning B: Planning and Design, vol. 28, no. 3, pp. 361–383, 2001, doi: 10.1068/b2697. [4] J. Snape, J. van den Berg, S. J. Guy, and D. Manocha, “Smooth and collision-free navigation for multiple robots under differential-drive constraints,” in 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Oct. 2010, pp. 4584–4589, doi: 10.1109/IROS.2010.5652073. [5] P. Collett and P. 
Marsh, “Patterns of public behaviour: Collision avoidance on a pedestrian crossing,” in Nonverbal Communication, Interaction, and Gesture: Selections from SEMIOTICA, De Gruyter, Inc, 1981, pp. 199–217. [6] Y. F. Chen, M. Everett, M. Liu, and J. P. How, “Socially aware motion planning with deep reinforcement learning,” in IEEE International Conference on Intelligent Robots and Systems, 2017, pp. 1343–1350, doi: 10.1109/IROS.2017.8202312. [7] P. Trautman, J. Ma, R. M. Murray, and A. Krause, “Robot navigation in dense human crowds: Statistical models and experimental studies of human-robot cooperation,” International Journal of Robotics Research, vol. 34, no. 3, pp. 335–356, 2015, doi: 10.1177/0278364914557874. [8] T. Kruse, A. K. Pandey, R. Alami, and A. Kirsch, “Human-aware robot navigation: A survey,” Robotics and Autonomous Systems, vol. 61, no. 12, pp. 1726–1743, 2013, doi: 10.1016/j.robot.2013.05.007. [9] D. V. Lu and W. D. Smart, “Towards more efficient navigation for robots and humans,” in 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Nov. 2013, pp. 1707–1713, doi: 10.1109/IROS.2013.6696579. [10] E. A. Sisbot, L. F. Marin-Urias, X. Broquère, D. Sidobre, and R. Alami, “Synthesizing robot motions adapted to human presence: A planning and control framework for safe and socially acceptable robot motions,” International Journal of Social Robotics, vol. 2, no. 3, pp. 329–343, 2010, doi: 10.1007/s12369-010-0059-6. [11] C. Lichtenthäler, T. Lorenzy, and A. Kirsch, “Influence of legibility on perceived safety in a virtual human-robot path crossing task,” in 2012 IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication, 2012, pp. 676–681, doi: 10.1109/ROMAN.2012.6343829. [12] P. A. Lasota, T. Fong, and J. A. Shah, “A Survey of Methods for Safe Human-Robot Interaction,” Foundations and Trends in Robotics, vol. 5, no. 3, pp. 261–349, 2017, doi: 10.1561/2300000052.   78  [13] A. St. Clair and M. Mataric, “How Robot Verbal Feedback Can Improve Team Performance in Human-Robot Task Collaborations,” in ACM/IEEE International Conference on Human-Robot Interaction, 2015, pp. 213–220, doi: 10.1145/2696454.2696491. [14] J. Thomas and R. Vaughan, “Right of Way, Assertiveness and Social Recognition in Human-Robot Doorway Interaction,” IEEE International Conference on Intelligent Robots and Systems, pp. 333–339, 2019, doi: 10.1109/IROS40897.2019.8967862. [15] A. Moon et al., “Meet me where I’m gazing: How Shared Attention Gaze Affects Human-Robot Handover Timing,” ACM/IEEE International Conference on Human-Robot Interaction, pp. 334–341, 2014, doi: 10.1145/2559636.2559656. [16] K. Fischer, L. C. Jensen, S. D. Suvei, and L. Bodenhagen, “Between legibility and contact: The role of gaze in robot approach,” 25th IEEE International Symposium on Robot and Human Interactive Communication, RO-MAN 2016, pp. 646–651, 2016, doi: 10.1109/ROMAN.2016.7745186. [17] A. D. May, C. Dondrup, and M. Hanheide, “Show me your moves! Conveying navigation intention of a mobile robot to humans,” 2015 European Conference on Mobile Robots, ECMR 2015 - Proceedings, pp. 1–6, 2015, doi: 10.1109/ECMR.2015.7324049. [18] M. C. Shrestha et al., “Intent communication in navigation through the use of light and screen indicators,” in ACM/IEEE International Conference on Human-Robot Interaction, 2016, pp. 523–524, doi: 10.1109/HRI.2016.7451837. [19] M. C. Shrestha, T. Onishi, A. Kobayashi, M. Kamezaki, and S. 
Sugano, “Communicating Directional Intent in Robot Navigation using Projection Indicators,” in RO-MAN 2018 - 27th IEEE International Symposium on Robot and Human Interactive Communication, 2018, pp. 746–751, doi: 10.1109/ROMAN.2018.8525528. [20] R. T. Chadalavada, H. Andreasson, R. Krug, and A. J. Lilienthal, “That’s on my mind! robot to human intention communication through on-board projection on shared floor space,” in 2015 European Conference on Mobile Robots (ECMR), 2016, pp. 1–6, doi: 10.1109/ecmr.2015.7403771. [21] A. Watanabe, T. Ikeda, Y. Morales, K. Shinozawa, T. Miyashita, and N. Hagita, “Communicating robotic navigational intentions,” in 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2015, pp. 5763–5769, doi: 10.1109/IROS.2015.7354195. [22] A. D. Dragan, K. C. T. Lee, and S. S. Srinivasa, “Legibility and predictability of robot motion,” in ACM/IEEE International Conference on Human-Robot Interaction, 2013, pp. 301–308, doi: 10.1109/HRI.2013.6483603. [23] Y. Zhang, S. Sreedharan, A. Kulkarni, T. Chakraborti, H. H. Zhuo, and S. Kambhampati, “Plan explicability and predictability for robot task planning,” in 2017 IEEE International Conference on Robotics and Automation (ICRA), 2017, pp. 1313–1320, doi: 10.1109/ICRA.2017.7989155. [24] C. Lichtenthäler and A. Kirsch, “Goal-predictability vs. trajectory-predictability - Which legibility factor counts,” in ACM/IEEE International Conference on Human-Robot Interaction, 2014, pp. 228–229, doi: 10.1145/2559636.2559802.   79  [25] M. Lauckner, F. Kobiela, and D. Manzey, “’Hey robot, please step back!’- Exploration of a spatial threshold of comfort for human-mechanoid spatial interaction in a hallway scenario,” IEEE RO-MAN 2014 - 23rd IEEE International Symposium on Robot and Human Interactive Communication: Human-Robot Co-Existence: Adaptive Interfaces and Systems for Daily Life, Therapy, Assistance and Socially Engaging Interactions, pp. 780–787, 2014, doi: 10.1109/ROMAN.2014.6926348. [26] C. Dondrup, C. Lichtenthäler, and M. Hanheide, “Hesitation Signals in Human-Robot Head-on Encounters: A Pilot Study,” in Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction, 2014, pp. 154–155, doi: 10.1145/2559636.2559817. [27] C. Ackermann, M. Beggiato, L. F. Bluhm, A. Löw, and J. F. Krems, “Deceleration parameters and their applicability as informal communication signal between pedestrians and automated vehicles,” Transportation Research Part F: Traffic Psychology and Behaviour, vol. 62, pp. 757–768, Apr. 2019, doi: 10.1016/j.trf.2019.03.006. [28] S. Gupta, M. Vasardani, and S. Winter, “Negotiation Between Vehicles and Pedestrians for the Right of Way at Intersections,” IEEE Transactions on Intelligent Transportation Systems, vol. 20, no. 3, pp. 888–899, Mar. 2019, doi: 10.1109/TITS.2018.2836957. [29] F. G. Kaiser, K. Glatte, and M. Lauckner, “How to make nonhumanoid mobile robots more likable: Employing kinesic courtesy cues to promote appreciation,” Applied Ergonomics, vol. 78, pp. 70–75, Jul. 2019, doi: 10.1016/j.apergo.2019.02.004. [30] S. Akita, S. Satake, M. Shiomi, M. Imai, and T. Kanda, “Social Coordination for Looking-Together Situations,” in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Oct. 2018, pp. 834–841, doi: 10.1109/IROS.2018.8594141. [31] A. Moon, C. A. C. Parker, E. A. Croft, and H. F. M. Van der Loos, “Design and Impact of Hesitation Gestures during Human-Robot Resource Conflicts,” J. Hum.-Robot Interact., vol. 2, no. 3, pp. 
18–40, Sep. 2013, doi: 10.5898/JHRI.2.3.Moon. [32] J. Reinhardt, A. Pereira, D. Beckert, and K. Bengler, “Dominance and movement cues of robot motion: A user study on trust and predictability,” in 2017 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2017, Nov. 2017, pp. 1493–1498, doi: 10.1109/SMC.2017.8122825. [33] A. Joshi, S. Kale, S. Chandel, and D. Pal, “Likert Scale: Explored and Explained,” British Journal of Applied Science & Technology, vol. 7, no. 4, pp. 396–403, 2015, doi: 10.9734/bjast/2015/14975. [34] L. J. Cronbach, “Coefficient Alpha and the Internal Structure of Tests,” Psychometrika, vol. 16, no. 3, pp. 297–334, 1951, doi: 10.1007/BF02310555. [35] R. P. McDonald, Test Theory. New York: Psychology Press, 2013. [36] B. B. Frey, “Mixed Model Analysis of Variance,” The SAGE Encyclopedia of Educational Research, Measurement, and Evaluation, pp. 1075–1078, 2018, doi: 10.4135/9781506326139.n436. [37] A. Field, “Normality Assumption,” in Discovering Statistics Using IBM SPSS Statistics, 4th ed., Sage Publications Ltd., 2013, p. 245. [38] K. A. Pituch and J. P. Stevens, “Mixed Model Analysis of Variance,” in Applied Multivariate Statistics for the Social Sciences: Analyses with SAS and IBM’s SPSS. 6th ed., Routledge, 2016, pp. 220, 480, 499–500.   80  [39] N. Salkind, “Greenhouse-Geisser and Hyunh-Feldt Corrections,” in Encyclopedia of Research Design, Thousand Oaks, California, 2010, pp. 545–546. [40] J. O. Wobbrock, L. Findlater, D. Gergle, and J. J. Higgins, “The Aligned Rank Transform for nonparametric factorial analyses using only ANOVA procedures,” in Conference on Human Factors in Computing Systems - Proceedings, 2011, pp. 143–146, doi: 10.1145/1978942.1978963. [41] W. Haynes, “Bonferroni Correction,” in Encyclopedia of Systems Biology, W. Dubitzky, O. Wolkenhauer, K.-H. Cho, and H. Yokota, Eds. New York, NY: Springer, 2013, p. 154. [42] J. Cohen, “Effect Size,” in Statistical power analysis for the behavioral sciences, 2nd ed., Hillsdale, N.J: L. Erlbaum Associates, 1988, p. 286. [43] D. Cousineau and F. O’Brien, “Error bars in within-subject designs: a comment on Baguley (2012).,” Behavior research methods, vol. 46, no. 4, pp. 1149–1151, Dec. 2014, doi: 10.3758/s13428-013-0441-z. [44] S. N. Woods, M. L. Walters, K. L. Koay, and K. Dautenhahn, “Methodological Issues in HRI: A Comparison of Live and Video-Based Methods in Robot to Human Approach Direction Trials,” in ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication, 2006, pp. 51–58, doi: 10.1109/ROMAN.2006.314394. [45] M. Madsen, “The development of a psychometric instrument for human-computer trust : an investigation of trust within the context of computer-aided decision-making,” Central Queensland University, 2000. [46] A. Field, “Section 7.4.2: Pearson’s correlation coefficient,” in Discovering Statistics Using IBM SPSS Statistics, 4th ed., Sage Publications Ltd., 2013. [47] S. Kauppinen, C. Brain, and M. Moore, “European medium-term conflict detection field trials [ATC],” in Proceedings. The 21st Digital Avionics Systems Conference, Oct. 2002, vol. 1, pp. 2C1-2C1, doi: 10.1109/DASC.2002.1067918. [48] R. Fernandez, N. John, S. Kirmani, J. Hart, J. Sinapov, and P. Stone, “Passive Demonstrations of Light-Based Robot Signals for Improved Human Interpretability,” in RO-MAN 2018 - 27th IEEE International Symposium on Robot and Human Interactive Communication, 2018, pp. 234–239, doi: 10.1109/ROMAN.2018.8525728. [49] A. 
Whitbrook, Programming mobile robots with aria and player: A guide to C++ object-oriented control, 1. Aufl. London ; New York: Springer, 2010. [50] J. F. Kelley, “An iterative design methodology for user-friendly natural language office information applications,” ACM Transactions on Information Systems (TOIS), vol. 2, no. 1, pp. 26–41, 1984, doi: 10.1145/357417.357420. 81  Appendices Appendix A  Implementation Details This appendix provides implementation details for the robot behaviour legibility cues developed in this thesis. Section A.1 describes the robot hardware used for the cues, and Section A.2 describes the software implementation.  A.1 Robot Hardware The mobile robot I used for this research is comprised of a mobile base and several auxiliary components. Figure A.1 shows a schematic and Figure A.2 shows an annotated image, both of which contain generic component names. Table A.1 contains a parts list with a specific name for each general component name. I added the following components for prototyping the robot behaviour legibility cues: microcontroller; LED drivers; LEDs; light projector. My colleagues added the other components before I started the research for this thesis. Figure A.3 contains specifications for the PowerBot mobile base. Figure A.4 shows the projection area of the light projector.  Figure A.1: Schematic of the auxiliary components added to the mobile base. 82    Figure A.2: Annotated image showing the auxiliary components added to the PowerBot mobile base. The LED covers are made of plastic cups and coloured tissue paper.   Table A.1: Parts list for the auxiliary components added to the mobile base. Generic Name Part Mobile Base Adept Mobile Robots PowerBot 2D LiDAR SICK LMS200 Computer Intel NUC -  Ubuntu 16 and ROS Kinetic WiFi Router Netgear Nighthawk AC2600 Remote Joystick Sony Playstation 3 Controller Microcontroller Arduino Uno LED Drivers (2) SparkFun FemtoBuck LED Driver LEDs (2) iPixel 3W LED Star Projector NEC VT700    ComputerLED Cover MicrocontrollerRemote Joystick Wired Joystick WiFi Router LED Driver Projector Mounts83   Figure A.3: Relevant sections of the PowerBot data sheet. All dimensions are in cm.   84   Figure A.4: Diagram of the projection area of the light projector used in Study 1 (not to scale). The area is not rectangular because the projector faces the ground at an angle.     85  A.2 Robot Software The robot behaviour legibility cues in this thesis were implemented using the Robot Operating System (ROS) middleware. I do not attempt to explain ROS in depth. ROS nodes are programs that communicate data over topics (data buses) on the ROS network. The PowerBot’s internal computer runs the Aria program [49] for motion control and sensor interfacing. The open-source RosAria node facilitates interfacing between other ROS nodes and Aria. To move the PowerBot in this research, other ROS nodes send velocity commands to the RosAria node. Using a software multiplexer, the remote joystick allows researchers to override velocity commands from other ROS nodes and take control of the robot. Figure A.5 illustrates how an arbitrary ROS node Ni sends velocity commands to the PowerBot. RosAria also provides sensor data and internal PowerBot parameters to other ROS nodes.  Figure A.5: High-level software diagram illustrating ROS control of the PowerBot. Both the motion legibility cues (Study 1) and the yielding cues (Study 2) use the framework described above to interface with the PowerBot. 
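To make this interface concrete, the listing below is a minimal sketch (in Python, using rospy) of how an arbitrary ROS node Ni could publish velocity commands to the PowerBot through RosAria. It is illustrative only: the enclosed CommBot and DriveBase packages are the actual implementations, and the topic name shown here is the RosAria default, which may be remapped by the software multiplexer on the real robot.

```python
#!/usr/bin/env python
# Minimal sketch of a node N_i sending velocity commands to the PowerBot via
# RosAria (cf. Figure A.5). Illustrative only; not the enclosed packages.
import rospy
from geometry_msgs.msg import Twist

def drive_forward_briefly():
    rospy.init_node('example_velocity_publisher')
    # RosAria accepts geometry_msgs/Twist velocity commands on this topic
    # (default name; the multiplexer may remap it on the real robot).
    cmd_pub = rospy.Publisher('/RosAria/cmd_vel', Twist, queue_size=1)
    rate = rospy.Rate(10)  # publish at 10 Hz
    cmd = Twist()
    cmd.linear.x = 0.3     # drive forward at 0.3 m/s
    end_time = rospy.Time.now() + rospy.Duration(2.0)
    while not rospy.is_shutdown() and rospy.Time.now() < end_time:
        cmd_pub.publish(cmd)
        rate.sleep()
    cmd_pub.publish(Twist())  # zero velocity: stop the robot

if __name__ == '__main__':
    try:
        drive_forward_briefly()
    except rospy.ROSInterruptException:
        pass
```

In the actual system, commands from the remote joystick pass through the same software multiplexer and can therefore override any node publishing on this interface.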
Three ROS packages are enclosed with this thesis as supplementary material: CommBot, DriveBase, and PowerBotNav. I wrote the CommBot (short for “communicating robot”) package for Study 1. Ryan Lee wrote the DriveBase package for Study 2. The PowerBotNav package contains configuration files for running the ROS Navigation Stack on the PowerBot mobile base. Each package contains a README.md file with running instructions. All code was tested with ROS Kinetic. The sections below describe the implementation for each study separately. Software Multiplexer  Remote Joystick ROS Node Ni RosAria Velocity Velocity Velocity PowerBot ROS Node Nj Sensor Data & Robot Parameters 86  Software Implementation for Study 1 – Robot Motion Legibility Cues I designed the motion legibility cues for use with the popular open-source ROS Navigation Stack, which uses the MoveBase node to move the robot to waypoints in the environment. The Navigation Stack uses 2D LiDAR data to localize the robot in a map of the environment and plan a path to the specified waypoint. The MoveBase node reacts to both static and dynamic obstacles.  I wrote two main ROS nodes for this thesis, one called Supervisor and one called CommBot. I manually specified a series of waypoints {W0, …, Wn}, which the Supervisor node uses to move the robot through the environment. The CommBot node uses the waypoints from the Supervisor node to animate the cues in goal mode and uses the planned path from the MoveBase node to animate the cues in path mode. MoveBase plans both a local and global path through the environment to the waypoint. I used the global path because it changes less drastically than the local path, but one could use the local path by simply changing a ROS parameter.  The Supervisor node uses the MoveBase node to move the robot to Wi. Simultaneously, the Supervisor node sends Wi and Wi+1 to the CommBot node. Once the robot arrives at Wi, the Supervisor node sends Wi+1 to the MoveBase node and sends Wi+1 and Wi+2 to the CommBot Node but holds the robot still for 5 seconds. The cues animate for 5 seconds based on Wi+1 and Wi+2, giving observers more time to understand before the robot resumes movement.  Figure A.6 shows how these nodes interact.  87   Figure A.6: Graph of the ROS network when the motion legibility cues are used with the ROS Navigation Stack. Ovals represent nodes, rectangles and arrows represent topics. Rectangles around nodes and topics represent namespaces. The commbot/arduino_serial node runs on the Arduino microcontroller to control the lights cue type. All other nodes run on the NUC computer. The light projector shows the arrows cue type on the ground using Rviz, the ROS data visualization program. The NUC computer runs Rviz and displays it through the projector.     88  I used a “Wizard of Oz” approach in Study 1 to ensure the robot moved consistently in the videos used for data collection. Using the system described above, I recorded the following data as I moved the robot through the environment: the waypoints from the Supervisor node, the planned path from the MoveBase node, the velocity commands sent to the RosAria node, and the coordinate transforms that represented the robot’s pose. While capturing the videos, I used these recorded data to move the robot and animate the motion legibility cues. The CommBot node ran online to animate the cues while the recorded data moved the robot. Figure A.7 illustrates the system used in this method.   
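As a concrete illustration of the Supervisor hand-off described above, the sketch below sequences waypoints through the MoveBase action interface and notifies a CommBot-like node before each leg. This is a simplified sketch, not the enclosed CommBot package: the CommBot topic name and message type are assumptions, and the ordering of the 5-second hold is approximated.

```python
#!/usr/bin/env python
# Simplified sketch of Supervisor-style waypoint sequencing. The CommBot-side
# topic name and message type are assumed for illustration; see the enclosed
# CommBot package for the actual implementation.
import rospy
import actionlib
from geometry_msgs.msg import PoseArray
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

def run_supervisor(waypoints):
    """waypoints: list of geometry_msgs/PoseStamped, [W0, ..., Wn]."""
    rospy.init_node('supervisor_sketch')
    # Hypothetical topic on which a CommBot-like node receives the current
    # and next waypoint to animate the cues in goal mode.
    cue_pub = rospy.Publisher('/commbot/waypoints', PoseArray, queue_size=1)
    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()

    for i, wp in enumerate(waypoints):
        nxt = waypoints[min(i + 1, len(waypoints) - 1)]
        # Publish W_i and W_{i+1} so the cues can animate, then hold still
        # for 5 seconds to give observers time to read the cue.
        cue_pub.publish(PoseArray(header=wp.header, poses=[wp.pose, nxt.pose]))
        rospy.sleep(5.0)
        client.send_goal(MoveBaseGoal(target_pose=wp))
        client.wait_for_result()  # block until the robot reaches W_i
```

In path mode, the CommBot node takes the planned global path directly from MoveBase, as described above, so the Supervisor only needs to hand off waypoints.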
Figure A.7: Graph of the ROS network showing the “Wizard of Oz” set up used for capturing videos of the motion legibility cues. The rosbag_player node sends the recorded data to other nodes. The recorded data are included as “rosbag” files in the enclosed CommBot package. There are four .bag files named with the pattern “a/c_left/right.bag”. “A” and “C” refer to the destinations in Figure 3.4. A is the “turn” scenario and C is the “straight” scenario described in 3.1.3. “Left” and “right” refer to the direction the robot takes to avoid the obstacle in Figure 3.4. To record the videos for Study 1, I used these four .bag files and enabled different combinations of the cues using the Rqt Reconfigure program as described below. 89   Figure A.8: General diagram of the CommBot node.  The CommBot node uses a Commbot object. The LightsFlasher and Projector classes are members of the Commbot class. LightsFlasher and Projector inherit from the Cue class. The LightsFlasher controls the lights cue type and the Projector controls the arrows cue type. The path input comes from the MoveBase node and the waypoints (Wi and Wi+1) are described above. Rqt Reconfigure is a graphical user interface (GUI) that allows researchers to enable different combinations of cue types and cue modes. Rqt Reconfigure also allows researchers to tune the cues with various parameters and settings. In the system in Figure A.1, the laptop runs Rqt Reconfigure; the computer runs Rviz and displays it through the projector for the arrows cue type. Note that shapes in this diagram are different than in the ROS graphs in Figure A.6 and Figure A.7.90  Software Implementation for Study 2 – Robot Yielding Cues As described in the preface to this thesis, Ryan Lee programmed the robot yielding cues with my guidance. He wrote one ROS node, called DriveBase. The DriveBase node implemented the yielding cues as a series of velocity commands, which controlled the robot movements for each cue. The DriveBase node sent the velocity commands to the PowerBot as shown in Figure A.5. He used keyboard commands from the laptop to tell the DriveBase node which robot yielding cue to execute. The cues are implemented for demonstration only – the robot does not respond to the physical environment or the presence of pedestrians. The DriveBase package, which contains the DriveBase node, is enclosed with this thesis as supplementary material.   91  Appendix B  Data Analysis Details B.1 Data Analysis Details for Study 1 – Robot Motion Legibility Cues The analysis details are organized by outcome measure: social acceptability, path mode comprehension (Statement 1.3), and goal mode comprehension (Statement 1.4). Social Acceptability ANOVA Assumption Testing – Social Acceptability Table B.1: Mauchly test for sphericity of Social Acceptability Score. Effect ε Value p Value Cue Mode 0.85 < .001 Cue Mode x Cue Type 0.85 < .001 Cue Mode x Robot Movement Scenario 0.95 0.005 3-way Interaction 0.95 0.005  Table B.2: Shapiro-Wilks test for normality of Social Acceptability Score. 
Cue Type Cue Mode Robot Movement W Statistic p Value Arrows Path Turn 0.79 < .001 Arrows Path Straight 0.88 < .001 Arrows Goal Turn 0.87 < .001 Arrows Goal Straight 0.87 < .001 Arrows Path&Goal Turn 0.85 < .001 Arrows Path&Goal Straight 0.86 < .001 Lights Path Turn 0.92 < .001 Lights Path Straight 0.97 0.017 Lights Goal Turn 0.90 < .001 Lights Goal Straight 0.97 0.014 Lights Path&Goal Turn 0.86 < .001 Lights Path&Goal Straight 0.98 0.108  Table B.3: Levene test for homogeneity of variance of Social Acceptability Score. Cue Type Cue Mode Num. DOF Denom. DOF F Statistic p Value Arrows Path 1 227 6.5 0.011 Arrows Goal 1 227 0.1 0.701 Arrows Path&Goal 1 227 1.1 0.290 Lights Path 1 227 1.5 0.222 Lights Goal 1 227 10.8 0.001 Lights Path&Goal 1 227 28.3 0.000 92  Pairwise Comparisons – Social Acceptability  Table B.4: Main effect of cue type on Social Acceptability Score. Contrast Mean Diff. Standard Error DOF t Statistic p Value Pearson’s r Arrows - Lights 0.6 0.05 227 11 < .001 0.59  Table B.5: Main effect of cue mode on Social Acceptability Score. Contrast Mean Diff. Standard Error DOF t Statistic p Value Pearson’s r Path - Goal 0.1 0.04 454 1.4 0.523 0.06 Path - Path&Goal -0.1 0.04 454 -2.7 0.024 0.12 Goal - Path&Goal -0.2 0.04 454 -4 < .001 0.19  Table B.6: Cue type and cue mode interaction on Social Acceptability Score. Comparing cue types. Contrast Mode Mean Diff. Standard Error DOF t Statistic p Value Pearson’s r Arrows - Lights Path 0.8 0.07 554 11.8 < .001 0.45 Arrows - Lights Goal 0.4 0.07 554 5.7 < .001 0.23 Arrows - Lights Path&Goal 0.5 0.07 554 7.1 < .001 0.29  Table B.7: Cue type and cue mode interaction on Social Acceptability Score. Comparing cue modes. Contrast Type Mean Diff. Standard Error DOF t Statistic p Value Pearson’s r Path - Goal Arrows 0.3 0.06 904 4.7 < .001 0.15 Path - Path&Goal Arrows 0 0.06 904 0.9 1 0.03 Goal - Path&Goal Arrows -0.2 0.06 904 -3.8 < .001 0.13 Path - Goal Lights -0.2 0.06 904 -2.7 0.023 0.09 Path - Path&Goal Lights -0.3 0.06 904 -4.7 < .001 0.16 Goal - Path&Goal Lights -0.1 0.06 904 -2.1 0.119 0.07  Table B.8: Cue type and scenario interaction on Social Acceptability Score. Comparing cue types. Contrast Scenario Mean Diff. Standard Error DOF t Statistic p Value Pearson’s r Arrows - Lights Turn 0.2 0.07 227 2.3 0.025 0.15 Arrows - Lights Straight 1 0.08 227 13 < .001 0.65  Table B.9: Cue type and scenario interaction on Social Acceptability Score. Comparing scenarios. Contrast Type Mean Diff. Standard Error DOF t Statistic p Value Pearson’s r Turn - Straight Arrows 0 0.09 405 0.5 0.583 0.03 Turn - Straight Lights 0.9 0.09 405 9.6 < .001 0.43   93  Pairwise Comparisons – Social Acceptability  Table B.10: Three-way interaction on Social Acceptability Score. Comparing cue types. Contrast Mode Scenario Mean Diff. Standard Error DOF t Statistic p Value Pearson's r Arrows - Lights Path Turn 0.8 0.09 554 8.1 < .001 0.33 Arrows - Lights Goal Turn -0.1 0.09 554 -1.3 0.192 0.06 Arrows - Lights Path&Goal Turn -0.2 0.09 554 -1.7 0.085 0.07 Arrows - Lights Path Straight 0.9 0.1 554 8.6 < .001 0.34 Arrows - Lights Goal Straight 0.9 0.1 554 9 < .001 0.36 Arrows - Lights Path&Goal Straight 1.1 0.1 554 11.4 < .001 0.44  Table B.11: Three-way interaction on Social Acceptability Score. Comparing cue modes. Contrast Type Scenario Mean Diff. 
Standard Error DOF t Statistic p Value Pearson's r Path - Goal Arrows Turn 0.5 0.08 904 5.7 < .001 0.19 Path - Path&Goal Arrows Turn 0.2 0.08 904 3 0.007 0.1 Goal - Path&Goal Arrows Turn -0.2 0.08 904 -2.7 0.022 0.09 Path - Goal Lights Turn -0.4 0.08 904 -5.5 < .001 0.18 Path - Path&Goal Lights Turn -0.7 0.08 904 -8.7 < .001 0.28 Goal - Path&Goal Lights Turn -0.3 0.08 904 -3.2 0.004 0.11 Path - Goal Arrows Straight 0.1 0.08 904 1 0.971 0.03 Path - Path&Goal Arrows Straight -0.1 0.08 904 -1.7 0.268 0.06 Goal - Path&Goal Arrows Straight -0.2 0.08 904 -2.7 0.022 0.09 Path - Goal Lights Straight 0.1 0.08 904 1.5 0.417 0.05 Path - Path&Goal Lights Straight 0.1 0.08 904 1.7 0.29 0.06 Goal - Path&Goal Lights Straight 0 0.08 904 0.2 1 0.01    94  Pairwise Comparisons for Ordinal Data Table B.12: Main effect of cue mode on path comprehension (Statement 1.3). Contrast Rank Diff. Standard Error DOF t Statistic p Value Path - Goal 88.8 21.4 454 4.1 < .001 Path - Path&Goal -89.4 21.4 454 -4.2 < .001 Goal - Path&Goal -178.2 21.4 454 -8.3 < .001  Table B.13: Main effect of cue mode on goal comprehension (Statement 1.4). Contrast Rank Diff. Standard Error DOF t Statistic p Value Path - Goal -145.9 20.7 454 -7.1 < .001 Path - Path&Goal -84.8 20.7 454 -4.1 < .001 Goal - Path&Goal 61.0 20.7 454 3.0 0.010   Table B.14: Cue mode identifiability (Question 3). Contrast Rank Diff. Standard Error DOF t Statistic p Value Path - Goal 178.8 16.9 454 10.6 < .001 Path - Path&Goal 189.2 16.9 454 11.2 < .001 Goal - Path&Goal 10.4 16.9 454 0.6 1       95  B.2 Data Analysis Details for Study 2 – Robot Yielding Cues The terminology used in this section is slightly different than in Chapter 4. Here, “movement cues” refers to the robot yielding cues; “reverse” refers to the Retreat cue; “slow” refers to the Decelerate cue. We also use “trust” in place of “trustworthiness”.  Table B.15: Normality tests for the comprehensibility data.   Table B.16: Normality tests for the trustworthiness data.     96  Table B.17: Sphericity tests for the comprehensibility data.   Table B.18: Sphericity tests for the trustworthiness data.    97  Table B.19: Pairwise comparisons for the comprehensibility data. Numbers in the first 2 columns refer to the cues: (1) Stop; (2) Retreat; (3) Tilt; (4) Nudge; (5) Decelerate     98  Table B.20: Pairwise comparisons for the trustworthiness data. Numbers in the first 2 columns refer to the cues: (1) Stop; (2) Retreat; (3) Tilt; (4) Nudge; (5) Decelerate   99  Table B.21: Pearson correlation analysis between comprehensibility and trustworthiness of each robot yielding cue. Recall that “reverse” refers to the Retreat cue, and “slow” refers to the Decelerate cue.  100  Appendix C  Surveys, Consent Letters, and Advertisements This appendix provides copies of the online surveys used in this thesis, along with the corresponding consent letters and advertisements. I present the surveys as a series of screenshots. Unless otherwise specified, participants saw the screenshots as separate pages in the survey in the order they appear here. Before proceeding through the survey to the next page, participants were required to answer each question except the text entry questions. The slider questions (e.g. Figure C.7) had a default slider position, but participants were forced to move the slider to respond; they could move it back to its default position. After each survey finished, participants were shown my “...@alumni.ubc.ca” email, which was also included in the consent letters shown below.   
The surveys used embedded videos that were hosted on YouTube. The embedded video player allowed participants to view individual videos on www.youtube.com if they clicked the “Watch Later” or “Share” buttons shown in Figure C.5. However, the videos were hosted as “Unlisted”, so participants could only view the survey videos through the Qualtrics survey tool. On each video page, the subsequent questions only appeared after participants played the video. The videos are enclosed with this thesis as supplementary material. The Study 1 video files are named with the pattern “type_mode_scenario.mov”. The Study 2 video files have the yielding cue in their names.  Section C.1 contains the survey, consent letter, and advertisement for Study 1. Section C.2 contains the survey, consent letter, and advertisement for Study 2.   101  C.1 Survey, Consent Letter, and Advertisement for Study 1 – Motion Legibility Cues In this survey, I used the phrase “communication cues” to refer to the robot motion legibility cues. “Orange lights” and “green arrows” refer to the flashing lights and projected arrows cue types, respectively. “Next movement” and “final destination” refer to the path and goal cue modes, respectively. Survey  Figure C.1: CAPTCHA and consent letter. 102   Figure C.2: Demographics questions. 103   Figure C.3: Robotics experience demographic question. This question was only shown if participants responded “yes” to the last question shown in Figure C.2. 104   Figure C.4: Instructions. 105   Figure C.5: Video page with attention check question. This image was shown above Figure C.6 on the same survey page. Participants had to watch the video to proceed. This page (including Figure C.6) repeated, showing a different video each time. Refer to Section 3.1.4 for details about which videos were shown. 106   Figure C.6: Question 1 – Likert statements. Repeated with Figure C.5.  107   Figure C.7:  Question 2 – cue type preference.   108   Figure C.8: Question 3 – cue mode clarity. This question was repeated twice, once with “orange lights” and once with “green arrows”, inserted as shown.      “orange lights” or “green arrows” inserted 109   Figure C.9: Question 4 – cue mode preference. This question was repeated twice, once with “orange lights” and once with “green arrows”, inserted as shown. “orange lights” or “green arrows” inserted 110   Figure C.10: Last page. A unique ID was inserted as shown.   ID inserted 111  Consent Letter  Figure C.11: Page 1 of the consent letter for Study 1.  112   Figure C.12: Page 2 of the consent letter for Study 1.113  Advertisement  Figure C.13: Study 1 recruitment advertisement on Amazon Mechanical Turk. Participants submitted the unique ID shown in Figure C.10. I used Python scripts to check responses against the attention check question shown in Figure C.5. Because subsets of participants failed the attention check question, I collected data in multiple batches. The “Qualifications Required” mechanism prevented repeat participants. I used the same method in Study 2.  114  C.2 Survey, Consent Letter, and Advertisement for Study 2 - Robot Yielding Cues In this survey, we used “movement cues” to refer to the robot yielding cues. We wanted participants to interpret each cue for themselves. Survey  Figure C.14: CAPTCHA and consent letter.   115   Figure C.15: Instructions.   116   Figure C.16: Familiarization page.   117    Figure C.17: Video page with Question 1 – attention check and Question 2 – interpretation. 
This image was shown above Figure C.18 on the same survey page. Participants had to watch the video to proceed. This page (including Figure C.18) repeated, showing a different cue each time.
Figure C.18: Question 3 – Likert statements. Repeated with Figure C.17.
Figure C.19: Demographic questions.
Figure C.20: Last page.
Consent Letter
Figure C.21: Page 1 of the consent letter for Study 2.
Figure C.22: Page 2 of the consent letter for Study 2.
Advertisement
Figure C.23: Study 2 recruitment advertisement on Amazon Mechanical Turk. Participants submitted the unique ID shown in Figure C.20. I used the same method described in Figure C.13.
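The caption of Figure C.13 mentions Python scripts that checked each batch of responses against the attention-check question before the data were accepted. The snippet below is a hedged sketch of that kind of filtering step; the file name, column names, and expected answer are assumptions for illustration and do not reproduce the actual scripts or the Qualtrics export format.

```python
# Hedged sketch of attention-check filtering for one batch of survey
# responses. Column names, file name, and the expected answer are assumed.
import pandas as pd

EXPECTED_ANSWER = 'Yes'  # assumed correct response to the attention check

def filter_attention_checks(responses_csv):
    df = pd.read_csv(responses_csv)
    passed = df[df['attention_check'] == EXPECTED_ANSWER]
    failed_ids = df.loc[df['attention_check'] != EXPECTED_ANSWER, 'worker_id']
    # Failed responses are excluded; their slots are re-collected in a later
    # batch, and the MTurk Qualifications mechanism blocks repeat workers.
    return passed, list(failed_ids)

if __name__ == '__main__':
    passed, rejected = filter_attention_checks('study2_batch1.csv')
    print('{} valid responses, {} rejected'.format(len(passed), len(rejected)))
```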
