The Haptic Creature: Social Human-Robot Interaction through Affective Touch. Yohanan, Steven John (2012)

The Haptic Creature Social Human-Robot Interaction through Affective Touch by Steven John Yohanan  BS, The University of Wisconsin-Milwaukee, 1990 MS, The University of Wisconsin-Milwaukee, 1997  A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in The Faculty of Graduate Studies (Computer Science)  THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver) August 2012 c Steven John Yohanan 2012  Abstract Emotion communication is an important aspect of social interaction. Affect display research from psychology as well as social human-robot interaction has focused primarily on facial or vocal behaviors, as these are the predominant means of expression for humans. Much less attention, however, has been on emotion communication through touch, which, though unique among the senses, can be methodologically and technologically difficult to study. Our thesis investigated the role of affective touch in the social interaction between human and robot. Through a process of design and controlled user evaluation, we examined the display, recognition, and emotional influence of affective touch. To mitigate issues inherent in touch research, we drew from interaction models not between humans but between human and animal, whereby the robot assumes the role of companion animal. We developed the Haptic Creature, a small, zoomorphic robot novel in its sole focus on touch for both affect sensing and display. The robot perceives movement and touch, and it expresses emotions through ear stiffness, modulated breathing, and vibrotactile purring. The Haptic Creature was employed in three user studies, each exploring a different aspect of affective touch interaction. Our first study examined emotion display from the robot. We detail the design of the Haptic Creature’s affect display, which originated from animal models, then was enhanced through successive piloting. A formal study demonstrated the robot was more successful communicating arousal than valence. Our second study investigated affect display from the human. We compiled a touch dictionary from psychology and human-animal interaction research. Participants first rated the likelihood of using these touch gestures when expressing a variety of emotions, then performed likely gestures communicating specific emo-  ii  Abstract tions for the Haptic Creature. Results provided properties of human affect display through touch and high-level categorization of intent. Our final study explored the influence of affective touch. Results empirically demonstrated the human’s emotional state was directly influenced from affective touch interactions with the robot. Our research has direct significance to the field of socially interactive robotics and, further, any domain interested in human use of affective touch: psychology, mediated social touch, human-animal interaction.  iii  Preface This dissertation is composed of research I conducted during my tenure at the University of British Columbia. Except were noted in this preface, I am the primary contributor to all facets of this research, which was conducted under the supervision of Dr. Karon E. MacLean, who is my co-author on all work presented herein. The remainder of this preface enumerates additional collaborations, previously published works, and related ethics approvals. An early roadmap for this research has been previously published. 
It outlined the research approach in Chapter 1; touched upon the related robots in Chapter 2; and provided a preliminary overview of the Haptic Creature in Chapter 4. • Steve Yohanan and Karon MacLean. The Haptic Creature project: Social human-robot interaction through affective touch. In Proceedings of the AISB 2008 Symposium on the Reign of Catz & Dogz: The Second AISB Symposium on the Role of Virtual Creatures in a Computerised Society, volume 1 of AISB 2008, pages 7–11, April 2008. I conceived of and constructed the Hapticat as well as developed the related user study as presented in Chapter 3. This research was conducted as part of a course project in collaboration with graduate students Mavis Chan, Jeremy Hopkins, and Haibo Sun. Mavis Chan, in particular, provided contributions equal to my own, both in the Hapticat’s fabrication as well as in all aspects of the user study. This chapter’s work has been previously published. • Steve Yohanan, Mavis Chan, Jeremy Hopkins, Haibo Sun, and Karon MacLean. Hapticat: Exploration of affective touch. In Proceedings of the 7th International Conference on Multimodal Interfaces, ICMI ’05, pages 222– 229, New York, New York, USA, October 2005. ACM Press. iv  Preface I was the inventor and chief architect of the Haptic Creature described in Chapter 4. I was responsible for all aspects of the robot’s design and development, including its look and feel, behavior, software, and mechatronics. Such a considerable undertaking, however, would not have been successful without the assistance of manifold individuals under my direction. Tim Oxenford, an undergraduate, conducted the preliminary mechatronics investigation and, in the process, constructed an initial automated prototype. Undergraduates Noel Wu, Tinny Lai, and Kenneth Ng furthered this prototype by stabilizing the platform for actuation, sensing, and communication thereof. They also advanced the mechanics for the ears, breathing, and purring. Geoffrey Lo, an undergraduate, conducted a preliminary investigation of accelerometer use as well as designed the final purring mechanism. Matthew Baumann, a Masters student, designed and constructed the final version of the ears and breathing mechanism. Elaine Khaw, an undergraduate, contributed general mechanical engineering expertise as well as was instrumental in many finishing touches for the Haptic Creature’s shell. Undergraduate Dana Nielsen tirelessly wired and mounted all the touch sensors. Undergraduate Sandra Yuen Helsley developed an early software prototype that lead to the master panel graphical user interface component. Joseph P. Hall III, a Masters student, provided initial designs of the motor control board, which were subsequently stabilized by PhD Candidate Ricardo Pedrosa. Jonathan Chang, an undergraduate, designed and implemented a preliminary version of the gesture recognition engine and, as such, was the primary contributor to this area of research, with active support by Karon MacLean and myself. Sachiyo Takahashi, my wife, sourced and meticulously constructed all versions of the Haptic Creature’s fur. Finally, fellow PhD Candidates Mario Enriquez and Ricardo Pedrosa lent their considerable expertise in electronics and mechatronics countless times throughout the robot’s development. An early overview of the Haptic Creature has been previously published. • Steve Yohanan and Karon E. MacLean. A tool to study affective touch: Goals & design of the Haptic Creature. 
In Extended Abstracts on Human Factors in Computing Systems, CHI EA ’09, pages 4153–4158, New York, New York, USA, 2009. ACM.

Jonathan Chang’s proof-of-concept gesture recognition engine has also been previously published.

• Jonathan Chang, Karon MacLean, and Steve Yohanan. Gesture recognition in the Haptic Creature. In Astrid Kappers, Jan van Erp, Wouter Bergmann Tiest, and Frans van der Helm, editors, Haptics: Generating and Perceiving Tangible Sensations - EuroHaptics 2010, volume 6191 of Lecture Notes in Computer Science, pages 385–391. Springer Berlin / Heidelberg, 2010.

Dr. Jessica L. Tracy provided significant guidance on the methods and measures employed in the user study from Chapter 5. This chapter’s work has been previously published.

• Steve Yohanan and Karon E. MacLean. Design and assessment of the Haptic Creature’s affect display. In Proceedings of the 6th ACM/IEEE International Conference on Human-Robot Interaction, HRI ’11, pages 473–480, New York, New York, USA, March 2011. ACM.

Under my direction, undergraduates Jessica Dawson and Juliette Link took on the considerable task of coding the video recordings from the user study in Chapter 6. In addition, Dr. Matthew J. Hertenstein kindly provided specific definitions of several touch gestures used in his studies on human-to-human affective touch. This chapter’s work has been previously published.

• Steve Yohanan and Karon E. MacLean. The role of affective touch in human-robot interaction: Human intent and expectations in touching the Haptic Creature. International Journal of Social Robotics (SORO); Special Issue on Expectations, Intentions, & Actions, 4(2):163–180, April 2012.

Lang Qin, a Masters student in the Department of Statistics at the University of British Columbia, provided guidance on the statistical analysis for the user study in Chapter 7.

All user studies were approved by the University of British Columbia Behavioural Research Ethics Board (UBC BREB). The study presented in Chapter 3 was conducted under UBC BREB numbers B03-0490 and B03-0490. The studies presented in Chapters 5, 6, and 7 were conducted under UBC BREB number H01-80470.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
Acknowledgments
Dedication

1 Introduction
  1.1 Stella and Roi
  1.2 Motivation
  1.3 Research Goals
  1.4 Research Approach
    1.4.1 Preliminary Investigation
    1.4.2 Haptic Creature
    1.4.3 Interaction Decomposition
  1.5 Summary of Contributions
  1.6 Dissertation Roadmap
2 Related Work
  2.1 Affect
    2.1.1 Discrete versus Dimensional Models
    2.1.2 Face as Primary Means of Display
    2.1.3 Significance in Social Contexts
  2.2 Touch
    2.2.1 Social Touch
    2.2.2 Affective Touch
    2.2.3 Mediated Social Touch
    2.2.4 Methodological Issues in the Study of Touch
  2.3 Human-Animal Interaction
    2.3.1 Anthropomorphism and Animal Emotions
    2.3.2 Influence of Human-Animal Interaction
  2.4 Socially Interactive Robots
    2.4.1 Social Interaction Research
    2.4.2 Differentiating the Haptic Creature
  2.5 Summary
3 Preliminary Investigation
  3.1 Hapticat Design and Implementation
    3.1.1 Prototype Actuation
    3.1.2 Body
    3.1.3 Ears
    3.1.4 Breathing Mechanism
    3.1.5 Purring Mechanism
    3.1.6 Warming Element
    3.1.7 Response Settings
  3.2 User Study
    3.2.1 Participants
    3.2.2 Study Setup
    3.2.3 Procedure
  3.3 Results
    3.3.1 Mapping Touch Actions to the Hapticat Responses
    3.3.2 Recognizing Hapticat Affect Display
    3.3.3 Participant Affect Report
    3.3.4 Observational Data
  3.4 Discussion
  3.5 Summary
4 The Haptic Creature
  4.1 Design Considerations
  4.2 Hardware
    4.2.1 Ears
    4.2.2 Lungs
    4.2.3 Purr Box
    4.2.4 Touch and Movement Sensing
    4.2.5 Communication and Control
  4.3 Software
    4.3.1 Sensing
    4.3.2 Gesture Recognizer
    4.3.3 Emoter
    4.3.4 Physical Renderer
    4.3.5 Actuation
  4.4 Summary
5 Robot Affect Display
  5.1 Affect Display Design
    5.1.1 Ears
    5.1.2 Lungs
    5.1.3 Purr Box
  5.2 User Study
    5.2.1 Participants
    5.2.2 Study Setup
    5.2.3 Stimuli
    5.2.4 Response Format
    5.2.5 Procedure
  5.3 Results
    5.3.1 Recognition Scoring
    5.3.2 Perceived Arousal and Valence Ratings
    5.3.3 Participant Affect State
  5.4 Discussion
    5.4.1 Emotion Label Selections
    5.4.2 Effectiveness of Conveying Arousal
    5.4.3 Ambiguity in Communicating Valence
    5.4.4 Breathing’s Contribution to Valence
    5.4.5 Purring’s Contribution to Valence
    5.4.6 No Influence by Gender or Animal Experience
    5.4.7 Interaction Decreases Participant Arousal
  5.5 Summary
6 Human Affect Display
  6.1 Touch Dictionary
  6.2 User Study
    6.2.1 Participants
    6.2.2 Study Setup
    6.2.3 Procedure
  6.3 Results
    6.3.1 Touch Gesture Likelihood
    6.3.2 Touch Gesture Profile
    6.3.3 Haptic Creature Emotional Response
    6.3.4 Questionnaire Responses
  6.4 Discussion
    6.4.1 Reflections on Study Design
    6.4.2 Influence of Robot Context and Morphology
    6.4.3 Human Intent through Affective Touch
    6.4.4 Mirrored Emotional Response Expected from Haptic Creature
    6.4.5 Implication for Haptic Creature Design
  6.5 Summary
7 Influence of Affective Touch
  7.1 Updated Robot Affect Display
    7.1.1 Ears
    7.1.2 Lungs
    7.1.3 Purr Box
  7.2 User Study
    7.2.1 Participants
    7.2.2 Study Setup
    7.2.3 Human Affective Touch Gestures
    7.2.4 Stimuli
    7.2.5 Response Format
    7.2.6 Demand Characteristics Considerations
    7.2.7 Procedure
  7.3 Results
    7.3.1 Participant Affect State
    7.3.2 Haptic Creature Emotional Response
    7.3.3 Questionnaire Responses
  7.4 Discussion
    7.4.1 Reflections on Study Design
    7.4.2 Effects of Demand Characteristics
    7.4.3 Middling Responsiveness Impression
    7.4.4 Differences between Factor Levels
  7.5 Summary
8 Conclusion
  8.1 Research Contributions
    8.1.1 Platform for the Study of Affective Touch
    8.1.2 Affective Touch Originating from the Robot
    8.1.3 Affective Touch Originating from the Human
    8.1.4 Affective Touch Interactions Influence on the Human
  8.2 Reflections on Research Approach
    8.2.1 Human-Animal Interaction
    8.2.2 Duration of Emotional Interaction
    8.2.3 Three-Dimensional Models of Affect
    8.2.4 Embodiment of Emotion
  8.3 Considerations in Designing for Affective Touch
    8.3.1 Interaction Context and Robot Morphology
    8.3.2 Robot Affective Touch Gestures
    8.3.3 Robot Response
    8.3.4 Recognizing Human Affective Touch
    8.3.5 Touch Sensing Technologies
  8.4 Future Directions
    8.4.1 The Haptic Creature
    8.4.2 Haptic Creature Affect Display
    8.4.3 Human Intent through Affective Touch
    8.4.4 Emotion Elicitation through Touch
    8.4.5 Ethnographic and Longitudinal Studies
    8.4.6 Robot-Assisted Therapy
  8.5 Closing Thoughts
Bibliography
Appendices
A Haptic Creature Materials
  A.1 Hardware Schematics
  A.2 Graphical User Interface
  A.3 Microcontroller Communications Protocol
B Preliminary Investigation Materials
  B.1 Hapticat Internals
  B.2 Participant Consent Form
  B.3 Initial Questionnaire
  B.4 Post-Study Questionnaire
C Robot Affect Display Study Materials
  C.1 General Participant Recruitment
  C.2 Participant Registration
  C.3 Participant Consent Form
  C.4 Preliminary Instructions
  C.5 User Study Screens
  C.6 Post-Study Questionnaire
D Human Affect Display Study Materials
  D.1 Participant Registration
  D.2 Participant Consent Form
  D.3 Preliminary Instructions
  D.4 User Study Screens
  D.5 Post-Study Questionnaire
  D.6 Video Coding of Touch Gestures
E Influence of Affective Touch Study Materials
  E.1 Participant Registration
  E.2 Participant Consent Form
  E.3 Preliminary Instructions
  E.4 User Study Screens
  E.5 Post-Study Questionnaire
  E.6 Abstract Shapes Sequence Generation

List of Tables

3.1 Hapticat mechanisms ranges
3.2 Hapticat mechanisms settings for responses
3.3 Expected mappings from action to Hapticat response
3.4 Participants’ mean affective state for active and nonactive Hapticat response
5.1 Key Expressions: arousal and valence categorization, actuator rendering parameters
5.2 Emotion label list for assessing the Haptic Creature’s emotional state
5.3 Equivalency mappings between Russell and Ekman emotion labels
5.4 Frequency of emotion label chosen for each condition
5.5 Frequency breakdown for aggregate emotion labels in Table 5.4
5.6 Homogeneous subsets for mean rating of perceived arousal
5.7 Homogeneous subsets for mean rating of perceived valence
5.8 Participant arousal and valence self-reports at specified times
6.1 The touch dictionary
6.2 Emotion label list for predicting the Haptic Creature’s emotional response
6.3 Mean likelihood touch gestures would be used to communicate given emotions
6.4 Touch gestures likely to communicate given emotions
6.5 Human (initiator) and Haptic Creature (receiver) points of contact frequency for given touch gestures
6.6 Mean duration and mean pressure intensity of likely touch gestures when communicating given emotions
6.7 Frequency of emotional response predicted for Haptic Creature based on emotion communicated
6.8 Frequency breakdown for aggregate Predicted emotion labels in Table 6.7
7.1 Key Expressions: arousal and valence categorization, updated actuator rendering parameters
7.2 Miserable human touch gestures
7.3 Pleased human touch gestures
7.4 Emotion label list for assessing the Haptic Creature’s emotional response
7.5 Change in participant emotional state for both levels of emotion communicated factor
7.6 Frequency of participant prediction of Haptic Creature emotional response to human touch gestures for both levels of emotion communicated factor
7.7 Frequency breakdown for aggregate emotion labels in Table 7.6
7.8 Frequency of participant perception of Haptic Creature emotional response to human touch gestures for both levels of emotion communicated factor
7.9 Frequency breakdown for aggregate emotion labels in Table 7.8
7.10 Frequency of participant designation of similarities among the four gesture sequences performed for the Haptic Creature
7.11 Frequency of participant valence and arousal rating of miserable human touch gesture sequence
7.12 Frequency of participant valence and arousal rating of pleased human touch gesture sequence
E.1 XScreenSaver Deco configuration

List of Figures

1.1 Seven phases of thesis research
1.2 Affective touch interaction loop between human and Haptic Creature
2.1 Research domains related to the Haptic Creature
2.2 Paro robot
2.3 Pleo robot
2.4 Probo robot
3.1 The Hapticat
3.2 Setup for preliminary investigation
3.3 Participants’ mappings of action to Hapticat’s response
3.4 Participants’ perception of Hapticat’s responses to actions
3.5 Participants’ affective response to active haptic and nonactive renderings
4.1 The Haptic Creature
4.2 The Haptic Creature without exterior fur
4.3 The Haptic Creature mechatronics
4.4 Touch sensor layout, flattened
4.5 FSR linearization circuit
4.6 Overview of the Haptic Creature architecture
4.7 Host software architecture
4.8 The Haptic Creature’s affect space
5.1 Affective touch interaction loop between human and Haptic Creature. Adapted from Figure 1.2 to highlight affect display from robot
5.2 The Haptic Creature’s affect space adapted from Figure 4.8 to highlight key expressions
5.3 Change in lung volume over four-second time period for key expressions in Table 5.1
5.4 Change in purr amplitude over four-second time period for key expressions in Table 5.1
5.5 Setup for robot affect display study
5.6 Mean perceived arousal and valence ratings by emotion label chosen
5.7 Mean ratings for perceived arousal and perceived valence
6.1 Affective touch interaction loop between human and Haptic Creature. Adapted from Figure 1.2 to highlight affect display from human as well as emotional influence on robot
6.2 Setup for human affect display study
6.3 Human intent through affective touch
7.1 Affective touch interaction loop between human and Haptic Creature. Adapted from Figure 1.2 to highlight emotional influence on human
7.2 Change in lung volume for key expressions miserable, neutral, and pleased in Table 7.1
7.3 Change in purr amplitude over four-second time period for key expressions in Table 7.1
7.4 Setup for influence of affective touch study
7.5 Timing protocol for a single touch gesture interaction
7.6 Example of onscreen human touch gesture instructions
7.7 Participant change in valence in relation to participant perceived valence response of Haptic Creature for both factor levels
A.1 FSR PCB schematic
A.2 FSR PCB layout
A.3 Motor control board schematic
A.4 Motor control board layout
A.5 Master panel
A.6 Master panel with state
A.7 Creature editor
A.8 Scheduler editor
A.9 Recognizer editor
A.10 Emoter editor
A.11 Renderer editor
A.12 Sensors editor
A.13 Ear actuator editor
A.14 Lung actuator editor
A.15 PurrBox actuator editor
B.1 The Hapticat internals
D.1 Human head demarcation
D.2 Human body demarcation
D.3 Haptic Creature demarcation
E.1 XScreenSaver Deco example image

Acknowledgments

I would first like to thank my supervisor, Dr. Karon MacLean. She gave me the freedom to explore my often wacky ideas, yet also insisted on a solid research approach. Dr. MacLean’s constant questioning required me to think more deeply about many aspects of the work, particularly the experimental design and subsequent analysis, thus resulting in a far stronger thesis.

Second, I would like to individually recognize the members of my supervisory committee. Dr. Elizabeth Croft was the first to commit to supervise and remained accessible throughout with her broad insights into the study of human-robot interaction. Dr. Dinesh Pai was uncanny in his ability to quickly spot deficiencies then provide reasonable approaches to rectify them. Dr. Jessica Tracy provided invaluable guidance into the psychology of emotion and research thereof. Furthermore, I would like to acknowledge the members of my examination committee: Dr. Yusuf Altintas, Dr. James Enns, Dr. James Little, and Dr. Ehud Sharlin. Their probing questions and insightful comments greatly enhanced the caliber of this thesis.

I would also like to thank the many wonderful colleagues in the Sensory Perception and Interaction Research Group (SPIN) whom I have had the pleasure of befriending over the years. While the list is long, I am particularly indebted to Mario Enriquez, Ricardo Pedrosa, and Colin Swindells, who were there for me from the beginning with technical, academic, and moral support.

Finally, this long journey would not have been possible without the unwavering support of my family and friends. I would like to single out my parents, for instilling in me the confidence to pursue a doctoral degree and their constant encouragement throughout. I would also like to expressly thank my wife, Sachiyo Takahashi. She was by my side at every step, forward or backward, and was an essential sounding board during the formulation of many of the more creative ideas within this thesis.

Dedication

To my parents, Jim and Mary Ann, and to my wife, Sachiyo.

It’s the sense of touch. Any real city, you walk, you’re bumped, brush past people. In LA, no one touches you . . . . We’re always behind metal and glass. Think we miss that touch so much, we crash into each other just to feel something.
— from Crash (2004)

Chapter 1

Introduction

1.1 Stella and Roi

Stella is stretched out supinely on her sofa taking a siesta. One arm is draped around Roi, a big ball of fur resting expectantly on her chest, who gently moves up and down with the slow undulation of Stella’s breathing. Gradually Stella begins to stir, and her breathing grows slightly deeper as a result. Sensing the change, Roi becomes excited at the prospect of her awakening.
He stiffens his ears, nudges Stella firmly with his head, and begins a pronounced, brisk “prrrrr . . . ” that vibrates in her chest. Forced to rise a little sooner than she would have liked, Stella nonetheless gives Roi a firm hug as she sits up. She places him in her lap and, enjoying the warmth of his body, instinctually strokes his fur. Roi’s breathing and purring both slow somewhat. The two sit there together, pleased, while Stella waits for her lingering drowsiness to fade.

Eventually garnering the energy to pick herself and Roi up, Stella moves them across the room to sit at her computer. She returns Roi to her lap and rests her hand against his side. His purring subsides. Stella checks her Inbox, but the message is still not there. She tries to occupy her time (and mind) through a variety of meaningless computer activities. At the same time, she randomly switches between idly fingering Roi’s fur and gently squeezing his now half-stiffened ears. Stella has waited weeks; she was told she would receive their response, one way or the other, by today.

The thought of not being accepted for the position has weighed on Stella for some time; however, her worry seems more acute this afternoon as she has yet to hear back. She begins to firmly pat Roi’s back, then vigorously rubs his fur. Roi feels her becoming depressed, so he tries to counter by becoming relaxed. He arches his back against her hand, his ears go slack, and his breathing becomes slow and symmetric. The intensity of Stella’s touch diminishes, while her rubbing transitions to massaging.

At that moment, the computer notifies her of new mail. Stella quickly glances at the screen to see that it is the reply she has been waiting for. Pulling Roi close to herself, Stella is briefly overcome with a sense of distress: will she get the position or not? She manages the courage to open the missive, which begins with, “After much deliberation, we are very pleased to offer you the position of . . . ” Stella is instantly excited. She squeezes Roi and lifts him up then nuzzles him. Roi’s ears stiffen, his breathing quickens, and he emits an energetic purr. After a brief yet firm hug, Stella places him back in her lap and resumes stroking his fur. The two sit together, again, pleased.

The preceding scenario demonstrates the interactions that we investigated in our thesis. Stella and her furry companion, Roi, communicate with each other through touch. Through these touch interactions, each is able to sense the emotional state of the other. In some cases, the exchange alters the emotion of the perceiver. We will periodically return to this scenario throughout this dissertation.

1.2 Motivation

Emotional expression is the external display of internal affective state [176, p. 326]. The ability to communicate emotion plays an important role in social contexts by adding significance to the interaction [22] and allowing for prediction of subsequent behavior [13]. Affect display in humans manifests primarily through facial, vocal, or gestural behaviors [176, p. 26]. While the study of affect display has focused mainly on vision and audition, the modality of touch has received significantly less attention (see [76] for the lower magnitude of general research interest in touch vis-à-vis these two other modalities).

This dearth of research at first appears counterintuitive given the unique role touch has among the other senses.
For example, the skin is the largest organ in the human body; the first sense organ to form; and plays a major role in early development [112]. In addition, unlike vision and audition, touch is proximal: it requires close or direct, physical contact to sense [71].

When viewed through the lens of social interaction, however, inherent difficulties of this domain become more apparent. Studies in interpersonal touch have shown various confounding factors such as gender, familiarity, social status, and culture (e.g., [46, 103, 118, 186]). These sorts of studies also have been found to cause significant levels of participant discomfort (e.g., [180]). Nonetheless, studies have found that many characteristics of social touch have emotional meaning [82]. Furthermore, recent studies have demonstrated that humans are capable of communicating discrete emotions through touch [74, 75].

Emotional expression research in socially interactive robotics has closely paralleled counterparts in psychology and sociology and, consequently, has had a similar focus on visual and auditory behaviors. The study of affect display in social human-robot interaction has been primarily on facial expressions (e.g., [23, 95, 141]) and, to a lesser degree, on prosody of speech (e.g., [17, 143, 193]).

Our thesis is hereby motivated by the importance of emotional expression in social human-robot interaction; however, our investigation is centered on affect display through the lesser-explored modality of touch. As we introduce this unique sense to socially interactive robotics, however, we risk the aforementioned difficulties when studying social touch. Therefore, as alluded to in our scenario with Stella and Roi, we have chosen to draw from models of interaction not between humans but between human and animal, whereby the robot assumes the role of animal. Furthermore, this has the added advantage of leveraging the rich patterns of non-verbal touch communication that already exist between human and animal [9, 32] and the long history of bonds between humans and companion animals [145, 146].

1.3 Research Goals

The overall goal of this thesis was to investigate the role of affective touch in the social interaction between human and robot. In particular, our research examined the display, recognition, and emotional influence of this form of touch. To that end, we set out to answer the following questions:

1. In what manner might a robot express its emotional state through touch to a human?
2. Can a human recognize a robot’s emotional state through touch?
3. In what manner might a human express his or her emotional state through touch to a robot?
4. What are the human’s expectations for the robot’s responses to affective touch?
5. Does the affective touch interaction between human and robot influence the human’s emotional state?

As alluded to in the previous section, scant research exists that examines affective touch within the field of socially interactive robotics. The intention of our thesis, therefore, was to contribute to this body of work, but with the potential for impact on the broader study of affective touch. Furthermore, our investigation was foundational. We sought an increased understanding of base emotion communication through touch in the context of social human-robot interaction. It was also our hope that this research helps to lay the groundwork for specific applications — e.g., attachment or therapy.

1.4 Research Approach

The various phases of the thesis research are depicted in Figure 1.1.
We highlight here the overall approach that guided the development of our thesis.

1.4.1 Preliminary Investigation

Our research commenced with an exploratory investigation of the thesis’s general premise: the role of affective touch in social human-robot interaction.

We began by developing the Hapticat (Figure 1.1, phase 1), a prototype robot pet designed to simulate emotional expression through touch by changes in ear stiffness, manner of breathing, and a vibrotactile purr. This hand-actuated robot puppet, in turn, was employed in a user study that examined several facets of affective touch (Figure 1.1, phase 2). Participants provided their expectations of the Hapticat’s emotional response to various gestures they might use when touching it. Next, they physically performed a sequence of these touch gestures to the robot and were asked to recognize its corresponding emotional display. In addition, participants reported their emotional state as a result of interacting with the Hapticat.

Results from the study demonstrated that participants’ expectations of the robot’s response to touch gestures correlated with our mappings. In addition, they were able to recognize a sizable subset of simulated emotional states rendered by the Hapticat. Finally, a general (positive) shift in participants’ emotional state was observed.

1.4.2 Haptic Creature

After initial confirmation of the thesis premise, the next phase of the research was the development of the Haptic Creature robot (Figure 1.1, phase 3). The goal was to construct a robust, automated platform with which to explore affective touch in human-robot interaction. Like the Hapticat prototype, the Haptic Creature employed the same three degrees of freedom to express its emotional state: adjustment of ear stiffness; modulation of breathing; and presentation of a vibrotactile purr. Additionally, to sense human touch, the robot was equipped with an array of force sensors and an accelerometer.

For our thesis, with respect to the Haptic Creature platform, the focus was much more on the ability of the Haptic Creature to express its emotional state to the human. While the touch hardware and software were developed such that the robot was able to read and record the sensor inputs, its ability to recognize specific human touch gestures and related emotional content was not a major focus of our work.

Figure 1.1: Seven phases of thesis research. Unshaded boxes represent development phases. Dashed circles represent iterative refinement during development. Shaded boxes represent user study phases.

The overall development of the Haptic Creature was iterative. We began with the Hapticat prototype and the results of the preliminary investigation. We then focused on the robot’s hardware and software infrastructure. Next, we configured the Haptic Creature’s various emotional expressions. Throughout this process, small pilot studies were frequently conducted to test various aspects of the system and results were fed back into the design each time. The Haptic Creature was used in the suite of user studies described in the following section.
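To make the platform description above concrete, the following is a minimal, illustrative sketch of how the robot’s three expressive degrees of freedom and its touch and movement sensing might be represented in software. The class and field names are hypothetical and are not taken from the thesis; the Haptic Creature’s actual architecture is detailed in Chapter 4.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ExpressiveState:
    """One rendering frame for the robot's three expressive degrees of freedom."""
    ear_stiffness: float      # 0.0 (slack) to 1.0 (fully stiff)
    breath_rate_hz: float     # breathing cycles per second
    breath_asymmetry: float   # balance of inhale vs. exhale, -1.0 to 1.0
    purr_amplitude: float     # 0.0 (off) to 1.0 (strong vibrotactile purr)

@dataclass
class SensorFrame:
    """One timestamped sample of the robot's touch and movement sensing."""
    fsr_values: List[float]                    # one reading per force-sensing resistor
    acceleration: Tuple[float, float, float]   # accelerometer (x, y, z)
    timestamp_ms: int

# Example: a calm expression and an empty sensor snapshot (sensor count is a placeholder).
calm = ExpressiveState(ear_stiffness=0.2, breath_rate_hz=0.3,
                       breath_asymmetry=0.0, purr_amplitude=0.4)
frame = SensorFrame(fsr_values=[0.0] * 8, acceleration=(0.0, 0.0, 9.8), timestamp_ms=0)
```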
Figure 1.2: Affective touch interaction loop between human and Haptic Creature. Solid lines between cells represent a display of affective touch. Dashed lines denote an internal update of emotional state as a result of the interaction.

1.4.3 Interaction Decomposition

To systematically study the interplay between human and robot in the course of affective touch, we decomposed the overall interaction into its constituent parts (Figure 1.2). As demonstrated in our opening scenario, the interaction involves a synergistic component whereby Stella’s emotional state modulates in the course of her exchanges with Roi, and her touching patterns adjust as well. Therefore, we wished to examine each part of the system independently, then later synthesize these in order to observe changes resulting from the full affective touch interaction loop. The outcome was three user studies, each examining a different aspect of affective touch interaction.

Affective Touch Originating from the Robot

The first study (Figure 1.1, phase 4) examined the manner and success of the Haptic Creature in communicating its emotional state through touch to the human (Figure 1.2, cells 3→4). Animal models served as the initial reference for the robot’s emotion display, then its expressions were refined over successive iterations of informal user tests. Ultimately, the Haptic Creature’s breathing rate and ear stiffness were used to convey its state of arousal, while the asymmetry of breathing and purring communicated its valence.

This configuration was then formally tested in a user study. Participants were asked to recognize a variety of the Haptic Creature’s affective touch expressions, which were selected from across the extents of its emotional space. Results identified that the robot was effective in communicating its state of arousal but less so for valence.

Affective Touch Originating from the Human

The second study (Figure 1.1, phase 5) investigated the manner in which humans communicate their emotional state through touch to the Haptic Creature (Figure 1.2, cells 1→2) as well as their expectations of the robot’s reaction to their affective touch (Figure 1.2, cells 2→3). We compiled a touch dictionary of plausible gestures. From this list, participants selected and performed gestures that they would likely use when conveying a variety of emotions to the Haptic Creature. Participants also predicted the emotional response of the robot as a result of the gestures they had just performed.

Our principal findings regard patterns of gesture use for emotional expression; physical properties of the likely gestures; expectations for the Haptic Creature’s response to mirror the emotion communicated; and analysis of the human’s higher intent in communication. From the latter finding, we developed five tentative categories of “intent” that overlap emotion states: protective, comforting, restful, affectionate, and playful.

Influence of Affective Touch

From the original design presented in Section 5.1, we updated the Haptic Creature’s affect display (Figure 1.1, phase 6) in order to increase recognition of its emotional expressions. Next, building upon the previous two studies, our final study (Figure 1.1, phase 7) explored the influence of affective touch interaction on the human’s emotional state (Figure 1.2, cells 4→1) as a result of the full interaction loop.
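As a rough illustration of the emotion-to-actuator mapping described for the first study above, the sketch below maps a point in a two-dimensional valence-arousal space onto rendering parameters. Only the direction of each mapping follows the text (breathing rate and ear stiffness track arousal; breathing asymmetry and purring track valence); the function name, linear forms, and constants are placeholders rather than the values actually used by the Haptic Creature, which are given in Chapter 5, Table 5.1.

```python
def render_parameters(valence: float, arousal: float) -> dict:
    """Map a point in a 2-D affect space (both axes in [-1, 1]) to
    illustrative actuator settings for the ears, lungs, and purr box."""
    return {
        "ear_stiffness":    0.5 + 0.5 * arousal,          # stiffer ears at higher arousal
        "breath_rate_hz":   0.2 + 0.3 * (arousal + 1.0),  # faster breathing at higher arousal
        "breath_asymmetry": 0.5 * max(-valence, 0.0),     # more uneven breathing at negative valence
        "purr_amplitude":   max(valence, 0.0),            # purring reserved for positive valence
    }

# Example: a "pleased" expression (positive valence, moderate arousal).
print(render_parameters(valence=0.8, arousal=0.2))
```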
As noted in Section 1.4.2, the Haptic Creature’s sensory system was not at a stage where it could accurately recognize human touch gestures in real time, so we developed a timing protocol for this study whereby the robot simulated reactions to touch. We observed a statistically significant shift towards positive valence for the human’s emotional state when the two-way communication, with both the human and the robot displaying as well as receiving affective touch, was pleased, but not when it was miserable. Also, participants reported an average sense that the Haptic Creature was responding to their touch. We suggest that the difference in results between pleased and miserable emotion communication may have been a result of the difference in touch gestures employed or the emotional responses rendered by the robot.

1.5 Summary of Contributions

We summarize here the primary research contributions of our thesis, which we enumerate under four main categories. Each of these categories is closely aligned with the research approach presented in the previous section. These contributions will be reviewed in further detail at the conclusion of the dissertation in Chapter 8.

1. Platform for the Study of Social Human-Robot Affective Touch:
   (a) The Haptic Creature robot has been designed and implemented for the study of human-robot social interaction through affective touch. Our zoomorphic robot is novel in its sole focus on the touch modality for both affect sensing and display. The Haptic Creature was employed in three affective touch user studies.
2. Robot Affect Display through Touch:
   (a) The design of the Haptic Creature’s affect display system, which we grounded in animal models then iteratively refined through human-centered tests.
   (b) Quantifiable and generalizable observations relating to the effectiveness of the robot’s affect display system, as demonstrated through formal user testing.
   (c) Evidence for human expectations of a mirrored emotional response from the robot.
3. Human Affect Display through Touch:
   (a) A touch dictionary compiled from social psychology and human-animal interaction literature.
   (b) Properties of human affect display through touch: gestures likely to be used to communicate particular emotions; points of contact between the human and robot; duration and pressure intensity of touch.
   (c) Categorization of the human’s higher-level intents, which overlap emotional states: protective, comforting, restful, affectionate, and playful.
4. Influence of Affective Touch:
   (a) Empirical demonstration that the human’s emotional state is directly influenced as a result of interacting with the robot through the full affective touch interaction loop.

1.6 Dissertation Roadmap

The following list provides a concise overview of the remaining chapters in this dissertation:

• Chapter 2 covers background and related work.
• Chapter 3 presents the Hapticat prototype robot and a preliminary user study that examines the feasibility of the research approach taken in this thesis.
• Chapter 4 describes the Haptic Creature, the robot developed for use in this research.
• Chapter 5 details a user study that investigates the Haptic Creature’s ability to express its emotional state through touch.
• Chapter 6 describes a user study that investigates the manner in which humans communicate emotion through touch.
• Chapter 7 describes a user study that examines the influence on the emotional state of the human when communicating emotion through touch with the Haptic Creature.
• Chapter 8 summarizes the thesis presented, reviews its contributions, and offers future directions for the research.
• The Appendices document various supplemental materials employed in this research.

Chapter 2

Related Work

The overall goal of this thesis is to investigate the role of affective touch in the social interaction between human and robot, and, for our research, we have chosen to draw from models of interaction between human and animal. Our thesis, therefore, builds upon research from a variety of disparate domains (Figure 2.1).

Figure 2.1: Research domains related to the Haptic Creature.

In this chapter, we present background and related work that laid the foundation for our own research. We begin with a discussion of affect, where we focus on means of display and relevance in social interaction. Next, in Section 2.2, we examine the modality of touch, namely, social and affective touch. We then move on in Section 2.3 to consider human-animal interaction. Finally, we conclude the chapter with a review of those socially interactive robots that, to varying degrees, share a focus similar to our own.

2.1 Affect

An emotion can be viewed as a mix of experiential, behavioral, and physiological components whereby the human reacts in a structured manner to an event deemed significant [176, p. 325]. This view can be further clarified in relation to mood: an emotion has an identifiable cause and is commonly short-lived, while a mood is considered over a much longer period of time and often with little understanding as to its inducement [167, p. 14].

We begin this section with a comparison of relevant models of emotion. This is then followed by a discussion of the face, which is the primary means of emotional expression for humans. Finally, we examine the social function of affect.

2.1.1 Discrete versus Dimensional Models

Among the myriad of theories of emotion, we discuss here the two that are most relevant to our thesis; namely, the discrete emotion theory and the dimensional model of affect. Our goal is merely to contrast the two theories, which are similar in their interest in the structure of emotions. We will revisit this topic in relation to our thesis approach when we introduce the Haptic Creature’s emotional system in Section 4.3.3. In addition, often debated alongside these theories are the concepts of basic emotions (e.g., [7, 35, 36, 120]) and the universality thereof (e.g., [38, 43, 81, 131, 132, 142]); however, these topics are outside the scope of our thesis and will not be directly addressed as a result.

The discrete emotion theory considers emotions categorically. More specifically, this theory views emotions as irreducible and distinct from one another. Darwin (1872) [32] was one of the first to consider the discrete nature of emotions as he observed similarities in affect display between animals and humans. Tomkins (1962, 1963, 1984) [169–171] viewed affect as a primary motivating factor, both unconditioned and biological, from which he identified nine “primary affects”: anger, contempt, disgust, distress, fear, interest, joy, shame, and surprise (alt. startle).
Influenced by Darwin and mentored by Tomkins, both Ekman and Izard independently furthered the theory through empirical studies of recognition of facial expressions. Izard (1971, 1977) [79, 80] studied Western and Eastern populations in their development of 10 discrete emotions, which added guilt to the nine 13  2.1. Affect primary affects of Tomkins. Studies by Ekman, in collaboration with Sorenson and Friesen (1969, 1971) [38, 42], added preliterate cultures to the participant population, which resulted in six distinct emotions: afraid, angry, disgusted, happy, sad, and surprised. These emotions identified by Ekman are considered to be the preeminent set of discrete emotions; nonetheless, subsequent research has sought to extend beyond the original six. As two examples, Ekman and Friesen (1986) [41] documented the distinct expression of contempt — already present in the sets of both Tomkins and Izard — and Tracy and Robins (2007) [173] identified the expression of the self-conscious emotion pride. The dimensional model of affect, on the other hand, considers emotions to be composed of multiple, often continuous, dimensions. Our focus here is on the subset of theories that consider emotions to be constructed specifically of two bipolar dimensions, as this is the prevailing view. One of the earliest descriptions of two bipolar dimensions of affect came from Schlosberg (1952) [144] who, through a series of facial expression sorting experiments, developed an oval space whose long axis ranged from pleasantness to unpleasantness and short axis ranged from attention to rejection. This was subsequently followed by Russell’s circumplex model of affect (1980) [130], which was developed through experiments on emotion label categorization and affect state self-reports. Russell’s resultant circumplex described emotions where one axis (valence) ranged from misery to pleasure and the other axis (arousal) ranged from sleepiness to arousal. Watson and Tellegen (1985) [181] conducted a meta-analysis of numerous self-reported affect studies to develop their Positive Affect and Negative Affect structure. While Watson and Tellegen focused on valence, Thayer (1989) [167] investigated the physiological aspects of arousal and, consequently, developed a third model in which one dimension ranged from calmness to tension and the other from tiredness to energy. While these dimensional models may seem somewhat disparate, Barrett and Russell (1998) [8] originally proposed and Yik et al. (1999) [192] later formalized an integration of these models that suggests they are, in fact, closely equivalent. While the discrete and dimensional models of emotion are presented as somewhat mutually exclusive, it is worth mentioning that there are some alternate approaches that attempt to consider the two as complementary — e.g., [33, 79, 133, 14  2.1. Affect 134]. As one example, Russell and Barrett (1999, 2003) present core affect, which they define as “that neurophysiological state consciously accessible as the simplest raw (nonreflective) feelings evident in moods and emotions” [133, 134]. While grounded in dimensional models, this theory allows for a coincidence with discrete emotions [134, Figure 1].  2.1.2  Face as Primary Means of Display  Emotional expression is the external display of internal affective state [176, p. 326]. This affect display is manifest primarily through facial, vocal, or gestural behaviors [176, p. 26]. Here we briefly discuss facial expressions. 
While utilizing the visual modality, we highlight this area as it is the dominant means of affect display for humans. Similarly, in conjunction with the preceding section on models of emotion, the face has long been the focus of emotion research and, therefore, the vast majority of studies in emotion stem from this work. This is true even for emotion research focused on gestural or haptic mannerisms. Furthermore, much of the methodology and, in particular, the measures used throughout our thesis follow from research on facial expressions. We will later examine affect display through touch in Section 2.2.2. Incorporated into his broader thesis on evolution and natural selection, Darwin’s pioneering work (1872) [32] on emotion expression was one of the first to recognize the face, both in humans and animals, as a means of affect display. Returning to Darwin nearly 100 years later, Tomkins (1962) [169, ch. 7] was the first to focus the research community on the primacy of the face in human emotion expression. Each of his nine “primary affects” described in the preceding section had a corresponding facial expression. Tomkins drew attention to the dominance of the face in relation to other parts of the body when comparing initial development, relative size, and the richness of sensory and actuation features. Furthermore, he also revisited the early work of Duchenne (1862) [34] (also referenced by Darwin) to examine the musculature of the face vis-à-vis affect display. Influenced by Tomkins and guided by their own studies on cross-cultural similarities of facial expressions of emotion, Ekman and Friesen (1976) [39, 40] developed the Facial Action Coding System (FACS). This system, which is in prevalent 15  2.1. Affect use today, is comprised of individual Action Units (AU) based on facial musculature, thereby allowing for the measurement of visually distinguishable facial movements. While agnostic with respect to affective content, the system nonetheless allows for detailed AU positioning that can prescribe, for example, how an actor may construct a particular emotional expression or how a observer might recognize a discrete affect display — the actor or observer, however, must be extensively trained in FACS.  2.1.3  Significance in Social Contexts  Our research centers around the social interaction between human and robot. For emotions and corresponding affect displays to be relevant to our thesis, they must therefore have a social function. Here we wish to present some of the theories on the social significance of affect. One social function of emotion is as a predictor of behavior. Bowlby (1969) [13] notes that to attribute an emotion in another — even an animal — is to make a prediction of how the other will subsequently act. Bowlby points to Hebb (1946) [70], who considers that the personal differentiation of various emotional states arises not from an inborn sense but, rather, from observing the overt behaviors of others. Frijda (1986) [55, Section 8.6] discusses social functions within the context of emotion regulation. While affect is considered as arising from within, it is the external environment, particularly the social environment, that can be a significant regulating factor. He presents social aspects such as deindividuation, or crowd behavior; emotion mitigation through the support of others; and embedding, which expands the support group to the broader culture as a whole. In their functionalist approach to emotions, Campos et al. 
(1994) [22] consider three ways in which social signals can influence affect. First, they can regulate the observer’s behavior through marking the significance of the situation. Social signals can similarly have an emotional contagion effect, whereby the observer instinctively mimics and synchronizes the affect display of the expresser resulting in a convergence of emotional state [69, p. 5]. Finally, social signals can induce  16  2.2. Touch self-conscious emotions, such as pride or envy, which depend on the approval and disapproval of others.  2.2  Touch  The modality of touch is unique among the five senses. The skin is the largest organ of the human body, and touch is the first and most fundamental modality to develop. This sense begins its influence in the womb; is significant during childbirth; and continues to play an important developmental role throughout infancy and early childhood [6, 45, 51, 111]. Furthermore, unlike the other modalities, touch is proximal: it requires close or direct, physical contact to sense [71]. Given the distinctive nature of touch, it seems surprising that this modality appears to be a neglected field of study, particularly in comparison with vision and audition. Frank (1957) [51] was one of the first to acknowledge this relegation, whereby he sought to focus the research community on the psychophysics of touch; the modality’s role in personality development; as well as the varying cultural patterns associated with touch. Shortly thereafter, Geldard (1960) [57] raised a similar concern; however, his primary focus was on increasing an understanding of the low-level mechanics of touch communication — e.g., location, frequency, duration, intensity. More recently, Hertenstein et al. (2006) [76] reaffirmed Frank’s original concern when they documented 13 times the number of vision-centric publications and three times the number of audition-centric publications. Hertenstein et al. go on to suggest philosophical as well as methodological influences for the diminished research interest in touch. In this section, we wish to emphasize the social aspects of touch. We then turn our focus to the affective qualities of this modality. This is followed by an overview of research in technologically-mediated social touch. Finally, we conclude with a discussion of extant issues of experimental research on touch.  2.2.1  Social Touch  Some of the earliest research on social touch was by Spitz (1945) [158], who conducted a systematical investigation into hospitalism, the condition whereby infants 17  2.2. Touch reared in institutions frequently wasted away [29]. This disorder was thought to arise from a lack of quality in care and living conditions; however, Spitz documented a well-equipped institution that had a notably higher rate in comparison to a poorer hospital. He determined that the significant difference between the two was the amount of human contact. Each child at the poorer nursery had full-time physical care by either the mother or an able surrogate, while infants at the richer institution lacked human touch a majority of the time. Another seminal finding on the importance of social touch came from Harlow, in collaboration with Zimmermann (1958) [67, 68]. The prevailing view at the time was that the main role of the caregiver was the satiation of the infant’s primary drives — e.g., hunger, thirst, pain. 
The two conducted a series of studies in which infant monkeys were separated at birth from their mothers, then raised by two inanimate surrogates that differed solely on the degree of tactile comfort they provided. In one study, all the monkeys had access to both surrogates, but one group was fed by a soft cloth mother and another was fed by a rigid wire mother. Results clearly demonstrated that, regardless of which mother provided food, the monkeys spent a much greater amount of time in physical contact with the cloth mother and sought this same surrogate much more frequently when in the presence of a fear stimulus. From this, Harlow and Zimmermann were able to develop their theory of contact comfort, which considers a primary role of nursing to be in maintaining direct, physical contact between the infant and the mother, thereby increasing affectional bonds.

Jourard conducted several studies that investigated interpersonal touch in relation to body-accessibility, "the readiness of a person to permit others to contact his body". In one study (1966) [83] conducted with unmarried United States college students, Jourard found that parents and same-sex friends were allowed less frequent access to touching; sons allowed parental (particularly paternal) touch much less than daughters; and hands, arms, and head received much more physical touch than lower extremities and more sexual body parts.

In 1974, Heslin presented an initial taxonomy of touching. Discussed by Heslin and Alper (1983) [77], his taxonomy specifies five "situations/relations" of interpersonal touch: functional/professional, social/polite, friendship/warmth, love/intimacy, and sexual arousal. The taxonomy does not include negative touch types, as Heslin considered them rare occurrences. In addition, its ordering implies a continuum of increasing levels of intimacy.

Jones and Yarbrough (1985) [82] conducted one of the larger social touch studies by recording everyday touches observed by participants at a Western university over an extended time frame. The data collected included features such as location and initiator of touch, social occasion, presence of others, as well as the purpose and type of touch. They also recorded demographic information such as gender, familiarity, age, and social status of the other individual. From their results, Jones and Yarbrough constructed 18 types of touch — 12 of which they considered clear and unambiguous — which, in turn, they formed into seven main touch groups: positive affect, playful, control, ritualistic, hybrid, task-related, and accidental.

One of the more studied aspects of social touch is that of its influence, especially on the recipient and, particularly, in securing compliance. An early series of studies by Kleinke (1977) [93] demonstrated that individuals who received a light touch, when compared with those who were not touched, were more inclined to honor a request to return a dime recently found in a public phone booth. Touch wielded similar influence on requests for signing a petition (Willis and Hamm, 1980) [186], sampling food products (Smith et al., 1982) [155], as well as participating in a course activity (Guéguen, 2004) [64]. Crusco (1984) [30] was even able to document that a waitress's slight touch upon returning a diner's change was capable of increasing the amount of tip she received.

2.2.2  Affective Touch

In Section 2.1.3, we established the social significance of emotion communication. Further, we have just discussed the importance of touch in social contexts.
While the face is considered the primary means of affect display (Section 2.1.2), it is inherently a visual mechanism. Here we wish, instead, to discuss the relevance of affective touch, which we consider to be touch that communicates or evokes emotion. Frank (1957) [51, p. 216] briefly suggests that emotional reactions to interpersonal communication may assert more influence than the actual content. Along with other modes of affect display, the nature of touch — e.g., light versus heavy 19  2.2. Touch touch — can heighten or color the message’s tone, which Frank proposes may be the aspect to which the recipient is ultimately responding. In considering the use of touch in nursing, Barnett (1972) [6] presents emotion communication as one theoretical concept of touch. She enumerates the affectional, sexual, and proximal nature of touch. In addition, Barnett references touch as inherent to the human experience — e.g., through cooperation, societal awareness, and personal disclosure. As we noted in the preceding section, Jones and Yarbrough (1985) [82] developed distinct meanings from their study of social touching patterns. Many of these touch meanings had positively valenced affective qualities such as affection, support, inclusivity, appreciation, and playfulness. More recently, the work by Hertenstein et al. examined the ability of touch to communicate discrete emotions. In their first study (2006) [76], when touch was localized to the arm, participant dyads were able to accurately convey and recognize several distinct emotions common to facial expressions — anger, fear, and disgust — as well as several prosocial ones — love, gratitude, and sympathy. In their second controlled study (2009) [74], touch was allowed anywhere considered appropriate to communicate a specific emotion, and results included the accurate communication of two additional emotions: happiness and sadness.  2.2.3  Mediated Social Touch  Here we briefly discuss research on the convergence of touch and technology for use in social interaction. While our thesis investigates social touch interactions between human and robot, mediated social touch explores human-to-human interaction whereby technology is leveraged as means of connecting the humans. The vast majority of work in mediated social touch has taken the form of conceptual prototypes, some more developed than others, expecting to connect individuals remote from one another. A more extensive review of this topic can be found in Haans and IJsselsteijn (2006) [65]. Several of these prototypes sought to explore new or augment existing means of interpersonal communication. HandJive by Fogg et al. (1998) [49] was a rapidly developed prototype intended for haptic entertainment in which the designers the20  2.2. Touch orized about new forms of communication similar to jazz improvisation or social dance. ComTouch by Chang et al. (2002) [25] augmented traditional voice communication, such as a mobile phone, by translating hand pressure of the sender into vibrational intensity to be felt by the receiver. HIM (Haptic Instant Messaging) by Rovers and van Essen (2004) [129] was a system that allowed for vibrotactile information to be passed along with the standard textual data of instant messages. A significant number of prototypes for mediated social touch have focused on the simulation of physical presence, often coupled with intimacy. 
As part of their Feather, Scent, and Shaker series, Strong and Gaver (1996) [163] conceived of a device that, when intentionally shaken, would broadcast the vibrations to its paired recipient. InTouch by Brave and Dahley (1997) [16] used mechanically coupled rollers to give a sense that two people are interacting with a shared object. Sensing Beds by Goodman and Misilim (2003) [60] utilized heated cushions to simulate a partner’s body warmth. Hug Over Distance by Mueller et al. (2005) [115] was a pneumatic vest that simulated the receipt of a hug from a remote partner. Following from an exploration of intimacy, and more directly related to our thesis, is the work on mediated affective touch. LumiTouch by Chang et al. (2001) [26] prototyped a pair of picture frames that translated squeezing on one to emotionally colored lights on the other frame. Hansson and Skog (2001) [66] theorized LoveBomb as a simple device to convey love (heartbeat-like vibrations) and sorrow (irregular vibrations) among strangers in a public setting. Bailenson et al. (2007) [5] conducted a series of formal studies on the communication and recognition discrete emotions through a force-feedback joystick. Similarly, utilizing a pair of rotating haptic knobs, Smith and MacLean (2007) [156] explored and formally user tested the dimensions of intimacy, personal space, and personal relationship in their effects on the communication of emotion.  2.2.4  Methodological Issues in the Study of Touch  One purported reason for the relative lack of research on touch, particularly social touch, is the array of inherent confounding factors. Here we highlight some of the more prominent issues. While in many cases the researchers specifically mea-  21  2.2. Touch sured for these factors, often in combination, their existence points to the added considerations necessary in the study of this modality. By far the most considered confound in social touch research has been the differences between genders. Nguyen et al. (1975) [118], for example, found that men and women had similar understandings of touch by a friend of the opposite sex but differed on its implied meaning. One result from their study showed that both genders generally agreed on which touches conveyed sexual desire; however, they were diametrically opposed as to the consideration of this as playful, warm, and loving — in comparison to men, women had by far a lesser sense of the affectionate nature of these touches. Another example in gender difference was a study by Fisher et al. (1976) [46] that demonstrated receiving a seemingly inadvertent touch by a librarian in the process of checking out books had a favorable response in women, while males showed no significant change in response. The factor of status/dominance can also play a significant role in social touch. Henley (1973) [72, 73] conducted an early observational study of touch in public settings that demonstrated nonreciprocal social touch can serve as a reminder of social status: the toucher reminds the touchee of the recipient’s lower status. A subsequent study by Summerhayes and Suchner (1978) [165] qualified Henley’s results to show a general diminishing effect of nonreciprocal social touch: if the touch is initiated by someone of a lower social standing, then the status of the higher-status recipient is reduced. 
Summarizing their results from a similar study as “it is better to give than to receive”, Major and Heslin (1982) [103] furthered Summerhayes and Suchner by demonstrating that, not only is the social status of the touch recipient diminished, but the status of the initiator is considered increased. Employing a different methodology, Florez and Goldman (1982) [48] compared interpersonal touch between dyads composed of blind and sighted individuals. The status factor was consistent regardless of the sightedness of the participant — the toucher was perceived more highly than the touchee — even though the blind participants recorded greater overall positive evaluations to touching. As introduced in Section 2.2.1, Jourard’s studies on body-accessibility found the location of the touch to be a significant factor [83, 84]. A study by Burgoon (1991) [21], while also investigating dominance along with other factors, examined observers’ perceptions to interpersonal touch. One outcome was that touch 22  2.3. Human-Animal Interaction to specific body locations — e.g., arm, shoulder, waist — more strongly conveyed status. In a similar study, Lee and Guerrero (2001) [97] found body location to be significant in observations of interpersonal touch between colleagues within the context of a work environment. As one example, touches to the face, forearm, and shoulder were all perceived as being flirtatious or affectionate; however, the latter two locations were considered more formal types of touch. In some cases, studies in social touch have demonstrated participant discomfort. A study by Walker (1975) [180] found participants notably uncomfortable after touch interactions in a simulated psychology encounter group. As a further example, Whitcher and Fisher (1979) [185] found that the touch of a nurse prior to surgery resulted in a significant increase in anxiety for males, even though women participants responded positively.  2.3  Human-Animal Interaction  The previous section presented touch as a unique sense modality that is relevant both in social and emotional contexts, yet has been underappreciated by the research community likely due to difficulties inherent in the study of touch. In our thesis, however, we attempt to mitigate these methodological issues by considering the interaction not between humans but, rather, between humans and animals, with a particular focus on companion animals. We begin this section with a discussion of anthropomorphism in relation to animal emotions. We then cover several major areas of human-animal interaction research.  2.3.1  Anthropomorphism and Animal Emotions  In the sciences, the debate over the influence of anthropomorphism — the attribution of human traits to that which is non-human — has been ongoing for well over a century. For our thesis, the relevant discussion has been within the fields of comparative psychology and animal behavior. Wynne (2007) [190] — in a contemporary reaffirmation against anthropomorphism in scientific study — presents a detailed history of the debate, which we summarize in the following paragraph. 23  2.3. Human-Animal Interaction Anthropomorphism in the study of animals began in the late 19th century with the work of Darwin (1872) [32] and, subsequently, Romanes (1882) [128]. Soon after, Morgan (1894) [113] sought to apply greater scientific control over its application in animal psychology — see [28] for a refutation of the common reading that Morgan was wholly against anthropomorphism. 
In the early 20th century, however, anthropomorphism was summarily rejected by Watson [182] in his establishment of behaviorism. Similarly, though outside the realm of psychology, the founding of ethology by Tinbergen [168] and Lorenz in the 1930s sought to relegate anthropomorphism from their discipline as well. More recently, however, some have called for a reevaluation. Returning to Romanes and Morgan, Burghardt (1985, 1991) [19, 20] called for a “critical anthropomorphism” that drew its data from a myriad of sources, including anecdotes and personal observations, yet remained grounded in scientific methodology. With a focus on the study of animal emotions, Bekoff (2000) [11] also suggested a “biocentric anthropomorphism” that allowed for the use of anthropomorphic vocabulary to make animal behaviors more accessible to the researcher. Moving from the anthropomorphism debate, we draw attention to work within neuroscience that points to emotionality in animals; particularly, mammals. One of the more widely discussed is that of the “triune brain” originated by MacLean (1970, 1990) [101, 102]. The triune brain is a model that divides the brain into three evolutionary layers: the primitive “reptilian” (basal ganglia) level serves innate behaviors and basic motor functions; the secondary “paleomammalian” (limbic system) layer handles attention, emotion, and memory; and the outer “neomammalian” (neocortex) layer deals with perception, reasoning, and language. While the model simplifies the inherent complexity of interconnected brain functions, its structure points to a commonality between humans and other mammals vis-à-vis emotions. The higher-functioning neocortex is found to differ significantly among mammals, with humans having the most developed version; however, the limbic system, which is integral to emotion functions, is very similar across mammals [123]. In another example from neuroscience research, Panksepp (1998) [122] identified specific areas of the brain coupled with the related chemicals employed in the construction of basic emotions. Panksepp considered these neuro-factors to be 24  2.3. Human-Animal Interaction common among all mammals, and he divided the basic systems between positive emotions — SEEKING/expectancy, LUST/sexuality, CARE/nurturance, PLAY/joy — and negative ones — RAGE/anger, FEAR/anxiety, PANIC/separation. Our thesis is further predicated upon humans commonly attributing emotions to animals, especially companion animals. The aforementioned scientific debate points to the natural tendencies within humans to anthropomorphize. In fact, Burghardt’s use of the qualifier “critical” was to distinguish scientific exploration from the naïve form of anthropomorphism found in humans’ everyday interactions with animals. The zoologist and animal behaviorist, Kennedy (1992) [90, p. 5] stated, [A]nthropomorphic thinking about animal behaviour is built into us. . . . It is dinned into us culturally from earliest childhood. It has presumably also been ‘pre-programmed’ into our hereditary make-up by natural selection, perhaps because it proved to be useful for predicting and controlling the behaviour of animals.  2.3.2  Influence of Human-Animal Interaction  Companion animals have been shown to provide a variety of epidemiological, physiological, and social benefits. The most frequently cited work was by Friedmann et al. (1980) [53] that found pet owners had an increased one-year survival rate for coronary heart disease patients. 
Friedmann and Thomas (1995) [54] later conducted a similar investigation, with similar results, for survivors of a recent heart attack. With a focus on prevention, both Anderson et al. [3] and Patronek and Glickman (1993) [124] demonstrated that pet ownership provided measures against cardiovascular disease. A study by Shiloh et al. (2003) [154] found that petting an animal was able to reduce stress-induced anxiety. Beck and Katcher (2003) [10] consider the positive effects shown in these studies result from the interplay between humans’ evolutionary relationship with animals — biophilia [89, 188] — and the social support of companion animals. The social nature of pets is furthered by Serpell (1996) [145, ch. 8] where, among many examples, he highlights companion animals as patient, mute listeners serving the role of therapist yet superior in their added acceptance of touch interaction.  25  2.4. Socially Interactive Robots Companion animals, however, can also have many deleterious effects. The time, energy, and finances invested in the care of a pet can be extensive [2, 24]. Companion animals can be the cause of chronic allergies and respiratory conditions [1]. Furthermore, in cases of poor hygiene, animals can carry disease and parasites [63, 159]. Absent proper care, pets can inflict damage on the home or cause noise disturbances [12]. Worse, unfamiliar humans or improperly socialized pets can result in aggressive animal behavior [104]. While bond formation can be a positive aspect of pet ownership, the subsequent loss of a companion animal can cause grieving on par to the loss of a human companion [59]. While the above studies demonstrated that companion animals play a significant role — both positive and negative — in the lives of humans, broader research in this field has been found lacking. Beck and Katcher (2003) [10] point to several areas in need of more rigorous investigation. For example, they note that physiological and psychological studies on the benefits of companions animals — many referenced above — have been countered by similar studies showing no or even negative effect; therefore, more research is needed to resolve the conflict. Beck and Katcher also state that healthy populations, especially children, should be considered, as some studies demonstrate positive influences on communication skills and nurturing. Wilson and Barker (2003) [187], on the other hand, critique the methodology employed in human-animal interaction studies. They point to problems with ensuring proper research controls, sample selection, and measurement of outcome variables. Furthermore, Wilson and Barker note important factors commonly not considered in research studies: the size and breed of the animal; the animal’s handler; the setting of the interactions; and the subsequent withdrawal of the animal at the conclusion of the study.  2.4  Socially Interactive Robots  Fong et al. define socially interactive robots as “robots for which social interaction plays a key role” [50, p. 145]. Goodrich and Schultz further clarify this social interaction to include “social, emotive, and cognitive aspects of interaction” in which “the humans and robots interact as peers or companions” [61, p. 205]. 26  2.4. Socially Interactive Robots While these definitions help to distinguish socially interactive robots from within the much larger field of human-robot interaction, the resultant domain is itself still fairly broad. 
Consequently, for our thesis we limit the field of study to robots that possess the following characteristics: • The robot is designed to interact with humans in a social context. • The robot incorporates emotion into the interaction. • The robot utilizes the touch modality. • The robot is, to some degree, zoomorphic. With these additional constraints in consideration, we present here (alphabetically) an overview of robots that share a focus similar to our own. Next, we will survey social interaction research in which these robots have been employed. Finally, we will differentiate our Haptic Creature from these related robots. AIBO is an autonomous, quadruped robot the size and shape of a small dog but highly robotic in appearance. Developed and sold commercially by the Sony Corporation, it was envisioned as an “entertainment robot” pet with emergent behavior [56]. AIBO has sensors for distance, sound, vision, acceleration, vibration, and temperature. The robot’s touch-sensing capabilities are through tactile sensors mounted on its head, chin, back, and paws. AIBO expresses its emotional state through colored lights on its face, physical posturing, and vocalizations. Huggable is a semi-autonomous, teleoperated plush robot in the form of a child’s teddy bear. Developed by Stiehl in the Personal Robots Group at the MIT Media Lab, it is a research platform to investigate the design and application of robotic companions [98, 162]. The Huggable has sensors for vision, sound, inertia, joint angle, and internal temperature. The robot perceives touch through an extensive, full-body network of force, electric field, and temperature sensors [160]. The Huggable expresses itself through head, ear, and arm movements as well as sounds through a speaker located in its mouth.  27  2.4. Socially Interactive Robots NeCoRo is an autonomous, plush robot with an extremely life-like cat appearance. Developed by Shibata with the Omron Corporation and commercially available, it was envisioned as a gentle, natural interface between humans and machines [117, 166]. NeCoRo has sensors for sound, object movement, body orientation, and body movement. The robot perceives touch through tactile sensors in its head, chin, and back. NeCoRo generates expressions through a wide range of cat-like vocalizations as well as physical posturing. Paro is an autonomous, plush robot closely modeled after a baby harp seal (Figure 2.2). Developed by Shibata in partnership with the Japanese National Institute of Advanced Industrial Science and Technology (AIST) and available commercially, it was designed as a “mental commitment” robot [153] for emotional attachment. Analogous to animal-assisted therapy, Paro has been targeted for robot-assisted therapy in hospitals and extended care facilities [179]. The robot has sensors for light, sound, temperature, posture, and it senses touch through tactile sensors mounted across its body. Paro expresses its emotional state through facial expressions, animal-like vocalizations, and physical posturing.  Figure 2.2: Paro robot. Pleo is an autonomous, quadruped robot representing a miniature baby Camarasaurus dinosaur (Figure 2.3). Developed and sold commercially by Ugobe, it was designed, similar to AIBO, as a robotic pet with emergent behavior [175]. Pleo has sensors for sound, vision, tilt, vibration, leg force feedback, and mouth object detection. Utilizing capacitive sensors, the robot is able  28  2.4. Socially Interactive Robots to sense touch on its head, chin, shoulder, back, and legs. 
Pleo expresses its emotional state through posturing and vocalizations.  Figure 2.3: Pleo robot. (Photo: Jiuguang Wang, with permission) Probo is a teleoperated, plush robot representing a small, imaginary creature with elephant-like features (Figure 2.4). Developed by Saldien and Goris at the Vrije Universiteit Brussel, it is a research platform for the study of humanrobot interaction and robot-assisted therapy [140]. Probo’s current sensing capabilities are limited, but it will eventually have sensors for vision, sound, and touch. The robot expresses its emotional state through nonsensical speech [193] and facial expressions, with the unique distinction of an actuated trunk.  Figure 2.4: Probo robot. (Photo: Vrije Universiteit Brussel, with permission)  29  2.4. Socially Interactive Robots  2.4.1  Social Interaction Research  Some of the robots introduced in the previous section were developed specifically for commercial purposes; however, all the robots have been employed to some degree in research on social interaction. Many of these studies drew from participant pools of either children or the elderly; in a few of the latter cases, the participants had psychological disorders — e.g., dementia. Therefore, a minority of the research concentrated on normal, adult populations. Studies commonly took place in controlled environments — e.g., observation labs, classrooms, nursing homes. In cases where free-play was part of the study procedure, the interaction durations were often short, on the order of 5–10 minutes. Most relevant to our thesis, of course, are the aspects of touch examined. From this subset of studies, many focused only on high-level properties, such as frequency of touch rather than, for example, manner, location, or intent. More importantly, all touch studies were limited to human-initiated touch; none investigated touch originating from the robot. AIBO and Paro have received considerably more research attention than the other robots. The vast majority of studies with AIBO have focused on children [85, 107–109, 183], while a few have also compared results with adults or the elderly [91, 174]. Participants often reasoned about and interacted with the robot and, for comparison, a live animal or related plush animal toy. Results generally showed that, while participants considered AIBO to be man-made, they interacted with the robot as if it were a real animal. Paro has most frequently been studied with the elderly [137, 150, 174, 177] and, sometimes, children [152]. Paro has also had some cross-cultural coverage [149]. Many of the studies with Paro have been qualitative or through surveys [149, 151, 174]; however, some have employed physiological or neurological measures [88, 110, 137, 177, 178]. While investigations with AIBO have concentrated on perceptions of the robot and the types of interaction with it, studies with Paro have generally focused on the psychological and social effects of the interaction. In studies with elderly populations, results have demonstrated increased social interactions; improvements in mood; and a reduction of stress among nursing home patients. 30  2.4. Socially Interactive Robots NeCoRo and Pleo, both commercial robots, have had relatively minor research applications. A study incorporating open-ended play with NeCoRo found that prior experiences with felines affected subjective views of the robot [153]. NeCoRo has also been employed in the development of robot psychology practices [100]. 
In one preliminary investigation, nursing home residents with dementia had increased levels of pleasure and interest when interacting with NeCoRo, but not with a passive, plush toy cat. Conversely, residents’ agitation levels decreased with the plush toy, but not NeCoRo [99]. Pleo was employed in a study that demonstrated naïve instructors intuitively used affective speech when guiding the robot through a task [92]. A study that investigated expectation setting found that participants primed with high expectations for Pleo’s touch sensing capabilities were more disappointed after interacting with the robot than participants primed with low expectations [121]. In one of the only long-term studies among these related robots, Pleo was observed in casual play within families’ homes over the course of 2–10 months [44]. Participants frequently considered Pleo more as an alternative to a live pet, and less so as a toy, yet never as a companion. However, participants long-term interest waned as Pleo’s interactivity fell short of expectations; the alternative pet was thus relegated to toy status. Surprisingly, the Huggable and Probo, both developed specifically as social touch research platforms, so far have had limited utilization in formal user studies. The Huggable, the most mechatronically advanced among these robots, has mostly focused on gesture recognition algorithms [94, 161] — cf. our gesture recognition work [27] — and a pilot investigation of platform teleoperation by novices [98]. Probo similarly has been used in only one research study, which examined participants’ ability to recognize the robot’s facial expressions [139].  2.4.2  Differentiating the Haptic Creature  In Chapter 4, we will present the design and development of our Haptic Creature robot. This robot differs from those presented above in two distinct ways. 31  2.5. Summary Perhaps the primary differentiation of the Haptic Creature is its strong concentration on the modality of touch for affect display. The Huggable is the only other device possessing full-body sensing — much more advanced than even our own — Paro and AIBO both have only limited interaction points for touch input; and Probo, though planned, currently does not have any touch sensing capabilities. More importantly, however, is that each of the other robots focuses much less on touch for affect display originating from robot itself; rather, they rely more on visual — facial or postural — and auditory expression. The Haptic Creature, on the other hand, relies solely on touch when communicating its affective state to the human. A second differentiating aspect of the Haptic Creature is the level of zoomorphism. The robots mentioned in Section 2.4 all, to varying degrees, have clearly defined features and overall shape. While a consideration of the Haptic Creature was that it be recognizable as animal-like, it was consciously designed to have a more minimalistic appearance.  2.5  Summary  In this chapter, we have presented foundational research related to our thesis, which is to investigate the role of affective touch in the social interaction between human and robot. We first demonstrated the general social importance of emotion communication. We then discussed the specific social importance of touch and this modality’s potential to communicate emotions. However, we also pointed to methodological reasons that may explain why this unique sense has received less research attention relative to vision and audition. 
We therefore presented research in human-animal interaction as an alternate domain, in the hopes of obviating some of these confounding factors in social touch studies. Finally, we presented the small set of social robots that overlap the various domains discussed. In relation to these robots, we covered relevant social interaction research and, then, differentiated our Haptic Creature robot.  32  2.5. Summary In the next chapter (3), we present the preliminary user study that explored the feasibility for our thesis. This is followed by Chapter 4, where we introduce our Haptic Creature robot. Subsequent chapters (5–7) present the various interaction decomposition user studies.  33  Chapter 3  Preliminary Investigation In this chapter, we begin our investigation into social human-robot affective touch. This encompasses the first two phases of our research, as shown in Figure 1.1, which served as an exploratory investigation of the general premise of our thesis: the role of affective touch in social human-robot interaction. We begin this chapter with a description of the Hapticat, our prototype robot pet designed to simulate emotional expression through touch. Next, in Section 3.2, we document a preliminary study in which this hand-actuated robot was employed to examine several facets of affective touch. Specifically, we investigate humans’ expectations of the Hapticat’s emotional response to specific touch gestures; the ability of the robot to communicate its emotional state through touch; and humans’ affective responses to interacting with the Hapticat. In the study, participants reported their expectations of the Hapticat’s emotional response to various gestures they might use when touching it. They then physically performed a sequence of these touch gestures to the robot and were asked to identify its corresponding emotional display. Participants also reported their emotional state as a result of interacting with the Hapticat. In Section 3.3, we present the study results, which demonstrated that participants’ expectations of the robot’s response to touch gestures correlated with our predetermined mappings for gently petting, rubbing ears, pinching body, poking body, resting hands on top, and leaving it alone, but not for shaking, vigorously petting, hugging, and tickling. The Hapticat was effective at displaying its simulated emotional state for playing dead, asleep, upset, and content; however, participants often perceived happy as content. When compared with a nonactive Hapticat, an overall (positive) shift in participants’ self-reported emotional state was observed when the robot presented active haptic responses. For the individual responses,  34  3.1. Hapticat Design and Implementation there was a statistically significant difference for both the asleep and upset emotional renderings.  3.1  Hapticat Design and Implementation  In the opening scenario from Chapter 1, Stella participated in a variety of tactile interactions with her furry companion. She could feel Roi’s weight, warmth, furry exterior, vibrations from purring, and subtle movements as he adjusted positions. Furthermore, Stella’s direct interactions with Roi — e.g., stroking or nuzzling — caused many of his tactile features to adjust. For our initial investigation in affective touch, we chose to ground the robot’s behavior in those of a cat. We note, however, that we were not attempting to produce a realistic artificial cat — see [166] for the pursuit of a realistic cat robot. 
Rather, we were using a set of cat-like qualities as a starting point. This approach had several advantages. Most importantly for our work, it gave us the freedom to include other tactile and affective features not inherent to felines, as well as eliminate features as we saw fit. Secondly, our robot need never approximate realism. As a result, we had hoped to obviate the pitfalls of Mori’s uncanny valley [114], which posits that humans have strongly negative responses when robots attempt, but ultimately fall short of, realistic appearance and behavior. The final advantage was that both complexity and cost were greatly reduced, which allowed for rapid iteration of designs. Our end result was the Hapticat, a prototype robot pet designed to study affect display through touch (Figure 3.1). Two overall considerations guided our decisions for the design of the Hapticat. First, we carefully considered which distinct actuations to implement. Cats provide a variety of tactile interaction; however, we were not limited solely to cat-like qualities, so our initial set of choices was rather large. Following from this initial consideration, our second consideration was to avoid the robot being perceived simply as a “bag of tricks”: a random and unrelated set of actuations. Rather, we wanted to provide a holistic, integrated experience. We finally limited the actuation to a small set that we could quickly implement and would work well in concert with one another. Our goal was that, as for a cat, 35  3.1. Hapticat Design and Implementation  Figure 3.1: The Hapticat. (Photo: Martin Dee, with permission) several of these actuations employed together at varied settings would provide an expressive means of affect display. The prototype itself was composed of five major features: a body, two ear-like appendages, a breathing mechanism, a purring mechanism, and a warming element — Figure B.1 displays details of the prototype internals. The Hapticat had a total of four degrees of freedom, which are provided by the ears, the breathing mechanism, the purring mechanism, and the warming element. The prototype’s actuation and the implementation of its major features are described in the following sections.  3.1.1  Prototype Actuation  The prototype was controlled through Wizard of Oz techniques [105]. That is, by watching the actions of the human with the robot, we manually actuated the ears, breathing, and purring mechanisms to simulate an automated response in the Hapticat. We chose to use this approach as an expeditious and economical method to evaluate our proof-of-concept before introducing sensors and computer-controlled actuators.  36  3.1. Hapticat Design and Implementation  3.1.2  Body  The form factor of the body was intended to be organic yet relatively non-zoomorphic. Several styles were produced, with the final body design being reminiscent of a rugby ball. The individual parts making up the body were: an outer shell, an inner filling, and a tail. While the focus of our research was on touch, we also did not want the general appearance of the Hapticat to detract from the interaction. Therefore, the outer shell was designed to be pleasing both visually as well as haptically. A variety of materials and colors were examined for use. The original design was to use synthetic fur, but we eventually settled upon polyester fleece for its ease of construction, comfortable feel, and lower cost. The color of the shell was solid, light brown adding to its organic appearance. 
The design goal for the inner filling was to provide a balance between a comfortable feel and proper mass for the body. The system was composed of several small cloth bags filled with polystyrene ("bean bag") pellets that were sealed with twine. The bags were constructed in a variety of sizes to better fit the different parts within the shell. To adjust the weight and feel of the prototype without changing the overall size, we added uncooked rice to several of the bags.

During pilot tests of the prototype, it became clear that we needed a means to conceal the hoses and cords attached to the actuators within the body. As a result, the cords were bound together then wrapped with the same fleece material used for the outer shell, giving the impression of a non-functioning tail.

3.1.3  Ears

Although the main role for ears is normally hearing, in animals they also provide a means for expression: their erectness and orientation convey information [32]. With our focus on touch, however, we chose to use stiffness as a haptic analog for these visual properties. Additionally, ears provide a physical interaction point where a human can grasp or stroke them.

Atop the body of the Hapticat were two small appendages visually resembling ears (Figure 3.1). While their location was different from where one might expect ears on an animal, this position provided easy access when the Hapticat was on a human's lap. Table 3.1 presents the various ranges the ears can represent.

Table 3.1: Hapticat mechanism ranges.

  Mechanism   Range
  Ears        Limp, Medium, Stiff
  Breathing   None, Slow, Medium, Fast
  Purring     None, Slow, Medium, Fast
  Warming     None, Low

The outer, visible portion for each ear was a skin made of a lightweight, white cloth sewn into the body. The actuation mechanism was a closed-air system comprised of one balloon for each ear clamped to plastic tubing. The tubing, in turn, ran out the body via the tail to a manually controlled syringe that regulated the flow of air in the system.

3.1.4  Breathing Mechanism

Designed to bring a living quality to the Hapticat, breathing provided both visual and haptic feedback to the human. One could see as well as feel the body expand and contract with each actuation of the mechanism. Table 3.1 lists the various ranges that can be represented by the breathing mechanism.

The breathing mechanism was a closed-air system built with a latex bladder clamped to plastic tubing that exits the body through the tail. Outside the tail on the opposing end, the tubing had a coupler that attaches to a makeshift bellows used to inflate and deflate the bladder.

3.1.5  Purring Mechanism

Purring in the Hapticat was designed to mimic a cat's purr; however, its focus was on the vibratory, rather than audible, qualities of the purr. The prototype's purring could be felt when in contact with the human's body. Table 3.1 presents the various ranges that can be represented by the purring mechanism.

Purring was actuated by means of a small (1 watt) brushed DC motor with an offsetting weight attached to the shaft. The motor was mounted in a tight housing for protection as well as to amplify the vibration. This housing, in turn, was enclosed in the center of the Hapticat's body. The motor's power lines ran out the body, through the tail, to custom electronics that attach to a computer via the parallel port. The states were regulated by custom software written in C++ to drive the motor [148].
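As an illustration only, a minimal sketch of such a parallel-port motor driver on a Linux host is given below, assuming the custom electronics interpret the port's data byte as one of the four discrete purring levels from Table 3.1; the port address, byte encoding, and identifiers are illustrative assumptions rather than the actual Hapticat implementation. In the prototype itself, the purring level was ultimately selected by a hidden experimenter as part of the Wizard of Oz control described in Section 3.1.1.

// purr_driver.cpp: illustrative sketch only, not the original Hapticat driver.
// Assumes a Linux/x86 host with the motor electronics on the LPT1 data register.
#include <sys/io.h>   // ioperm(), outb()
#include <cstdio>
#include <cstdlib>

static const unsigned short kParallelPortData = 0x378;  // assumed port address

// Discrete purring levels, mirroring the range in Table 3.1.
enum PurrLevel { PURR_NONE = 0, PURR_SLOW = 1, PURR_MEDIUM = 2, PURR_FAST = 3 };

// Request access to the single data-register byte; requires root privileges.
static bool InitPort() {
  return ioperm(kParallelPortData, 1, 1) == 0;
}

// Write the requested level; the custom electronics (interface assumed here)
// would translate this byte into a drive level for the 1 W brushed DC motor.
static void SetPurr(PurrLevel level) {
  outb(static_cast<unsigned char>(level), kParallelPortData);
}

int main() {
  if (!InitPort()) {
    std::perror("ioperm");
    return EXIT_FAILURE;
  }
  SetPurr(PURR_MEDIUM);  // e.g., the purring setting used for the happy response
  return EXIT_SUCCESS;
}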
3.1.6  Warming Element

In an attempt to radiate warmth from the Hapticat, a household heating pad was inserted between the outer shell of the body and the inner filling. An unintended positive side-effect was that the pad helped to pull the look and feel of the body together. Previously, the coarse granularity of the inner bags could be seen and felt as lumps; the pad provided a more cohesive shape.

The heating pad had four settings: none (off), low, medium, and high. We elected to use only none and low (Table 3.1); in pilot tests of the prototype, the other settings proved too warm. It should be noted that once the pad was warm it took a considerable amount of time — several minutes — for the heat to dissipate when turned off. For this reason, we left the warming element off during the user study.

3.1.7  Response Settings

The Hapticat was capable of producing five discrete responses: playing dead, asleep, content, happy, and upset. These responses were rendered by selecting a setting for each mechanism from within its respective range (Table 3.1). Table 3.2 lists the specific setting chosen for each response.

Table 3.2: Hapticat mechanism settings for responses.

  Response       Ears     Breathing   Purring
  Playing Dead   Limp     None        None
  Asleep         Limp     Slow        None
  Content        Medium   Medium      Slow
  Happy          Stiff    Medium      Medium
  Upset          Stiff    Fast        Fast

3.2  User Study

With construction of the Hapticat complete, we wished to further our investigation of social human-robot affective touch through empirical testing. To that end, we conducted a user study designed to answer the following research questions:

1. Do the actions we have designated to activate the Hapticat's responses match those expected by the human?

2. Can the Hapticat communicate to a human the emotional responses we had implemented?

3. Does the response of the Hapticat initiate any notable emotional response from a human?

The following sections describe the study in more detail.

3.2.1  Participants

A total of 13 participants (23% female), ranging in age from 20–39, volunteered to take part in the user study. All participants were graduate students in the Department of Computer Science at a Canadian university. Each received CAD$5.00 as compensation for their participation in the study. Nearly half of the participants reported little to no experience with haptic devices.

3.2.2  Study Setup

The setup for the user study consisted of the Hapticat presented to the participant, who sat in a chair in front of a partition. The prototype was connected to the haptic actuators located on the other side of the partition such that the participant could not see the experimenters manipulating the Hapticat. Since this was a Wizard of Oz study, it was necessary to conceal these experimenters to maintain the illusion that the Hapticat was responding independently. The participant was able to see the experimenters when entering the room; however, the participant's back was to the partition so the experimenters were not visible during the study. At no time was the participant able to see the Hapticat's actuating mechanisms.

Figure 3.2: Setup for preliminary investigation.

One experimenter controlled the breathing while the other controlled both the purring and the ears. The study facilitator sat with the participant in front of the partition.
He discreetly held a small signaling device — a switch controlling an LED behind the partition — to communicate to the other experimenters when to start the response of the Hapticat. Figure 3.2 illustrates the study setup.

3.2.3  Procedure

The study took approximately 30 minutes to complete for each participant. It was divided into three parts: mapping actions to Hapticat responses, observation of affective response, and a questionnaire. We detail each part here in turn.

Mapping Touch Actions to Hapticat Responses

During the first part of the study, participants were asked to look at the Hapticat, which was originally placed beside them. Without touching or interacting with it, participants were asked to answer a questionnaire regarding the responses expected after performing a particular action (Section B.3). The list of actions participants evaluated was: gently petting, vigorously petting, rubbing ears, pinching body, poking body, hugging, tickling, resting hands on top, shaking, and leaving it alone. The possible Hapticat responses were renderings meant to convey playing dead, asleep, content, happy, and upset.

Hapticat and Participant Affective Responses

The basic approach for the second portion of the study was observational; however, we also took the opportunity to gather data to compare with our observations. The Hapticat was placed on the participants' laps, and the facilitator asked them to perform a sequence of touch actions (from a subset of the previously mentioned actions). After experiencing a response from the Hapticat for each touch action performed, participants answered two questions. First, participants were asked what the perceived response of the Hapticat was from the list: playing dead, asleep, content, happy, or upset. Second, participants were asked their emotional response to the Hapticat by reporting a level of agreement with the statement, "I had a positive emotional response to the device". Response to this statement was ranked on a five-point Likert scale: strongly disagree (-2), disagree (-1), neutral (0), agree (1), or strongly agree (2).

This part of the study was conducted using a within-subjects design, where the independent variable was the Hapticat's haptic response: either active or nonactive. Each participant performed the touch action sequence twice. Counterbalancing was achieved by having seven of the participants receive the active response in the first sequence of touch actions, while the other six received the active response during the second sequence. Furthermore, the presentation order of the actions was randomized for each sequence.

At no time were participants told that the Hapticat was controlled by the individuals behind the partition. Debriefings with participants afterwards confirmed that they did not suspect this.

Post-Study Questionnaire

During the final part of the study, participants completed a post-study questionnaire (Section B.4). This questionnaire gathered information regarding demographics, background on pet ownership and interaction with animals, and comments regarding the Hapticat and the user study.

3.3  Results

This section details the statistical results obtained from the user study. As described in Section 3.2.3, the data gathered were intended for comparison with our observations during the study.
The following subsections describe the results in more detail: mapping actions to the Hapticat's responses, recognizing the Hapticat's response, the emotional response of the participant, and facilitator observations.

3.3.1  Mapping Touch Actions to the Hapticat Responses

In the first part of the study, participants were asked to look at, but not interact with, the prototype. They were then asked to generate a list of mappings from actions performed to the expected responses from the Hapticat.

We found that our mappings from the actions to the Hapticat responses generally matched the responses expected by the participants. For example, 77% of participants expected the gesture leave alone to cause the robot to become asleep, and 93% expected pinching to cause upset. On the other hand, for shaking, 77% of participants expected the Hapticat's response to be upset, while our mapping was playing dead. Table 3.3 lists our mappings, and Figure 3.3 charts the breakdown of the participants' responses.

Table 3.3: Expected mappings from action to Hapticat response.

  Action                Response
  Shaking               Playing Dead
  Leave Alone           Asleep
  Rubbing Ears          Content
  Gently Petting        Happy
  Vigorously Petting    Happy
  Poking                Upset
  Pinching              Upset
  Hugging               Content
  Tickling              Happy
  Resting Hand on Top   Asleep

There were four mappings for which participants did not show obvious agreement with ours: shaking, vigorously petting, hugging, and tickling. In the case of shaking, 77% of the participants expected the Hapticat to be upset, while only 33% agreed with our mapping, playing dead. In the other three cases — vigorously petting, hugging, tickling — the majority of participants agreed with our mappings; however, due to our small sample size, we cannot definitively say that our mappings were correct. Of note, though, looking closer at the demographics of our sample population, we discovered that the majority of the responses that agreed with ours were from pet owners.

Figure 3.3: Participants' mappings of action to Hapticat's response.

3.3.2  Recognizing Hapticat Affect Display

In the second part of the study, participants physically interacted with the prototype. They were then asked to specify which response was being expressed by the Hapticat.

Participants were able to easily recognize three of the five responses we haptically rendered: 85% of the participants recognized our rendering of playing dead, 77% recognized our rendering of asleep, and 62% recognized our rendering of upset.

There appeared to be some difficulty differentiating between our renderings of happy and content. When the participant rubbed the Hapticat's ears, our rendered response was content; a majority of the participants recognized the response as being content (62%), but 31% stated the response they felt was either asleep or happy. Similarly, when the participant petted the Hapticat, our rendered response was happy; however, most of the participants recognized the response as being either content (46%) or happy (39%). Figure 3.4 charts the participants' perception of the Hapticat's response.

3.3.3  Participant Affect Report

After specifying the Hapticat's response to an interaction, participants were then asked to report any change in affect. Participants reported a slightly more positive emotional response when the Hapticat responded haptically to most actions, compared to a nonactive Hapticat during the same actions (Figure 3.5).
In addition, Table 3.4 shows a comparison of the means for the active haptic and nonactive responses during each action (a response of 0 indicates a neutral response). When participants experienced the haptic rendering of asleep, they had a significantly more positive emotional response compared to no active rendering (t(24) = 5.196, p < 0.05). Similarly, participants had a significantly more positive emotional response to the haptic rendering of upset compared to no active rendering (t(24) = 0.490, p < 0.01).

Figure 3.4: Participants' perception of Hapticat's responses to actions.

The renderings of content and happy did not show a significantly greater positive emotional response compared to no active renderings to the same actions at the p = 0.05 level, but we note that the means for both the content and happy renderings are slightly higher than for no active renderings. The rendering of playing dead also did not show a significantly different emotional response compared to no active renderings at the p = 0.05 level, but we note that the mean was slightly lower than for no active renderings.

3.3.4  Observational Data

Much of the study was intended to observe participants' responses to the Hapticat. Our goal was to see if humans had a change in affect while interacting with the prototype when it rendered active haptic responses. Throughout the study, the facilitator was able to observe the reactions of participants through their posture, facial expressions, and verbal comments.

Figure 3.5: Participants' affective response to active haptic and nonactive renderings.

It was particularly interesting to watch their reactions the first time the Hapticat began to respond to their actions. Nearly all exhibited strong positive reactions. One participant began to laugh so hard that tears came to her eyes, and she was unable to report her responses until she took a short break to regain her composure. The vast majority of participants remained apparently excited and engaged with the Hapticat for the duration of the study. However, one participant felt slightly disturbed by the Hapticat and commented about this throughout the trials. Whether positive or negative, we were encouraged to observe a change in participants' emotional states.

3.4  Discussion

When mapping the response of the Hapticat to a particular action, we found participants generally agreed with our mappings. It was interesting to see how the participants would respond since we did not reveal the Hapticat to be any particular species.

Table 3.4: Participants' mean affective state for active and nonactive Hapticat response.

Response          M       SD      F       p       t
Playing Dead    -0.31    0.63    1.13    0.299   -0.66
                -0.51    0.56      —       —       —
Asleep           1.15    0.69    5.44    0.028    5.20
                 0.00    0.41      —       —       —
Content          0.77    0.73    0.00    0.948    3.33
                -0.15    0.69      —       —       —
Happy            0.92    0.95    0.42    0.525    2.37
                 0.08    0.86      —       —       —
Upset            0.08    1.44   10.01    0.004    0.49
                -0.15    0.90      —       —       —

Each response spans two rows, for the active haptic and nonactive conditions. In answer to "I had a positive emotional response to the device", the scale ranged from -2 (strongly disagree) to 2 (strongly agree), with 0 signifying neutral.

We suspect that participants' responses were largely based on their previous experience with animals. An example of this was the agreement with our mappings being strongest for those who are pet owners in the cases of vigorously petting, hugging, or tickling the Hapticat. In the case of shaking the Hapticat, we conclude that we may have incorrectly mapped the Hapticat's response.
While we mapped shaking the Hapticat to playing dead, our participants thought it would be upset instead. Our rationale in choosing playing dead was that if one was particularly cruel to a creature, it would react more strongly than being upset by effectively “playing possum”. Our results clearly show, though, that participants did not make the same connection. One participant commented that if their pet was so uncomfortable in a situation, it would simply run away. Our use of the purring, breathing, and ear mechanisms in the Hapticat effectively rendered three of the five responses we defined. There was some confusion between the happy and content responses. The difference between the renderings was in the speed of the purring and a half stiff or fully stiff ear; the breathing of the Hapticat remained the same for both responses. 48  3.5. Summary It is possible that the differences between the renderings were too subtle for participants to differentiate them. Particularly since there was no training phase in the study in which to demonstrate the differences, participants likely had to primarily rely on transfer from their knowledge of the responses of animals. However, we also suggest that the emotions of content and happy may be too similar for the participants to conceptually differentiate the two. Our participants reported a slightly greater positive emotional response when they felt the active haptic rendering of the Hapticat when compared to the nonactive rendering (Figure 3.5). While we found statistical significance in only two of the five renderings, all but one caused a greater mean positive emotional response from our participants when active haptics were applied than without (Table 3.4). Only playing dead caused a slightly negative emotional response in our participants. We suggest that when the creature was clearly in an active state, the switching to inactive was interpreted as “dead” as opposed to simply “off”, thus eliciting the negative emotional response.  3.5  Summary  Our Hapticat prototype robot pet enabled us to quickly test the feasibility of social human-robot interaction through affective touch. Furthermore, we were encouraged to continue on with our thesis based on the empirical and observational results of our preliminary investigation. Our next step was to move from a prototype to a fully automated robot. This work will be presented in the following chapter (4). Once the robot was completed, we were then able to delve deeper into various components of the full affective touch interaction loop (Figure 1.2). Chapter 5 will present research on affect display originating from the robot, while Chapter 6 will discuss affective touch originating from the human. Finally, Chapter 7 will present work pertaining to the influence of affect display through touch.  49  Chapter 4  The Haptic Creature In this chapter, we discuss the design and development of our Haptic Creature robot, which encompasses the third phase of our research (Figure 1.1). Our approach was to leverage research in human-animal interaction through use of a robotic creature that mimics a small animal, such as a cat or dog, sitting on a person’s lap. The Haptic Creature (Figure 4.1) interacts with the human through the modality of touch. An array of touch sensors over its body, coupled with an accelerometer, allow the robot to sense being touched and moved. It displays its emotional state through adjusting the stiffness of its ears, modulating its breathing, and presenting a vibrotactile purr.  
Figure 4.1: The Haptic Creature. (Photo: Martin Dee, with permission)

The development of the robot was iterative. We began with the results of our preliminary investigation (Chapter 3); however, as various aspects of the Haptic Creature advanced, pilot tests were frequently conducted to demonstrate which aspects worked and which did not. Results were fed back into modifications, which were then pilot tested again. In particular, this iterative process drove the design of the Haptic Creature's emotional expressions, to be described in Section 5.1 of the next chapter.

We begin this chapter by presenting the considerations followed in the design of the Haptic Creature. The remaining two sections of the chapter present the robot's various hardware and software components.

4.1  Design Considerations

The Haptic Creature robot continues directly from our Hapticat prototype as discussed in Section 3.1. The Haptic Creature differs from related robots in its strong concentration on the modality of touch for affect display, in addition to its minimalistic appearance (Section 2.4.2). Here we outline the considerations followed in the design and development of the robot.

1. The Haptic Creature should be perceived as animal-like but not represent a specific animal nor attempt to be overly realistic. Similar to our consideration for the Hapticat, this removed any limitations on characteristics inherent to any one species. More importantly, however, it reduced the human's expectations of the robot, which, in turn, allowed for a shift in focus from the form to the interaction.

2. The Haptic Creature's actuation mechanisms must work in concert with one another. While not limited to the characteristics of one particular species, the robot's various means of expression must still seem to be part of a coherent whole in order to provide an engaging experience. Some features may dominate others — e.g., breathing versus ear stiffness — but they should all appear as belonging together.

3. The Haptic Creature should interact solely through the touch modality. The robot must sense the human solely through touch and, similarly, its expressive capabilities must be limited only to haptic means. This carries the implied restriction that visual and auditory artifacts must be minimized wherever possible.

4. The Haptic Creature should have a pleasant feel. Since the focus of the interaction is touch, the overall feel of the robot should not be unpleasant — e.g., minimize sharp edges, ensure the fur is comfortable to touch. This includes the robot's weight, which should approximate that of a similarly sized animal.

5. The Haptic Creature's contact points should be maximized. The form of the Haptic Creature was often guided to facilitate human haptic interaction. For example, the robot's ears were elongated to better afford grasping. Similarly, its backside was expanded to increase surface area and rounded to accommodate the natural position of the human hand.

6. The Haptic Creature should have no discernible facial features. As presented in Section 2.1.2, the face is the primary means of emotion expression for humans. Furthermore, as presented in Section 2.3.1, humans have a tendency to anthropomorphize animal emotions. Therefore, our focus on affective touch required the removal of any confounds related to interpretations of emotion from the face.
As demonstrated by the myriad of emotion recognition studies utilizing images of facial expressions, even a static face can convey emotion. Furthermore, the robot’s fur has the potential to adjust when touched. If we were to employ a static face, the shifting of the fur could unexpectedly modify the face, resulting in the perception of expressions changes. 7. The Haptic Creature should be robust. The robot must be able to withstand direct, physical interaction in semisupervised environments with successive untrained individuals over extended time periods.  52  4.2. Hardware  4.2  Hardware  The robot weighs 2.5kg (5.5lbs). Its body is 33cm (13in) long — 13cm (5in) from snout to back, 20cm (8in) from back to rump — and its tail (which masks the communication and power cables) is 100cm (39in) in length. The robot is 10cm (4in) wide at its snout and 20cm (8in) wide at the broadest part of its back.  E  R  Figure 4.2: The Haptic Creature without exterior fur. Visible are the fiberglass shell; touch sensor mesh; the two ear bulbs [E]; and the rib cage [R]. The Haptic Creature’s exterior is constructed of a synthetic (faux) fur. The thread length for the majority of the fur is approximately 2cm (.8in) nap, with a much shorter nap for the underbelly to provide a contrast in texture. Directly beneath the fur is a hand-moulded fiberglass shell (Figure 4.2). This shell serves both as a stable structure with which to affix the touch sensors as well as to protect the electronics and mechatronics mounted to a removable acrylic chassis within (Figure 4.3). The Haptic Creature has three degrees of freedom through which it communicates its emotional state: a pair of ears, which vary in stiffness; lungs, which 53  4.2. Hardware simulate breathing; and a purr box, which renders a vibrotactile “purr”. The robot has an array of force sensors across its body to sense touch and an accelerometer to sense movement. These features are all controlled by means of a microcontroller that communicates with a host computer. Each of these will be described in the following sections.  E  A  L  M  P Figure 4.3: The Haptic Creature mechatronics. Visible, from left to right, are the motor control board [M] and the mechanisms for the ears [E], purr box [P], accelerometer [A], and lungs [L]. Not visible, underneath chassis, is the FSR board, which houses the microcontroller. Two features were initially developed for the Haptic Creature but not utilized in the current version. First, in an attempt to provide a more flesh-like feel, a prototype skin was fabricated from silicon rubber (Smooth-On “Dragon Skin”). This skin was to layer between the fur and the fiberglass shell; however, it interfered with the touch sensors and was cumbersome to integrate into the system. Second, to give a sense of warmth, we designed for the use of heating elements but found the Haptic Creature’s mechatronics generated adequate heat.  54  4.2. Hardware  4.2.1  Ears  The Haptic Creature has two ears, each capable of changing stiffness independent of the other. The ears do not change position or move in any way; rather, they must be physically squeezed to sense their level of stiffness. Each ear (Figure 4.2 [E]) is a self-inflating rubber bulb with a one-way valve at its tip (AMG model 106-792). The bulb’s opposing end is connected via a silicon tube to an air outtake valve driven by a Hitec HS-645MG analog servo (Figure 4.3 [E]). 
A servo was chosen for the ear mechanism because it provided a low-cost, off-the-shelf solution for simple yet accurate position control. When the valve is fully closed, squeezing the bulb allows no air to be released, so the ear is at maximum stiffness. Conversely, when the valve is fully opened, air is allowed to escape freely when the bulb is squeezed, so the ear is at minimum stiffness. The servo's full range of motion to adjust the valve from open to closed is 45 steps; however, as determined through informal pilot tests, at most five different levels of stiffness are observable throughout this range.
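As a concrete illustration of how such a quantized control might be expressed in the host software, the following sketch maps a normalized stiffness command onto the five observable valve positions. This is illustrative only, not the thesis implementation; the class and method names are hypothetical.

```java
// Illustrative sketch only -- not the thesis code. Maps a normalized ear-stiffness
// command onto the five observable stiffness levels over the 45-step valve range.
public final class EarValveSketch {

    private static final int VALVE_RANGE_STEPS = 45;   // fully open .. fully closed
    private static final int OBSERVABLE_LEVELS = 5;    // from informal pilot tests

    /**
     * Quantizes a stiffness command in [0.0, 1.0] (limp .. stiff) to one of the
     * five distinguishable levels, then returns the corresponding servo step.
     */
    public static int stiffnessToServoStep(double stiffness) {
        double clamped = Math.max(0.0, Math.min(1.0, stiffness));
        int level = (int) Math.round(clamped * (OBSERVABLE_LEVELS - 1));   // 0..4
        double stepsPerLevel = VALVE_RANGE_STEPS / (double) (OBSERVABLE_LEVELS - 1);
        return (int) Math.round(level * stepsPerLevel);
    }

    public static void main(String[] args) {
        for (double s : new double[] {0.0, 0.25, 0.5, 0.75, 1.0}) {
            System.out.printf("stiffness %.2f -> servo step %d%n", s, stiffnessToServoStep(s));
        }
    }
}
```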
4.2.2  Lungs

The lungs comprise the mechanism that simulates breathing within the Haptic Creature: a Hitec HSR-5980SG digital servo drives a cantilevered jack (Figure 4.3 [L]) to which the robot's rib cage (Figure 4.2 [R]) is attached. Like the ear mechanism described above, a servo was chosen for its ease of position control. Furthermore, the particular servo model here was selected for its speed and the high torque necessary for the lung actuation. A trade-off for utilizing a servo, however, was that its discrete steps could at times be felt. While efforts were undertaken to dampen the effects, the movement was not as smooth as we would have preferred. The mechanism's full range of motion, from fully exhaled (minimum volume) to fully inhaled (maximum volume), is 3 cm (1.2 in) laterally and 3 cm (1.2 in) vertically, which corresponds to 100 steps of the servo. However, this far exceeds natural, realistic breathing, so the range is limited to 1.4 cm (0.5 in) laterally and 1.4 cm (0.5 in) vertically, which corresponds to 45 steps of the servo.

4.2.3  Purr Box

The purr box is the mechanism within the Haptic Creature that generates the vibrotactile purr. It consists of a motor with an eccentric mass attached to its shaft (Figure 4.3 [P]). The DC motor is a 20 watt Maxon RE 25 (model 118752), 25mm in diameter, with graphite brushes. The eccentric mass is fabricated from a C1018 steel disk and weighs 12g. It is 10mm thick with an 18mm outer diameter and 9mm of material remaining. We tested several less expensive motors; however, all generated unwanted audible artifacts, and, after extended use, many degraded in performance. The Maxon RE 25, while more costly, was both silent and robust.

4.2.4  Touch and Movement Sensing

Touch sensing is achieved through a mesh of 56 Interlink force sensing resistors (FSR) — 47 round (1.3cm / 0.5in), 9 square (3.8cm / 1.5in) — mounted to the Haptic Creature's fiberglass shell (Figure 4.2). Covering the extent of the robot's body, the sensors are placed at approximately 5cm (2in) intervals on-center, front to back and left to right. Each ear has a sensor on its front and outer side. Figure 4.4 diagrams a two-dimensional representation of the touch sensor layout. Movement is sensed via a Freescale XYZ-axis accelerometer (model MMA7260QT), set to 6g sensitivity and mounted on a Pololu breakout board (model 766). In Section 8.3.5, we reflect further on general considerations for touch sensing technologies.

Figure 4.4: Touch sensor layout, flattened. (The original legend uses colour-coded symbols to mark the snout, front, front side, ear, back, back side, rump, and underbelly sensor regions.)

4.2.5  Communication and Control

Communication with the Haptic Creature for low-level control of its mechatronics is managed by a Microchip PIC18F87J50 microcontroller as part of the Microchip Full Speed USB Demonstration Board (model MA180021). Control commands are sent by the host software to the microcontroller via USB 2.0 in order to set servo positions or motor speeds as well as to query touch or accelerometer values.

The overall system includes a motor control board (Figure 4.3 [M]) and an FSR board (Figure 4.3, underneath the chassis). The motor board comprises the basic electronics that drive all the motors — the two ear servos, the lung servo, and the purr box motor — as well as the (unused) heating elements. The FSR board comprises the touch sensing circuitry in addition to housing the microcontroller. This board is capable of connecting 60 sensors, each of which is addressable via one of four multiplexers (MUX). The FSR board also provides simple circuitry to linearize sensor response (Figure 4.5). Detailed schematics and layout for the two boards can be found in Appendix A.1.

Figure 4.5: FSR linearization circuit.

4.3  Software

Figure 4.6 depicts a high-level view of the Haptic Creature architecture, which is composed of two software systems. Low-level mechatronics control is handled by the microcontroller firmware. All other processes are handled by the host software. The two systems communicate through a specified protocol (Appendix A.3) transmitted over USB 2.0. The microcontroller firmware was written in C (MPLAB C for PIC18 v3.31); however, since its function is simply low-level motor control and sensor reading, its code comprises a very small portion of the robot's software system. The host software, on the other hand, encompasses the vast majority of the Haptic Creature software, which consists of 390 classes written in Java (v1.6.0). The host system was developed simultaneously on Gentoo Linux and Apple Mac OS X (v10.6) — and occasionally tested for compatibility on Microsoft Windows XP. Due to the portability of the Java Virtual Machine (JVM), no special modifications were necessary to run on any of these operating systems.

Figure 4.6: Overview of the Haptic Creature architecture. Human (left) interacts with the Haptic Creature (right) solely through touch, over the haptic channel. This input passes through the various components of the robot — Sensing, Gesture Recognizer, Emoter, Physical Renderer, and Actuation — eventually resulting in an appropriate haptic response to the human.

Figure 4.7 presents an overview of the primary classes of the host software system. This system is divided into several layers, each of which is categorized as either behavioral or mechatronic. The Central Nervous System (CNS) layer constitutes the Haptic Creature's high-level behavior. A Scheduler manages the execution of the Recognizer, Emoter, and Renderer, which allows each to have an execution frequency independent of the others. To simplify debugging of the current implementation, however, the update rate was set the same for all components, 30Hz, which was the required rate of the highest-frequency class, the Recognizer. Any class called to execute when no work was required resulted in a NOP (no operation), so this approach incurred little additional overhead.
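A minimal sketch of this scheduling idea is shown below. It is not the thesis code — the names are hypothetical, and it is written against a modern JDK for brevity — but it illustrates components being ticked at a shared 30 Hz rate, with each tick expected to be a cheap no-op when there is no work.

```java
// Illustrative Scheduler sketch (hypothetical names, not the thesis implementation).
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

interface Tickable {
    void tick();   // expected to be a cheap no-op when there is no work
}

public final class SchedulerSketch {
    private final ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor();

    /** Ticks all components at a single shared rate (e.g., 30 Hz). */
    public void start(List<Tickable> components, long rateHz) {
        long periodMs = Math.round(1000.0 / rateHz);            // 30 Hz -> ~33 ms
        timer.scheduleAtFixedRate(
                () -> components.forEach(Tickable::tick),       // Recognizer, Emoter, Renderer
                0, periodMs, TimeUnit.MILLISECONDS);
    }

    public void stop() {
        timer.shutdownNow();
    }
}
```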
The Physical Abstraction layer provides a sensing and actuation interface that separates the Haptic Creature's behavior — specifically, the Recognizer and Renderer — from its mechatronics. For example, as we will describe in greater detail in Section 4.3.4, the Renderer manipulates an ear abstractly through a volume parameter rather than directly with a servo position. This has the advantage of allowing the mechatronics of the ear to change — e.g., substituting a motor for the servo — without any need to modify the Renderer class.

The remaining two layers comprise the low-level sensing and actuation framework for the host software system. The Transducer Bridge layer provides abstract representations of the Haptic Creature's transducers, thereby presenting a uniform interface to each specific transducer type — currently, Accelerometer, Motor, PressureSensorMesh, and Servo. The Transducer Implementation layer then provides the corresponding implementations specific to a particular mechatronics platform. This framework decouples the classes in the Physical Abstraction layer from the low-level implementation of each transducer, thereby allowing the underlying implementation to vary without affecting other parts of the system. Currently, the robot's low-level mechatronics are managed solely by our PIC microcontroller platform; however, this framework easily affords the swapping and even intermixing of a variety of alternate low-level solutions.

Figure 4.7: Host software architecture depicting primary classes with membership in one of four software layers — Central Nervous System (Scheduler, Recognizer, Emoter, Renderer), Physical Abstraction (Skin, Ear, Lung, PurrBox), Transducer Bridge (PressureSensorMesh, Servo, Accelerometer, Motor), and Transducer Implementation (PressureSensorMeshImpl, ServoImpl, AccelerometerImpl, MotorImpl). The Central Nervous System layer manages the robot's high-level behavior, while the remaining layers provide increasing levels of specificity for the robot's sensing and actuation.

In the remainder of this section, we describe each of the Haptic Creature's components (Figure 4.6) — Sensing, Gesture Recognizer, Emoter, Physical Renderer, and Actuation — while also providing further detail from the primary classes of the host software (Figure 4.7).

4.3.1  Sensing

The Sensing component, as the name implies, handles those aspects of the robot that deal with sensing information from the real world. Specifically, it interfaces with the touch and movement sensors via the control hardware (Section 4.2.5). This component does little interpretation of the data, save simple filtering and normalization. The Skin class (Physical Abstraction layer) represents the entirety of the current sensing infrastructure and is composed of two classes from the Transducer Bridge layer. The PressureSensorMesh encapsulates the touch sensor data, which is normalized within the range [0, 1023] and referenced via a row and column index. The Accelerometer encapsulates the movement data, which is normalized within the range [-512, 512] and referenced via an axis index.

4.3.2  Gesture Recognizer

The Gesture Recognizer component queries the Sensing component and constructs an initial model of the physical data. Its function is to manage the variety of sensor information so as to provide a cohesive view. One example would be the array of pressure sensors that, when monitored, allows determination of direction and speed of movement along with pressure intensity; a simplified sketch of this kind of computation follows.
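The sketch below is illustrative only — the array shapes and method names are hypothetical, not the thesis Gesture Recognizer — and estimates stroke direction and speed by tracking the pressure-weighted centroid of the normalized mesh between two successive frames.

```java
// Illustrative sketch (hypothetical API): estimates direction and speed of a
// stroke from two successive frames of normalized FSR values in [0, 1023].
public final class StrokeEstimateSketch {

    /** Pressure-weighted centroid {row, col, totalPressure} of one frame, or null if untouched. */
    static double[] centroid(int[][] pressure) {
        double sum = 0, r = 0, c = 0;
        for (int i = 0; i < pressure.length; i++) {
            for (int j = 0; j < pressure[i].length; j++) {
                sum += pressure[i][j];
                r += i * (double) pressure[i][j];
                c += j * (double) pressure[i][j];
            }
        }
        return sum == 0 ? null : new double[] { r / sum, c / sum, sum };
    }

    /** Returns {rowVelocity, colVelocity, meanIntensity}, velocities in sensor cells per second. */
    static double[] strokeEstimate(int[][] prev, int[][] curr, double dtSeconds) {
        double[] a = centroid(prev), b = centroid(curr);
        if (a == null || b == null) return null;              // nothing touching the skin
        return new double[] {
            (b[0] - a[0]) / dtSeconds,                         // movement along rows
            (b[1] - a[1]) / dtSeconds,                         // movement along columns
            (a[2] + b[2]) / 2.0                                // crude pressure intensity
        };
    }
}
```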
The Gesture Recognizer component, in turn, builds a higher-order model of the input data. An example would be distinguishing between a moderate stroke 61  4.3. Software and a firm massage. Both require monitoring the direction, speed, and pressure intensity across a range of sensors; however, this component also interprets these values such that an evaluation of the intention of the user can be determined. A functioning version of the Gesture Recognizer component was not crucial to our thesis, because it was possible to conduct the related study (Chapter 7) simulating its capabilities. Furthermore, a fully functioning version would have been a major undertaking that was beyond the scope of our thesis, so we implemented only the infrastructure as a placeholder for future work. The Recognizer class (Central Nervous System layer) represents the host software for the Gesture Recognizer component. At present, this class manages the interface to the Sensing component, so it can query for sensor data. However, the Renderer class does not apply any additional processing beyond the ability to record the data in an external file for development and testing purposes. This recorded sensor data was utilized in an offline, proof of concept Gesture Recognition Engine (GRE) developed by Chang et al. (2010) [27]. The results of the user study presented in Chapter 6 — particularly the likely gestures; profiles of human touch gestures; and higher-level intents — will directly inform the future development of this component.  4.3.3  Emoter  The Emoter component represents the underlying emotional state of the Haptic Creature. This state is affected either externally through information from the Gesture Recognizer component or by means of its own internal mechanisms — e.g., temporal considerations. One example could be that a gentle stroke elicits a pleased state, then the Emoter component gradually decays into a neutral state shortly after this interaction ceases. This component itself has no knowledge of the Gesture Recognizer implementation and only cursory knowledge of the Physical Renderer component (necessary for change notification). This allows the model to focus on the domain-specific information of the system without being directly concerned with how it is getting its information or how its state is being presented.  62  4.3. Software The Emoter class (Central Nervous System layer) represents the host software for the Emoter component. As the Gesture Recognizer component was not yet fully developed, the current implementation of the Emoter class was not affected by its inputs from the Gesture Recognizer component. Also, while the Emoter class receives regular timing notifications from the Scheduler class, it does not yet implement temporal considerations. The current version of the Emoter class focuses solely on the encapsulation of emotional state, which we describe in more detail next, and change notification thereof. The results of the user study presented in Chapter 6 — particularly the higherlevel intents as well as expectations of the Haptic Creature’s emotional response — will directly inform the advancement of the Emoter component. Future directions for this component are detailed in Section 8.4.1. Affect Space In Section 2.1.1, we presented the discrete and dimensional models of emotion, which are the predominant theories in psychology. 
For the Haptic Creature, we chose to design its emotion model following the dimensional approach, as it provided a straightforward framework with which to parameterize the robot's behavior. Furthermore, precedent for this approach already exists within socially interactive robotics (e.g., [18, 138, 157]). We designed the Haptic Creature's emotion model in accordance with the two-dimensional, bipolar affect space adapted from Russell [130, 136, 192] (Figure 4.8). Conceptually, the horizontal dimension describes the robot's valence — unpleasant vs. pleasant — while the vertical dimension represents the robot's arousal — deactivated vs. activated. Its current emotional state, therefore, is defined by specifying a point (v, a) in this affect space, where each dimension is within the range [-1.0, 1.0].

Figure 4.8: The Haptic Creature's affect space: a two-dimensional, bipolar model of emotion (adapted from Russell). Valence ranges from unpleasant to pleasant; arousal ranges from deactivated to activated. The nine regions, with Russell's emotion labels quoted, are Unpleasant-Activated ("distressed"), Activated ("aroused"), Pleasant-Activated ("excited"), Unpleasant ("miserable"), Neutral ("neutral"), Pleasant ("pleased"), Unpleasant-Deactivated ("depressed"), Deactivated ("sleepy"), and Pleasant-Deactivated ("relaxed").

4.3.4  Physical Renderer

The Physical Renderer component is responsible for the higher-order, physical manifestation of the internal state of the Haptic Creature. This component listens for changes in the Emoter component, then translates the results into an orchestrated manipulation of the effectors. One example might be that when the robot moves into a pleased state, its breathing response adjusts to very soft, rhythmic in/out motions while it produces a similar "purr" that can be felt.

The Renderer class (Central Nervous System layer) represents the host software for the Physical Renderer component. This class provides two distinct functions: emotion transitioning and expression control. When the Emoter component updates the Haptic Creature's emotional state, the Renderer class must smoothly transition from the emotion actively being expressed to the new emotion. The speed at which this occurs is determined by the valenceReactiveness and arousalReactiveness properties, which specify the time (milliseconds) to transition for the respective affect dimensions. This functionality should not be confused with the temporal considerations presented with the Emoter component above. Rather, these properties exist simply to control how quickly the Renderer responds to changes in emotional state, thus ensuring organic physical transitions.

For a particular emotion, the Renderer class also must manage the physical expression. This is accomplished through a suite of software manipulators, one for each of the Haptic Creature's effectors: EarManipulator, LungManipulator, and PurrBoxManipulator. Each manipulator is configured via rendering parameters, which we detail next.

Rendering Parameters

The manner in which the Haptic Creature displays a particular emotional state is described through a series of key expressions located at specific points in the affect space. A key expression provides a detailed description of the behavior in the form of specific values for each actuator's rendering parameters. If the robot's current emotional state does not coincide with a key expression, then the parameters are interpolated from nearby key expressions; one possible interpolation scheme is sketched below.
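The thesis does not specify the exact interpolation method, so the following sketch should be read as one plausible scheme — bilinear interpolation over the enclosing cell of a 3×3 key-expression grid — with hypothetical names throughout. The example values are the lung rates that appear later in Table 5.1.

```java
// Sketch of one plausible interpolation scheme (bilinear over the enclosing cell
// of a 3x3 key-expression grid); not the thesis implementation.
public final class KeyExpressionInterpolationSketch {

    // keyValue[row][col]: one rendering parameter at the nine key expressions;
    // rows = arousal {-1, 0, +1}, cols = valence {-1, 0, +1}.
    static double interpolate(double[][] keyValue, double valence, double arousal) {
        double v = clamp(valence), a = clamp(arousal);
        int c0 = v < 0 ? 0 : 1, r0 = a < 0 ? 0 : 1;         // lower-left key of the cell
        double tv = v < 0 ? v + 1 : v;                       // position within the cell, [0, 1]
        double ta = a < 0 ? a + 1 : a;
        double low  = lerp(keyValue[r0][c0],     keyValue[r0][c0 + 1],     tv);
        double high = lerp(keyValue[r0 + 1][c0], keyValue[r0 + 1][c0 + 1], tv);
        return lerp(low, high, ta);
    }

    private static double lerp(double x, double y, double t) { return x + (y - x) * t; }

    private static double clamp(double x) { return Math.max(-1.0, Math.min(1.0, x)); }

    public static void main(String[] args) {
        // Lung rate (bpm) at the nine key expressions, low-to-high arousal rows (Table 5.1).
        double[][] lungRate = { {15, 15, 15}, {42.5, 42.5, 42.5}, {70, 70, 70} };
        // A mildly pleasant, moderately aroused state -> roughly 56 bpm.
        System.out.println(interpolate(lungRate, 0.3, 0.5));
    }
}
```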
This interpolation also allows for tweening values so that the robot may smoothly transition from one emotional state to another. The individual rendering parameters used to define the behavior for each of the Haptic Creature’s actuators will be described here in turn. The specific values  65  4.3. Software used for these parameters, and modifications thereof, are described in Sections 5.1 and 7.1. Ears The two ears can be controlled independently of each other in the single dimension of stiffness. They vary in firmness in a manner not visually perceptible but can be felt when the human squeezes them. Ear stiffness is specified by means of a volume parameter, which ranges from 0% (limp) to 100% (stiff). Lungs The Haptic Creature’s lungs modulate its manner of breathing through four parameters. Rate is defined as breaths-per-minute (bpm). Bias controls the symmetry of each breath by specifying the percentage that is dedicated to the inhalation phase, from 0% (all exhale) to 100% (all inhale) — for example, a bias of 25% would allocate 1/4 of each breath to the inhale and 3/4 to the exhale. Rest (milliseconds) allows for a pause at the end of inhalation and/or exhalation for each breath, and is defined independently for each. Volume defines the minimum and maximum position for each breath. Purr Box The Haptic Creature’s purr box controls the presentation of a modulated vibrotactile purr. Waveform determines the type of wave generated: pulse, sawtooth, reverse sawtooth, sine, triangle, or null. On duration and off duration (milliseconds) define the wave’s duty cycle. Amplitude, specified as percentages from 0% to 100%, define the wave’s minimum and maximum amplitude.  4.3.5  Actuation  The Actuation component is tightly coupled with the Physical Renderer component and is charged with directly controlling the robot’s effectors. Specifically, this component interfaces with the various motors via the control hardware (Section 4.2.5). It does little interpretation of the information, save adjusting normalized data appropriately for the individual hardware devices. The Ear, Lung, and PurrBox classes (Physical Abstraction layer) comprise the current actuation infrastructure. Each of these classes encapsulates an ap-  66  4.4. Summary propriate actuator abstraction from the Transducer Bridge layer: Ear→Servo, Lung→Servo, PurrBox→Motor. The Servo class controls the position of a servo motor via an angle property ([0.0, 180.0]). The Motor class controls the speed of a motor via a speed property ([0.0, 1.0]) and the direction of rotation via a rotation property (CW, CCW). The current implementation of the control hardware, however, does not allow for specification of rotation, so this property is unused at present. The Purr Box is the only hardware currently controlled through the Motor class and, at present, does not have need of the rotation property.  4.4  Summary  In this chapter, we presented the design and development of the Haptic Creature robot, a platform for our investigation of affective touch in social human-robot interaction. This robot was subsequently employed in three user studies. The study in Chapter 5 examined the manner and ability of the Haptic Creature to communicate its emotional state through touch. The study presented in Chapter 6 investigated affective touch originating from the human. Finally, the study in Chapter 7 examined the emotional influence on the human of affective touch interactions.  
67  Chapter 5  Robot Affect Display With the results from our preliminary study (Chapter 3) and subsequent development of the Haptic Creature robot (Chapter 4), we move on in this chapter to present the first of the three interaction decomposition user studies as introduced in Section 1.4.3. This study examined the manner and success of the Haptic Creature in communicating its emotional state through touch to the human (Figure 5.1, unshaded cells 3→4). Referring back to our introductory scenario in Chapter 1, the work presented here can be seen in Roi’s varied emotional expressions — e.g., his pronounced purring when excited; his slow, rhythmic breathing when relaxed; or his half-stiffened ears when happy — and his ability to convey these to Stella. Expression  Recognition  1  2  4  3  HUMAN  Recognition  CREATURE  Expression  Figure 5.1: Affective touch interaction loop between human and Haptic Creature. Adapted from Figure 1.2 to highlight affect display from robot.  68  5.1. Affect Display Design The work presented in this chapter encompasses two phases of our research (Figure 1.1): the iterative development of the Haptic Creature’s manner of affect display (third phase), and the robot affect display user study (fourth phase). We begin this chapter with the design of the Haptic Creature’s affect display. In Chapter 4, we described the underlying system for the robot’s affect display, while here we will present how this has been configured for specific emotional expressions. Animal models served as the initial reference, then the robot’s expressions were refined over successive iterations of informal user tests. Ultimately, the Haptic Creature’s breathing rate and ear stiffness were used to convey its state of arousal, while the asymmetry of breathing and purring communicated its valence. The user study, which is presented next in Section 5.2, was designed to assess the overall effectiveness of the Haptic Creature’s affect display while providing insight towards specific areas for improvement. Participants were asked to recognize a variety of the Haptic Creature’s affective touch expressions, which were selected from across the extents of its emotional space. Also, through self-report, participants recorded their emotional state at various points throughout the study. We continue on to the results of the study in Section 5.3. The robot was shown to be effective in communicating its state of arousal but less for valence, with no influence of gender or experience with animals. We also found that participants’ arousal decreased as a result of the interaction. We conclude the chapter with a discussion of the results, where we recommend the use of breathing depth along with modified parameters for breathing rate, breath asymmetry, and purring, as a means of improving valence communication.  5.1  Affect Display Design  Section 4.3.3 described the Haptic Creature’s emotion model as represented by an affect space composed of two dimensions: valence and arousal. In turn, Section 4.3.4 introduced key expressions coupled with actuator rendering parameters as a means of defining the Haptic Creature’s behavior. For the study presented in this chapter, the Haptic Creature’s affect display was described by means of nine key expressions located within its affect space (Figure 5.2, diamonds): three levels of arousal — high, medium, and low — each matched with three levels of valence 69  5.1. 
Affect Display Design — negative, neutral, and positive. Table 5.1 presents the key expressions' settings for each rendering parameter.

Figure 5.2: The Haptic Creature's affect space, adapted from Figure 4.8 to highlight key expressions. Diamonds signify the locations of the nine key expressions that define the robot's affect display.

Animal models served as the initial reference for the robot's emotion display; however, the goal has never been to create a direct replacement for any particular animal. These models provided a useful starting point for many of the actuator parameter settings, which were then tuned through informal user tests in which participants provided guided verbal feedback on the robot's affect display. Refinements were made that altered the range of expressions or their manner, and this procedure was repeated over several iterations. Subsequently, we conducted a mini-study with nine participants to examine how the robot performed under more experimental conditions. Results and feedback again informed further alterations. The remainder of this section details the design of the actuator rendering parameters used in the user study profiled in this chapter.

5.1.1  Ears

The ears were utilized solely to convey arousal, with stiffness proportional to arousal level: low arousal was represented by limp ears and high arousal by fully stiffened ones. Both ears always presented the same stiffness. This approach was intended as a non-visual analog to an animal perking its ears in an alerted state [32]. Most pilot participants understood this concept, although at least one imagined non-stiff ears to connote positive valence.

5.1.2  Lungs

The Haptic Creature's breathing was tuned to convey both arousal and valence. Arousal was rendered through breathing rate, with faster rates corresponding to high arousal. The rates were normalized to those of domestic cats, dogs, and rabbits [4, 31, 127]; however, in cases of extreme arousal the breathing rate for these animals can exceed 100 breaths-per-minute (bpm). Piloting allowed us to adjust the top rate downward to a convincing level, while not overtaxing the robot's lung mechanics, to arrive at a range of 15–70 bpm.

The valence component of the lung display was determined by the symmetry of breathing: equal durations (50% bias) for inhalation and exhalation corresponded to positive valence, while a quicker inhalation (down to 25% bias) signified negative valence. Domestic animal respiration is actually the opposite: in a negative state, such as stress or disease, inhalation will be notably slower. We chose to diverge from the animal models because a quick motion outward by the rib cage striking the human's hand was intended to impart a negative feeling. A graphical representation of the change in lung volume for the various key expressions can be seen in Figure 5.3.

Table 5.1: Key Expressions: arousal and valence categorization, actuator rendering parameters.

High arousal              Distressed      Aroused         Excited
  Ears      Vol (%)            100             100             100
  Lungs     Rate (bpm)          70              70              70
            Bias (%)            25              37              50
            Vol (%)          30–90           30–90           30–90
  Purr Box  Wave                 —               —            Sine
            On / Off (ms)        —               —       728 / 128
            Ampl (%)             —               —            0–33

Medium arousal            Miserable       Neutral         Pleased
  Ears      Vol (%)             50              50              50
  Lungs     Rate (bpm)        42.5            42.5            42.5
            Bias (%)            25              37              50
            Vol (%)          20–85           20–85           20–85
  Purr Box  Wave                 —               —            Sine
            On / Off (ms)        —               —       706 / 706
            Ampl (%)             —               —            0–26

Low arousal               Depressed       Sleepy          Relaxed
  Ears      Vol (%)              0               0               0
  Lungs     Rate (bpm)          15              15              15
            Bias (%)            25              37              50
            Vol (%)           0–70            0–70            0–70
  Purr Box  Wave                 —               —               —

Key expressions are ordered in correspondence with the Haptic Creature's affect space (Figure 4.8). The Lungs Rest parameter, for both inhalation and exhalation, is always 0 milliseconds.

Figure 5.3: Change in lung volume over a four-second period for the key expressions in Table 5.1. Shaded regions highlight the breath inhalation phase — a bias greater than 50% favors inhalation, a bias of 50% is symmetric, and a bias less than 50% favors exhalation. Panels (a)–(i) show, in turn, Distressed, Aroused, and Excited (70.0 bpm; bias 25%, 37%, 50%; volume 30–90%); Miserable, Neutral, and Pleased (42.5 bpm; bias 25%, 37%, 50%; volume 20–85%); and Depressed, Sleepy, and Relaxed (15.0 bpm; bias 25%, 37%, 50%; volume 0–70%).

Figure 5.4: Change in purr amplitude over a four-second period for the key expressions in Table 5.1. Panel (a) Pleased: sine wave, on/off 706 / 706 ms, amplitude 0–26%; panel (b) Excited: sine wave, on/off 728 / 128 ms, amplitude 0–33%.

5.1.3  Purr Box

The main intent of purring was to convey positive valence, as in a cat in a pleased state, though with only the vibratory component. A purr was present in the pleasant and pleasant-activated conditions. Purring was originally in the pleasant-deactivated condition as well, but piloting exposed a confound with arousal: participants consistently ranked the arousal dimension much higher whenever purring was present, especially in the low-arousal case.

The Hapticat prototype described in Chapter 3 was able to convey negative emotions through its purr. Intended to represent the vibration of a growl, it had a staccato-like pulse wave of higher amplitude than its positive valence purr. Though the purr boxes in the two versions are mechanically related, the physical composition of their bodies differs enough that using similar parameters in the current Haptic Creature did not appear to convey negative valence: both types of purring were interpreted as positive. As a result, it was decided to focus only on using purring for positive emotions for this study; however, investigation of a negative valence purr is a topic for future work (see Section 8.4.2).

The purr was also used to convey arousal, though with less priority. An increase in arousal was manifested by a slightly increased amplitude for the purr wave along with a marked decrease in the delay between waves.
Too great an amplitude, however, was found unpleasant by pilot participants with smaller body types, so the intensity was iteratively tuned to a noticeable range that was not overpowering. A graphical representation of the change in purr volume for the various key expressions can be seen in Figure 5.4.  5.2  User Study  Our study was designed to assess the overall effectiveness of the Haptic Creature’s affect display while providing insight towards specific areas for improvement. Its approach evolved from a succession of pilot studies as briefly described in Section 5.1. We initially employed Barrett and Russell’s affect measure [8], which asks participants to rank twelve emotion adjectives on a five-level Likert scale. This measure proved to be effective at capturing the perceived arousal and valence of the robot; however, the nuances of the data made it difficult to discern if participants were perceiving the specific state intended — e.g., excited, depressed. Traditional studies on recognition of emotion in facial expression, on the other hand, administer forced-choice responses from a list of emotion labels. These have the advantage of pinpointing a specific emotion, however, they tend to focus on the discrete nature of the emotion [35]. We developed a hybrid approach that uses both forced-choice emotion labeling as well as assessment of perceived levels of arousal and valence. The intent was to allow coarse grain categorization from the labels while also provide fine grain data 75  5.2. User Study  Figure 5.5: Setup for robot affect display study. from the dimensional responses (see Section 5.2.4), with the added advantage that the dual responses provide confirmation of each other.  5.2.1  Participants  Data from 32 individuals (50% female) were used in the study. Recruited via fliers, online classifieds, and mailing lists, each was compensated CAD$10 for participation. Ages ranged from 19 to 50 (M = 27.5, SD = 9.37), and all self-identified as native English speakers (81% from North America). None had previously participated in studies with the Haptic Creature.  5.2.2  Study Setup  The study was conducted in a soundproof observation studio that housed a desk and an adjustable office chair. Atop the desk was a 17-inch (1280 × 1024 pixels) LCD monitor, a keyboard, and a computer mouse. All study software, including control of the Haptic Creature, was written in Java and executed on an Intel-based PC running the Gentoo [58] Linux operating system (Section 4.3).  76  5.2. User Study The study participant sat in the chair and faced the monitor on the desk. The mouse was placed on the side that she self-identified as her mouse hand. The Haptic Creature initially was situated in the participant’s lap with the robot’s backside initially facing the participant’s non-mouse hand; however, the participant was allowed to adjust the Haptic Creature’s position throughout the study, as she saw fit. The participant wore earmuffs to mask any extraneous sounds that may be generated by the robot (Figure 5.5).  5.2.3  Stimuli  The Haptic Creature presented nine different emotional renderings in the study, which corresponded directly with the nine key expressions of the affect display design (Figure 5.2, diamonds). These stimuli were chosen because they provide good separation by displaying minimum, maximum, and average states for both arousal and valence.  
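For concreteness, the nine stimuli could be enumerated as (valence, arousal) points in the affect space of Section 4.3.3. The ±1 and 0 coordinates below are an assumption based on "minimum, maximum, and average states"; the thesis specifies their placement only qualitatively, and the enum is a hypothetical sketch rather than study code.

```java
// Hypothetical enumeration of the nine stimuli as (valence, arousal) points;
// the numeric coordinates are assumed, not taken from the thesis.
public enum StimulusSketch {
    DISTRESSED(-1, +1), AROUSED(0, +1), EXCITED(+1, +1),
    MISERABLE(-1, 0),   NEUTRAL(0, 0),  PLEASED(+1, 0),
    DEPRESSED(-1, -1),  SLEEPY(0, -1),  RELAXED(+1, -1);

    public final double valence, arousal;

    StimulusSketch(double valence, double arousal) {
        this.valence = valence;
        this.arousal = arousal;
    }
}
```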
5.2.4  Response Format  The participant provided two categories of responses each time she assessed the robot’s emotional state: (1) a specific emotion label, and (2) the perceived valence and arousal levels. In addition, the participant also reported a separate confidence score for both responses, which was recorded on a five-level Likert scale that ranged from not at all confident (guessed) to very confident. For the emotion label selection, the participant made a forced-choice response from a provided list of 16 items (Table 5.2). Six options were Ekman’s basic emotions [35]: afraid, angry, disgusted, happy, sad, and surprised. Nine were from Russell’s circumplex model of affect [8, 130] and Affect Grid [136]: aroused, depressed, distressed, excited, miserable, neutral, pleased, relaxed, and sleepy. The emotion words were presented in alphabetized order with a final option, none of these to address shortcomings of forced-choice responses for perceived emotions [52, 131]. The decision to include both Ekman and Russell emotion labels was to increase the overall richness of available choices by combining words from research on  77  5.2. User Study Table 5.2: Emotion label list for assessing the Haptic Creature’s emotional state. Afraid∗ Disgusted∗ Miserable Sad∗  Angry∗ Distressed Neutral Sleepy  Aroused Excited Pleased Surprised∗  Unmarked labels are from Russell; † avoids artificial agreement.  Depressed Happy∗ Relaxed None Of These† ∗  from Ekman;  discrete emotions (Ekman) with those from research on the dimensional nature of emotions (Russell). To specify perception of the robot’s valence and arousal, the participant made selections on a seven-level version of Lang’s Self-Assessment Manikin (SAM) rating scales [96]. Instructions for using the SAM scales were adapted from Bradley and Lang (2007) [15]; however, the order of each scale was reversed such that the valence scale was labeled “Unhappy versus Happy” and the arousal scale was labeled “Calm versus Excited”. This adjustment in ordering ensured consistency among all scales used in the study, which were all ordered negative-to-positive or low-to-high. Furthermore, during pilot testing, the original ordering resulted in occasional data entry errors, while the reversed version appeared to present participants a more natural ordering. The SAM images were from PXLab [78] and measured 69x74 pixels. To increase visibility of the facial expressions, we used the portrait versions of the valence images [164, p. 105], rather than the more traditional full figure. An example of the SAM images used can be seen in Section C.5 (p. 267). The SAM format proved more efficient to administer compared to our original use of the Barrett-Russell measure, and its pictorial representation of affect avoided confusion with the emotional labeling response.  5.2.5  Procedure  The study took approximately 60 minutes for the participant to complete. The facilitator was not present in the room with the participant while the study was being conducted. This section presents details of the various steps in the study. 78  5.2. User Study Instructions Instructions provided an overview of the research being conducted; an explanation of the Haptic Creature and information on interacting with it; and the study protocol, including a detailed explanation of the response format. The complete instructions are documented in Section C.4. 
Practice Session During a short (approximately 180 seconds) familiarization session, all nine stimuli were demonstrated by the robot in a different random order for each participant. Each stimulus was presented for 20 seconds along with a visual countdown timer. The participant was instructed to interact with the Haptic Creature but was not required to assess its emotional state. Haptic Creature Affect Assessment The main portion of the study consisted of the Haptic Creature rendering the nine simulated emotional states and, for each, the participant recording her emotion label and arousal/valence SAM scale assessments. No time restriction was imposed, and the robot displayed its current emotional state until a response was recorded. Stimuli were presented in three sets, with each set consisting of the nine stimuli repeated two times. Thus, each unique stimulus appeared six times for a total of 54 trials — 9 stimuli x 2 repetitions x 3 sets. The order of stimuli in each set was randomized for each participant, and a two minute rest break came between sets. Participant Affect Report Before the initial set and upon completion of each set the participant reported her own current emotional state. Responses were collected by means of the SAM scales — no emotion word choices were presented. Post-Study Questionnaire A questionnaire collected participant demographic information, background with various animal types, general feedback on the Haptic Creature, and strategies em79  5.3. Results ployed in assessing its emotional state. The Participant ranked her experience interacting with a variety of animal types on a five-level scale: none, up to 1 year, 2–3 years, 4–5 years, and more than 5 years. The complete questionnaire is documented in Section C.6.  5.3  Results  We first present results related to the ability of the Haptic Creature to successfully communicate specific intended emotions as demonstrated by (a) participants’ choice of emotion labels as best descriptors of particular states, and (b) their ratings of valence and arousal. We then describe our data with regards to participants’ self-reported affect states and implied changes thereof. Finally, we present select results from the post-study questionnaire.  5.3.1  Recognition Scoring  As presented in Section 5.2.4, for each emotion presented by the Haptic Creature, participants made a forced-choice response from among 16 items (Table 5.2) — 15 emotion labels plus none of these. Russell’s emotion labels are dimensional in nature so have direct mappings to the stimuli presented (Figure 5.2). Ekman’s labels, on the other hand, do not have a direct mapping but may overlap with Russell’s labels. To address this, the perceived arousal and valence ratings were analyzed for each label choice. Labels where both the arousal and valence did not statistically differ were considered equivalent. Figure 5.6 displays the mean perceived arousal and valence ratings broken down by each emotion label choice. The emotion label equivalencies are shown in Table 5.3 — only Ekman’s surprised was not found to be equivalent with any of Russell’s emotion labels. A recognition score was then computed by counting all the occurrences when an emotion label choice matched the intended emotion presented by the Haptic Creature. Any Ekman label that corresponded to a Russell label was counted as if the Russell label was chosen. This recognition score, in turn, was converted to a percentage. 80  Mean  5.3. 
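The scoring procedure just described lends itself to a short sketch: Ekman labels are first folded into their Russell equivalents (per the equivalencies reported in Table 5.3), and matches against the intended emotion are then counted and converted to a percentage. The class and method names below are hypothetical, not the analysis code used in the thesis.

```java
// Illustrative sketch of the recognition-score computation (hypothetical names).
import java.util.List;
import java.util.Map;

public final class RecognitionScoreSketch {

    // Equivalencies from Table 5.3; "surprised" has no Russell equivalent and is left unmapped.
    private static final Map<String, String> EKMAN_TO_RUSSELL = Map.of(
            "sad", "depressed",
            "afraid", "distressed", "angry", "distressed", "disgusted", "distressed",
            "happy", "pleased");

    static String fold(String label) {
        return EKMAN_TO_RUSSELL.getOrDefault(label, label);
    }

    /** Percentage of trials whose (folded) chosen label matches the intended emotion. */
    static double recognitionScore(List<String> chosen, List<String> intended) {
        int hits = 0;
        for (int i = 0; i < chosen.size(); i++) {
            if (fold(chosen.get(i)).equals(intended.get(i))) hits++;
        }
        return 100.0 * hits / chosen.size();
    }
}
```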
Results Arousal Valence  7 6 5 4 3 2 1 None Of These Surprised Sleepy Sad Relaxed Pleased Neutral Miserable Happy Excited Distressed Disgusted Depressed Aroused Angry Afraid  Emotion Label Figure 5.6: Mean perceived arousal and valence ratings by emotion label chosen. Overall recognition scores for the 32 participants ranged from 17% to 52% (M = 30%, SD = 10%). A 2x3 between-groups ANOVA was conducted to examine differences in recognition scores between gender and experience with animals. There was no statistically significant interaction effect, F(2, 26) = 1.11, p = .35, η p2 = .08. Both the main effect for gender, F(1, 26) = .57, p = .46, η p2 = .02, and the main effect for animal experience, F(2, 26) = .01, p = .99, η p2 = .00, were not statistically significant. The animal experience factor was computed through the animal background information gathered in the post-study questionnaire as described in Section 5.2.5. The relevant animal types for the present analysis were cats, dogs, and rabbits because they most closely resemble the morphology and interaction of the Haptic Creature. The ratings of the three animal types were summed, and participants were then ranked into one of three experience categories: low (3–6), moderate (7–11), and extensive (12-15). In addition to participants’ recognition scores, the frequency of emotion label choices was examined for each stimulus presented. The binomial statistical test was conducted for each condition in order to determine that the expected choice was selected at a frequency significantly greater than chance. We set chance at 81  5.3. Results Table 5.3: Equivalency mappings between Russell and Ekman emotion labels. Russell  Ekman  Depressed  Sad  Distressed  Afraid Angry Disgusted  Pleased  Happy  —  Surprised  25% following from Hertenstein et al. (2006) [75], in which participant emotion selection was considered to be differentiated among positive and negative valence as well as high and low arousal. These results are presented in Table 5.4. As with the recognition scores, any Ekman label that corresponded to a Russell label (Table 5.3) was counted as if the Russell label was chosen. For reference, Table 5.5 presents the frequency breakdown for any of these aggregate emotion labels presented in Table 5.4.  5.3.2  Perceived Arousal and Valence Ratings  Participants also ranked the perceived arousal and valence for each rendering presented by the Haptic Creature. Figure 5.7 charts the means for each by condition. These data were evaluated by means of one-way repeated measures ANOVA for the condition factor. Through post hoc analysis, we examined any resultant homogeneous subsets. For arousal, the one-way repeated measures ANOVA with Greenhouse-Geisser correction for violations of sphericity yielded a statistically significant difference among the nine conditions, F(6.37, 1216.87) = 839.29, p < .0005, η p2 = .82. Multiple comparisons via Tukey’s HSD computed five homogeneous subsets as shown in Table 5.6. The expected outcome, however, was for three: one for each level of arousal.  82  5.3. Results  Table 5.4: Frequency of emotion label chosen for each condition. 
Table 5.4: Frequency of emotion label chosen for each condition.

  Distressed condition:  Distressed† 70b (expected); Excited 10; Aroused 8; Surprised 7
  Aroused condition:     Distressed† 56; Excited 17; Aroused 15 (expected); Pleased‡ 6
  Excited condition:     Distressed† 43; Excited 28 (expected); Aroused 12; Pleased‡ 11
  Miserable§ condition:  Distressed† 23; Pleased‡ 18; Aroused 16; Neutral 15; Excited 9
  Neutral condition:     Neutral 25 (expected); Pleased‡ 24; Distressed 14; Aroused 13; Relaxed 8
  Pleased condition:     Pleased‡ 44b (expected); Distressed† 26; Aroused 9; Excited 6
  Depressed condition:   Sleepy 33; Relaxed 31; Depressed∗ 14 (expected); Neutral 11
  Sleepy condition:      Sleepy 42b (expected); Relaxed 30; Depressed∗ 17; Neutral 7
  Relaxed condition:     Sleepy 49; Relaxed 31a (expected); Depressed∗ 10

  Values are percentages; only frequencies greater than 5% are listed. Conditions are ordered in correspondence with the Haptic Creature's affect space (Figure 4.8), and the expected choice for each condition is marked "(expected)".
  ∗ Depressed includes Sad. † Distressed includes Afraid, Angry, and Disgusted. ‡ Pleased includes Happy.
  § The expected label for Unpleasant was Miserable (2%).
  a p < .05. b p < .0005.

Table 5.5: Frequency breakdown for aggregate emotion labels in Table 5.4.

  Distressed condition:  Distressed (70%) = Angry (29%) + Distressed (24%) + Afraid (17%) + Disgusted (0%)
  Aroused condition:     Distressed (56%) = Distressed (20%) + Afraid (18%) + Angry (16%) + Disgusted (2%)
                         Pleased (6%) = Pleased (3%) + Happy (3%)
  Excited condition:     Distressed (43%) = Afraid (19%) + Distressed (14%) + Angry (9%) + Disgusted (1%)
                         Pleased (11%) = Happy (7%) + Pleased (4%)
  Miserable condition:   Distressed (23%) = Distressed (11%) + Angry (6%) + Afraid (4%) + Disgusted (2%)
                         Pleased (18%) = Pleased (11%) + Happy (7%)
  Neutral condition:     Pleased (24%) = Happy (15%) + Pleased (9%)
  Pleased condition:     Pleased (44%) = Pleased (25%) + Happy (19%)
                         Distressed (26%) = Afraid (16%) + Distressed (8%) + Angry (2%) + Disgusted (0%)
  Depressed condition:   Depressed (14%) = Sad (7%) + Depressed (7%)
  Sleepy condition:      Depressed (17%) = Sad (9%) + Depressed (8%)
  Relaxed condition:     Depressed (10%) = Depressed (6%) + Sad (4%)

5.3.2 Perceived Arousal and Valence Ratings

Participants also rated the perceived arousal and valence for each rendering presented by the Haptic Creature. Figure 5.7 charts the means for each by condition. These data were evaluated by means of a one-way repeated measures ANOVA for the condition factor. Through post hoc analysis, we examined any resultant homogeneous subsets.

Figure 5.7: Mean ratings for perceived arousal (a) and perceived valence (b), by condition (1–7 scales). In (a), dark bars are high arousal conditions, light are medium, and white are low; in (b), dark bars are negative valence conditions, light are neutral, and white are positive.

For arousal, the one-way repeated measures ANOVA with Greenhouse-Geisser correction for violations of sphericity yielded a statistically significant difference among the nine conditions, F(6.37, 1216.87) = 839.29, p < .0005, ηp² = .82. Multiple comparisons via Tukey's HSD computed five homogeneous subsets, as shown in Table 5.6. The expected outcome, however, was for three: one for each level of arousal.

Inspection of Table 5.6 reveals that subset 1 contains all the low arousal conditions; subsets 2–3 contain all the medium conditions and overlap on pleasant; and subsets 4–5 contain all the high conditions and overlap on unpleasant-activated. We computed the effect size for the non-overlapping conditions of subsets 2–3 (d = .27) and subsets 4–5 (d = .38). Both are "small" effect sizes, implying a statistical but not a practical difference. We therefore treat subsets 2–3 as a single subset, and likewise subsets 4–5.

For valence, the one-way repeated measures ANOVA with Greenhouse-Geisser correction for violations of sphericity also yielded a statistically significant difference among the nine conditions, F(5.79, 1105.63) = 29.62, p < .0005, ηp² = .13. Multiple comparisons via Tukey's HSD computed the expected outcome of three homogeneous subsets (Table 5.7); however, they do not represent one for each level of valence. The only discernible pattern in the table is that the first three conditions are high arousal conditions, which bears no relation to the valence levels.
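For reference, the effect sizes quoted in this subsection are not redefined in the text; the conventional forms, which are presumably what is meant, are partial eta squared for the ANOVAs and Cohen's d for a pair of condition means:

    \eta_p^2 = \frac{SS_{\text{effect}}}{SS_{\text{effect}} + SS_{\text{error}}},
    \qquad
    d = \frac{\bar{x}_1 - \bar{x}_2}{s_{\text{pooled}}},
    \qquad
    s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}.

Under Cohen's widely used benchmarks (d ≈ 0.2 small, 0.5 medium, 0.8 large), the values of .27 and .38 above fall in the small range, which is the basis for collapsing the overlapping subsets.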
Table 5.6: Homogeneous subsets for mean rating of perceived arousal.

  Subset 1: Relaxed 1.43, Sleepy 1.55, Depressed 1.67   (Sig .34)
  Subset 2: Neutral 3.77, Pleased 4.04                  (Sig .18)
  Subset 3: Pleased 4.04, Miserable 4.09                (Sig 1.00)
  Subset 4: Aroused 5.39, Distressed 5.67               (Sig .15)
  Subset 5: Distressed 5.67, Excited 5.75               (Sig 1.00)

  N = 192 per condition. Subsets are for α = .05. Rating scale ranged from 1–7 (calm–excited). Subsets 2 and 3 overlap at 4.04. Subsets 4 and 5 overlap at 5.67.

Table 5.7: Homogeneous subsets for mean rating of perceived valence.

  Subset 1: Distressed 2.67   (Sig 1.00)
  Subset 2: Aroused 3.35      (Sig 1.00)
  Subset 3: Excited 4.05, Miserable 4.11, Depressed 4.24, Neutral 4.33, Sleepy 4.38, Pleased 4.50, Relaxed 4.54   (Sig .06)

  N = 192 per condition. Subsets are for α = .05. Rating scale ranged from 1–7 (unhappy–happy).

5.3.3 Participant Affect State

By means of the SAM scales (Section 5.2.4), participants also reported their own emotional state four times during the study: before the initial Haptic Creature affect assessment set and upon completion of each set. Separate one-way repeated measures ANOVAs for arousal, F(3, 93) = 6.12, p = .00, ηp² = .17, and for valence, F(3, 93) = 3.10, p = .03, ηp² = .09, both found a statistically significant difference among these four self-assessments. Adjusted via the Holm-Bonferroni method, multiple comparisons were conducted between the baseline measurement and each subsequent report. The means, standard deviations, effect sizes, and statistically significant differences are presented in Table 5.8.

Table 5.8: Participant arousal and valence self-reports at specified times.

                  Arousal                  Valence
  Time            M      SD     η²         M      SD     η²
  Baseline        3.88   1.36   —          5.16   .99    —
  After Set 1     3.22∗  1.45   .17        4.88   .98    .08
  After Set 2     3.03∗  1.26   .35        4.78   .91    .13
  After Set 3     3.09∗  1.35   .28        4.78∗  1.01   .17

  N = 32. Arousal rating scale ranged from 1–7 (calm–excited). Valence rating scale ranged from 1–7 (unhappy–happy). ∗ Statistically significant difference (p < .05) from baseline.

5.4 Discussion

The primary goal of this study was to investigate how well the Haptic Creature communicates its emotional state through touch. Specifically, we considered whether the current settings for the robot's rendering parameters represent their intended affective state. As highlighted in Figure 5.7 as well as in Tables 5.6 and 5.7, overall the Haptic Creature seemed capable of communicating its level of arousal but was less effective at conveying valence. Details of these results provide information on ways to appropriately modify rendering parameters in order to improve the robot's overall ability to communicate. In addition, we began to explore how the interaction affects the emotional state of the human, with results showing a decrease in participant arousal.

5.4.1 Emotion Label Selections

An examination of Table 5.4 shows the Haptic Creature correctly communicated four of nine conditions: unpleasant-activated (70%), pleasant (44%), deactivated (42%), and neutral (25%). The least successful condition was unpleasant, as its emotion label, miserable, was chosen only 2% of the time. This condition had consistently been the most difficult for pilot participants to discern, and this also proved to be the case in the formal study. The perceived valence and arousal for miserable in Figure 5.7 appear valid for when the label was chosen; however, it was not chosen very often.

5.4.2 Effectiveness of Conveying Arousal

Visual inspection of the robot's perceived arousal in Figure 5.7(a) shows a clear stair-step pattern from conditions of high activation down to those of low activation.
The statistical analysis in Section 5.3.2 also confirms there are three homogeneous groups corresponding to the three arousal states. Breathing rate and ear stiffness were the main features meant to vary with arousal while being held constant along the valence axis. It appears that the settings for actuator rendering parameters related to arousal represented their intended affect state.  5.4.3  Ambiguity in Communicating Valence  Figure 5.7(b), on the other hand, does not reveal the same stair step pattern as for the perceived valence. We expected low ratings for negative valence conditions increasing up to those of positive valence. This ambiguity is similarly evident in the emotion label selections where, regardless of the condition’s valence, distressed dominates the high activation states and sleepy dominates the low activation ones (Table 5.4).  88  5.4. Discussion  5.4.4  Breathing’s Contribution to Valence  Breathing symmetry was one feature intended to convey valence. In the post-study questionnaire 71% of participants rated breathing symmetry as something they consciously used to assess the Haptic Creature’s emotional state while, in contrast, breathing rate and depth ranked 100% and 94% respectively. Furthermore, structured open-ended questions allowed participants to explain how they differentiated levels for arousal and valence. As expected, breathing rate predominated the answers for arousal. Surprisingly, however, it also appeared frequently in responses to valence: some mentioned fast breathing as positive valence but others felt it was negative. Inspection of the perceived valence in Figure 5.7(b) shows a decrease in all high arousal states — bars 1, 4, and 7 — when the breathing rate was fastest, and a similar pattern can be seen in the first three conditions of Table 5.7. This implies negative valence may have been inferred from rapid breathing. This aligns with models of domestic animal breathing, where increased respiration rates can imply sickness or distress; however, it is also noted that it can be the result of excitement or exercise [127]. Depth was one additional breathing factor mentioned by participants as conveying valence. For this study, however, the Haptic Creature’s depth of breathing changed based on arousal: the amount of displacement remained constant at around 70%-75% but both the minimum and maximum amplitude increased as arousal increased. Participants appear, instead, to have been using depth as cue for valence, with some suggesting shallow implied negative and deep conveyed positive valence. These responses provide useful insight for modifications to actuator rendering parameters to improve the robot’s affect display, particularly in respect to valence. Leveraging breathing rate for not only arousal is one approach. Using depth of breathing to convey valence rather than arousal is another possible modification. In addition, since participants did indicate breathing symmetry as something they considered, the related parameters can be adjusted. Controlled via the bias parameter, the current approach to symmetry always presents faster inhalation when breathing is asymmetric. It is possible, however, to also do the opposite, where the 89  5.4. Discussion inhale of a breath is slower than its exhalation. This approach could augment the current one by widening the expressive range for breathing symmetry. Any modifications to rate, depth, or symmetry of breath would, of course, require further evaluation as to their effectiveness. 
Finally, as noted in Table 5.1, the rest parameter was currently unused. This is yet another parameter that could be manipulated to affect the valence component.  5.4.5  Purring’s Contribution to Valence  Purring was another mechanism the Haptic Creature used to display its emotional state. Its main goal was to convey valence though, where present, the purring also varied with arousal. In particular, purring was rendered only in the pleasant and pleasant-activated conditions of this study. Inspection of Table 5.4 for these two conditions indicate that purring was effective in conveying the pleasant state but not pleasant-activated. In addition, distressed prevailed in the latter condition yet was also frequent in pleasant. Questionnaire responses reflect that some participants considered the purr, since it was vibrotactile rather than audible, to connote shaking or shivering. This is especially apparent in the pleasant-activated condition as some felt as if the purr was too strong; they noted that the increase in the intensity of the purr corresponded to an increase in excitement but also noted that if it was too strong it implied unhappy or fearful emotions. This was a surprising result since, as discussed in Section 5.1, pilot participants rarely found any purr to imply negative valence, even ones intentionally designed as such. Nonetheless, shaking or shivering provides a very useful metaphor from which to develop negative valence purring.  5.4.6  No Influence by Gender or Animal Experience  While the primary goal of this study was to examine the Haptic Creature’s effectiveness in communicating its emotional state, the study also investigated differences in recognition as a result of gender or prior experience with animals. The latter case in particular stems from the thought that humans with greater experience with animals might fare better (or, perhaps, worse) when assessing the robot’s 90  5.5. Summary emotional state. As the results in Section 5.3.1 show, however, there were no statistically significant differences noted for either gender or animal experience.  5.4.7  Interaction Decreases Participant Arousal  One of the research goals of this thesis is to investigate the influence of affective touch. The study presented in Chapter 7 directly explores this; however, this study afforded a chance to begin examining the question by asking participants to rate their own affective state at various points. Most notably, results in Section 5.3.3 show a statistically significant decrease in arousal with large effect size. It should be noted, however, that there was no control group — all those reporting interacted directly with an active Haptic Creature — so the results at this point can not completely confirm the changes were a direct result of interacting with the robot.  5.5  Summary  In this chapter, we presented the design of the Haptic Creature’s affect display behavior along with a user study that tested this design. The robot’s breathing rate and ear stiffness were successful in conveying arousal. For valence, however, breathing asymmetry was not successful and purring only partially. Recognition of the Haptic Creature’s emotional state was not influenced by gender or experience with animals. The broader implications of this work will be discussed in Section 8.1.2. We move on in the next chapter (6) to our investigation of affective touch in the opposite direction: originating from the human and directed to the robot. 
Combining knowledge gained from these two preceding studies, Chapter 7 examines the influence of affective touch on the human’s emotional state.  91  Chapter 6  Human Affect Display Whereas our first interaction decomposition study explored the manner in which the Haptic Creature communicates its emotional state through touch to the human, our second investigated the manner in which the human communicates emotional state through touch to the Haptic Creature (Figure 6.1, unshaded cells 1→2) as well as the human’s expectations of the robot’s reaction to the affective touch interaction (Figure 6.1, cells 2→3 highlighted arrow). In our introductory scenario from Chapter 1, this form of human affect display can be observed when Stella lifts up and nuzzles Roi when she is excited; quickly pulls him close to herself when distressed; or firmly pats Roi’s back when she becomes depressed. Expression  Recognition  1  2  4  3  HUMAN  Recognition  CREATURE  Expression  Figure 6.1: Affective touch interaction loop between human and Haptic Creature. Adapted from Figure 1.2 to highlight affect display from human as well as emotional influence on robot.  92  6.1. Touch Dictionary The work presented in this chapter encompasses the fifth phase of our research (Figure 1.1), namely, the human affect display user study. The overall focus of this study was on a robot’s ability to recognize human touch gestures and, further, determine emotional content. We wished to provide guidance for algorithmic recognition of human touch gestures while also expanding general knowledge of human affective touch. We began with a dictionary of probable touch gestures, which was then filtered by those likely to be used in human affect display. Of the likely affective touch gestures, we were interested in low-level components — points of contact, duration, intensity — as well as possible higher-order intent when gestures are used in combination. A secondary goal of this study was to investigate the human’s expectations from the affect display. When the human communicates emotion to the robot through touch gestures, what are the human’s general expectations of an appropriate emotional response from the robot. This goal is directed at our broader interest in the emotional influence of affective touch. We begin this chapter with the compilation of our touch dictionary. We continue in Section 6.2 with details of a user study where, from this dictionary, participants selected and performed touch gestures that they would likely use when conveying a variety of emotions to the Haptic Creature. Participants also predicted the emotional response of the robot as a result of the gestures they had just performed. Our principal findings regard patterns of gesture use for affect display; physical properties of the likely gestures; expectations for the Haptic Creature’s response to mirror the emotion communicated; and analysis of the human’s higher intent in communication. From the latter finding, we developed five tentative categories of “intent” that overlap emotion states: protective, comforting, restful, affectionate, and playful.  6.1  Touch Dictionary  In the context of our work, we consider a touch gesture broadly as the placement of a part or parts of one’s body in direct physical contact with another’s body, often  93  6.1. Touch Dictionary coupled with movement, in order to convey meaning or intent. As a means of shorthand, “gesture” will frequently be substituted for “touch gesture”. 
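The dictionary compiled in the remainder of this section (Table 6.1) can also be reused in software — for instance, to drive the randomized gesture presentation described later in Section 6.2.3. One minimal representation is sketched below; the type and method names are illustrative assumptions rather than the actual study code, and the two sample definitions are quoted from Table 6.1 with the touch recipient left as a placeholder, echoing the substitution note at the end of this section.

    import java.util.List;

    /** Illustrative representation of one touch dictionary entry (Table 6.1). */
    public record TouchGesture(String label, String definitionTemplate) {

        /** The published definitions name the Haptic Creature as the recipient;
         *  substituting a placeholder lets the same entry address another receiver. */
        public String definitionFor(String recipient) {
            return definitionTemplate.replace("{recipient}", recipient);
        }

        /** Two sample entries, with wording taken from Table 6.1. */
        public static List<TouchGesture> sampleEntries() {
            return List.of(
                new TouchGesture("Stroke",
                    "Move your hand with gentle pressure over {recipient}'s fur, often repeatedly."),
                new TouchGesture("Pat",
                    "Gently and quickly touch {recipient} with the flat of your hand."));
        }
    }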
Our investigation required a set of plausible touch gestures for interacting with the Haptic Creature. Review of relevant literature did not yield a comprehensive list in any one source, so we set out to compile our own touch dictionary. The result has 30 items and is presented in Table 6.1. We began with literature sources from human-animal interaction [87] [86] [9], human-human touch [184], and human-human affective touch [75] [74]. We then generated three separate gesture lists, one from each of these research domains. Next, we removed impractical or inappropriate gestures. For example, the gestures high five and fingers interlock were removed because the robot possesses no hands or fingers. With the exception of kiss, we removed all mouth-related gestures, such as lick or bite, as these were deemed unsuitable or unlikely. Table 6.1: The touch dictionary. Gesture Label  Gesture Definition  Contact Without  Any undefined form of contact with the Haptic Creature that  Movement . . . .  has no movement. For example: laying one’s hand a top the Haptic Creature, or resting one’s arm alongside it.  Cradle . . . . . . . .  Hold the Haptic Creature gently and protectively.  Finger Idly . . . .  Gently and randomly pull at the hairs of the Haptic Creature’s fur with your fingers.  Grab . . . . . . . . .  Grasp or seize the Haptic Creature suddenly and roughly.  Hit . . . . . . . . . . .  Deliver a forcible blow to the Haptic Creature with either a closed fist or the side or back of your hand.  Hold . . . . . . . . .  Grasp, carry, or support the Haptic Creature with your arms or hands.  Hug . . . . . . . . . .  Squeeze the Haptic Creature tightly in your arms. Hold the Haptic Creature closely or tightly around or against part of your body.  Kiss . . . . . . . . . .  Touch the Haptic Creature with your lips. (table continues) 94  6.1. Touch Dictionary Table 6.1: Continued. Gesture Label  Gesture Definition  Lift . . . . . . . . . . .  Raise the Haptic Creature to a higher position or level.  Massage . . . . . .  Rub or knead the Haptic Creature with your hands.  Nuzzle . . . . . . .  Gently rub or push against the Haptic Creature with your nose or mouth.  Pat . . . . . . . . . . .  Gently and quickly touch the Haptic Creature with the flat of your hand.  Pick . . . . . . . . . .  Repeatedly pull at the Haptic Creature with one or more of your fingers.  Pinch . . . . . . . . .  Tightly and sharply grip the Haptic Creature’s fur between your fingers and thumb.  Poke . . . . . . . . .  Jab or prod the Haptic Creature with your finger.  Press . . . . . . . . .  Exert a steady force on the Haptic Creature with your flattened fingers or hand.  Pull . . . . . . . . . .  Exert force on the Haptic Creature by taking hold of it in order to move it towards yourself.  Push . . . . . . . . . .  Exert force on the Haptic Creature with your hand in order to move it away from yourself.  Rock . . . . . . . . .  Move the Haptic Creature gently to and fro∗ or from side to side.  Rub . . . . . . . . . .  Move your hand repeatedly to and fro∗ on the fur of the Haptic Creature with firm pressure.  Scratch . . . . . . .  Rub the Haptic Creature with your fingernails.  Shake . . . . . . . .  Move the Haptic Creature up and down or side to side with rapid, forceful, jerky movements.  Slap . . . . . . . . . .  Quickly and sharply strike the Haptic Creature with your open hand.  Squeeze . . . . . .  Firmly press the Haptic Creature between your fingers or both hands. (table continues)  95  6.1. 
Touch Dictionary Table 6.1: Continued. Gesture Label  Gesture Definition  Stroke . . . . . . . .  Move your hand with gentle pressure over the Haptic Creature’s fur, often repeatedly.  Swing . . . . . . . .  Move the Haptic Creature back and forth or from side to side while suspended.  Tap . . . . . . . . . . .  Strike the Haptic Creature with a quick light blow or blows using one or more fingers.  Tickle . . . . . . . .  Touch the Haptic Creature with light finger movements.  Toss . . . . . . . . . .  Throw the Haptic Creature lightly, easily, or casually.  Tremble . . . . . .  Shake against the Haptic Creature with a slight rapid motion.  Entries listed in alphabetical order of Gesture Label.  ∗  Though no partici-  pants in the present study expressed difficulty with the definition wording “to and fro”, this has been replaced with “back and forth” in subsequent studies based on pilot participant feedback. We then merged these reduced lists. Frequently, gestures from different sources overlapped in kind but not name, so we reduced each to a single, common label across all. For example, our gesture label contact without movement was referenced with slightly different wording in all works. Finally, though not mentioned in our original source materials, the gestures cradle and rock were added after informal discussions with pilot participants noted their absence from the touch dictionary. The original source materials additionally provided definitions on how to perform a small set of the touch gestures. Appropriate existing definitions were used; however, all others were adapted from The New Oxford American Dictionary [106]. In all cases, “Haptic Creature” was substituted for the receiver of the touch. Others wishing to utilize our touch dictionary need only replace the definition’s touch recipient similarly.  96  6.2. User Study  6.2  User Study  The user study was conducted as a within-subjects, single-factor design. The sole factor, the emotion communicated by the human to the robot, had nine levels: distressed, aroused, excited, miserable, neutral, pleased, depressed, sleepy, and relaxed. These emotions were taken directly from the two-dimensional model of emotion (Figure 4.8) and represent minimum, maximum, and average states for both valence and arousal. The participant answered questions about and performed touch gestures that the participant would use to convey each of the nine emotions. In each emotion communicated level, the participant also predicted the emotional response of the robot to the gestures the participant had just performed and reported any consequent change in the participant’s own emotional state.  6.2.1  Participants  Data from 30 individuals (50% female) were used. Recruited via fliers, online classifieds, and mailing lists, each was compensated CAD$10 for participation. Ages ranged from 18 to 41 (M = 24.33, SD = 6.47), and all self-identified as native English speakers (90% from North America). None had previously participated in studies with the Haptic Creature. Overall experiences with pets and general attitudes towards them are presented in Section 6.3.4.  6.2.2  Study Setup  The study was conducted in a soundproof observation studio that housed a desk and a non-adjustable office chair. Atop the desk was a 17-inch (1280 × 1024 pixels) LCD monitor, a keyboard, and a computer mouse. Also situated on the desk was a video camera mounted to a tripod positioned directly behind and above the computer monitor. 
All study software, including control of the Haptic Creature, was written in Java and executed on an Intel-based PC running the Gentoo [58] Linux operating system (Section 4.3). The study participant sat in the chair and faced the monitor on the desk. The mouse was placed on the side that he self-identified as his mouse hand. The Hap97  6.2. User Study  Figure 6.2: Setup for human affect display study.  98  6.2. User Study tic Creature initially was situated in the participant’s lap with the robot’s backside initially facing the participant’s non-mouse hand; however, the participant was allowed to adjust the Haptic Creature’s position throughout the study, as he saw fit. The Haptic Creature was nonactive throughout: it did not move, or in any way communicate with the participant, or respond to touch gestures. As a result, no extraneous sounds were generated by the robot. Nonetheless, the participant wore earmuffs to provide a consistent setup across all our studies (Figure 6.2).  6.2.3  Procedure  The study took approximately 60 to 75 minutes for the participant to complete. The participant was presented with a detailed set of instructions; asked to report his current emotional state; and then taken through the main part of the user study. Once completed, a questionnaire was administered. The facilitator was not present in the room with the participant while the study was being conducted. The main part of the study was composed of the following steps: • a rating of the likelihood of employing touch gestures; • the performance of select affective touch gestures; • a prediction of the Haptic Creature’s emotional response; and • a report of the participant’s current affective state. The main part of the study was repeated over the nine emotion communicated factor levels. Each participant was presented with all levels in randomized order, and a brief (30 second) rest break was given at the end of all but the final factor level. Each step of the procedure is detailed below. Instructions Instructions provided the participant with an overview of the research being conducted; an explanation of the Haptic Creature and information on interacting with it; and the study procedure, including a detailed explanation of the response formats employed. The complete instructions are documented in Section D.3. 99  6.2. User Study Touch Gestures Likelihood Rating The participant was presented with an emotion to communicate to the Haptic Creature and asked to rate the likelihood of using gestures from the touch dictionary (Table 6.1). Each gesture label and its corresponding definition was presented one at a time in randomized order. Responses were recorded on a five-point rating scale: Very Unlikely (1), Unlikely (2), Neither Unlikely nor Likely (3), Likely (4), Very Likely (5). When determining a response, the participant was asked to imagine the Haptic Creature to be his pet, one with which he had a close and comfortable relationship. He was directed to think about and imagine that he was feeling the given emotion then consider the given touch gesture. The participant was further instructed that he was not feeling the given emotion because of the Haptic Creature. Rather, he was to consider conveying the emotion as if the robot was an impartial observer or companion. Likely Touch Gestures Performance The participant physically performed a subset of the gestures on the Haptic Creature. 
Criterion for inclusion in this subset was any gesture the participant ranked as likely to be used for the given emotion (i.e., ≥ 4) in the previous step. Touch gestures from the subset and their corresponding definitions were presented one at a time in randomized order. As in the previous step, the participant was directed to imagine feeling the given emotion then consider the presented touch gesture. Each gesture performance was captured through video and by the robot’s touch sensors and accelerometer. An analysis of the video recordings will be presented in Section 6.3.2. The sensor data recordings are intended for future use in refinement of the Haptic Creature’s gesture recognition engine [27] and, therefore, will not be discussed here.  100  6.2. User Study Table 6.2: Emotion label list for predicting the Haptic Creature’s emotional response. Identical to list used in the robot affect display user study (Table 5.2). Afraid∗ Disgusted∗ Miserable Sad∗  Angry∗ Distressed Neutral Sleepy  Aroused Excited Pleased Surprised∗  Unmarked labels are from Russell; † avoids artificial agreement.  Depressed Happy∗ Relaxed None Of These† ∗  from Ekman;  Haptic Creature Emotional Response Prediction The participant predicted the emotional response of the Haptic Creature as a result of the gestures he had just performed. He chose one of 16 items from a provided list (Table 6.2). Six options were Ekman’s basic emotions [35]: afraid, angry, disgusted, happy, sad, and surprised. Nine were from Russell’s dimensional model of affect: aroused, depressed, distressed, excited, miserable, neutral, pleased, relaxed, and sleepy. The emotion words were presented in alphabetized order with a final option, none of these, to address shortcomings of forced-choice emotion responses [131] [52]. Consistent with the list used in our study presented in Chapter 5, the decision to include both Ekman and Russell emotion labels was to increase the overall richness of available choices by combining words from research on discrete emotions (Ekman) with those from research on the dimensional nature of emotions (Russell). Participant Affect Report At the beginning of the study and each time after predicting the robot’s emotional response, the participant reported his current emotional state. This was recorded by means of seven-level versions of Lang’s Self-Assessment Manikin (SAM) rating scales for valence and arousal [96]. Instructions for using the SAM scales were adapted from Bradley and Lang (2007) [15]; however, the order of each scale was reversed such that the valence scale was labeled “Unhappy versus Happy” and the arousal scale was labeled “Calm versus Excited”. This adjustment in ordering ensured consistency among all scales used in the study, which were all ordered 101  6.3. Results negative-to-positive or low-to-high. Furthermore, during pilot testing, the original ordering resulted in occasional data entry errors, while the reversed version appeared to present participants a more natural ordering. The SAM images were from PXLab [78] and measured 69x74 pixels. To increase visibility of the facial expressions, we used the portrait versions of the valence images [164, p. 105], rather than the more traditional full figure. An example of the SAM images used can be seen in Section D.4 (p. 298). This data was collected to inform a forthcoming study on the full affective touch interaction loop study, which will be presented in Chapter 7 and is not analyzed here as a result. 
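Pulling the preceding steps together, the sketch below shows how one emotion-communicated level might be represented in software: the gestures rated 4 or higher are queued for performance in a fresh random order, and the robot-response prediction and SAM self-report are stored alongside them. All names here are illustrative assumptions rather than the actual study implementation.

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;
    import java.util.Map;

    /** Illustrative per-level response record for the procedure in Section 6.2.3. */
    public final class EmotionLevelTrial {

        /** Responses collected for one emotion-communicated level. */
        record LevelResponse(String emotionCommunicated,
                             Map<String, Integer> likelihoodRatings,   // gesture -> 1..5
                             List<String> gesturesPerformed,
                             String predictedRobotEmotion,             // 16-item choice
                             int samValence,                           // 1..7
                             int samArousal) {}                        // 1..7

        /** Gestures rated Likely (4) or Very Likely (5) are queued for performance,
         *  in a fresh random order for each participant. */
        static List<String> likelySubset(Map<String, Integer> likelihoodRatings) {
            List<String> likely = new ArrayList<>();
            likelihoodRatings.forEach((gesture, rating) -> {
                if (rating >= 4) likely.add(gesture);
            });
            Collections.shuffle(likely);
            return likely;
        }
    }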
Post-Study Questionnaire At the conclusion of the study, the participant completed a comprehensive questionnaire. This questionnaire collected demographic information; pet experience and attitudes; general impressions of the Haptic Creature; and details related to the emotions communicated and touch gestures performed. The complete questionnaire is documented in Section D.5.  6.3  Results  Our results begin with participants’ ratings for the likelihood that they would use various touch gestures when displaying specific emotions. Next we detail the properties of touch interactions which we observed between participants and the robot. This is followed by participants’ reported expectations of the Haptic Creature’s emotional response to the gestures they performed. We conclude with a summary of relevant responses to the post-study questionnaire.  6.3.1  Touch Gesture Likelihood  For each emotion communicated level, participants ranked the likelihood of using gestures from our touch dictionary as described in Section 6.2.3. Given the large number of conditions and gestures considered, we were precluded from conducting  102  6.3. Results statistical analysis on this dataset; however, results were incorporated in the metaanalysis for human intent (Section 6.4.3). We present the results in two tables. Precedence of Gesture Use Table 6.3 provides the mean likelihood rating for each gesture under each emotion communicated level. In addition, a total score was computed for each gesture by summing all respective mean likelihood ratings. The table is sorted in descending order of this total score: gestures at the top of the table can be considered overall more likely to be used to communicate emotion compared with those at the bottom. Furthermore, individual cells are shaded to draw attention to likely emotions for each gesture: those which are likely to communicate one or more emotions are highlighted in boldface, while the remaining gestures are not considered likely to be used. Table 6.3 therefore presents a complete view of the responses, while giving an overall sense of precedence for touch gesture use for affect display. From this, one can observe that gestures which are likely to communicate one or more emotions are predominantly affectionate in nature: stroke, hug, hold, rub, pat, cradle, massage, scratch, rock, nuzzle, tickle, squeeze, lift, kiss, swing, and toss. In addition, the two low activity gestures are included: contact without movement and finger idly. On the other hand, the remaining (unlikely) touch gestures are mostly aggressive: pull, press, tap, pick, push, poke, tremble, grab, pinch, shake, slap, and hit.  103  Table 6.3: Mean likelihood touch gestures would be used to communicate given emotions. 
Emotion Gesture  2.97 2.90 2.77 3.13 2.67 2.80 2.77 2.43 2.80 2.67 2.47 2.00 1.57 2.77 2.00 2.67 2.87 1.47 1.90  3.50 2.37 3.60 3.00 3.70 3.50 2.80 3.53 3.33 2.70 2.97 2.93 3.20 3.00 3.13 2.83 2.53 2.93 2.83  Excited 3.40 2.00 3.87 3.37 3.80 3.37 2.60 3.27 3.50 2.33 2.80 3.37 3.80 3.60 4.00 2.77 2.57 2.87 3.73  Miserable 3.07 3.70 3.37 3.37 3.07 2.63 3.10 2.47 2.80 2.73 2.70 2.50 1.80 2.57 1.67 2.53 2.43 1.80 1.80  Neutral  Pleased  3.93 4.57 3.00 3.83 3.47 3.73 3.23 3.27 3.27 3.80 2.83 2.67 2.77 2.33 2.53 2.07 2.57 2.10 2.07  4.13 3.10 4.30 3.80 3.97 3.87 3.70 3.43 3.40 2.90 3.10 3.50 3.87 2.67 3.37 2.37 2.33 3.37 3.00  Depressed 3.47 4.40 3.63 3.53 3.03 3.07 3.53 2.73 2.63 3.30 2.90 2.67 2.03 2.43 1.60 2.23 2.23 2.10 1.73  Sleepy  Relaxed  Total  3.73 4.60 3.57 3.60 3.03 3.10 3.80 3.17 2.63 3.07 3.00 2.93 2.63 2.27 1.53 2.27 2.13 2.40 1.73  4.33 4.63 3.47 3.80 3.70 3.83 3.93 4.03 3.67 3.73 2.90 2.87 3.33 2.33 2.43 2.07 2.13 2.73 2.10  32.53 32.27 31.58 31.43 30.44 29.90 29.46 28.33 28.03 27.23 25.67 25.44 25.00 23.97 22.26 21.81 21.79 21.77 20.89  104  (table continues)  6.3. Results  Stroke Contact Hug Hold Rub Pat Cradle Massage Scratch Finger Idly Rock Nuzzle Tickle Squeeze Lift Pull Press Kiss Swing  Distressed Aroused  Table 6.3: Continued. Emotion Gesture  2.70 2.70 2.83 2.07 1.67 2.67 2.47 2.43 2.47 1.90 1.77  2.47 2.37 1.63 2.50 2.60 2.27 2.50 2.07 2.07 1.40 1.27  Excited 2.90 2.47 1.63 2.67 3.30 2.30 2.97 2.10 2.80 1.47 1.40  Miserable 1.90 2.23 2.93 2.10 1.73 2.50 2.00 2.17 1.80 1.87 1.70  Neutral  Pleased  2.63 2.33 1.83 1.97 1.97 1.53 1.70 1.83 1.23 1.37 1.23  2.20 2.20 1.80 1.80 2.27 1.50 1.83 1.80 1.40 1.30 1.10  Depressed 2.00 2.33 2.30 1.90 1.37 2.30 1.70 1.83 1.50 1.50 1.33  Sleepy  Relaxed  Total  1.93 1.73 2.07 1.60 1.23 1.37 1.30 1.53 1.17 1.17 1.03  2.00 2.10 1.53 1.43 1.80 1.37 1.30 1.53 1.40 1.17 1.03  20.73 20.46 18.55 18.04 17.94 17.81 17.77 17.29 15.84 13.15 11.86  Gestures listed in descending order of Total score, which was computed for each gesture by summing its mean likelihood ratings. Likelihood scale ranged from Very Unlikely (1) to Very Likely (5). Gestures highlighted in boldface have at least one mean likelihood rating greater than 3.00. Emotion cell shading key: (3.00, 3.50) [3.50, 4.00) [4.00, 5.00] .  6.3. Results  Tap Pick Push Poke Toss Tremble Grab Pinch Shake Slap Hit  Distressed Aroused  105  6.3. Results Likely Gestures Within Affect Space Table 6.4, on the other hand, organizes the emotion communicated levels in correspondence with the layout of the affect space depicted in Figure 4.8. For the given emotion, only likely gestures are included and presented in descending order of their respective mean likelihood rating for that emotion. This table allows easier comparison of likely gestures both within a specific emotion communicated level as well as across the affect space’s two dimensions — valence (horizontal) and arousal (vertical). In turn, this exposes several patterns of interaction. When moving from negative valence to positive, the number of likely gestures increases for the emotion communicated. Taking the high arousal levels as the most extreme example, distressed has only one likely gesture, while aroused has eight, and excited has 13. When focused on the arousal dimension, the finger idly touch gesture is likely only for low arousal emotions, while cradle and contact without movement are likely for low-to-neutral (non-high) arousal emotions. 
When considering the valence dimension, the massage and scratch touch gestures are likely for neutral-to-positive (non-negative) valence emotions. Specific to positive valence emotions, the tickle gesture is likely for all three; nuzzle is likely for neutral-to-high (non-low) arousal emotions; while kiss and rock are likely only for pleased; and swing and toss are likely only for excited. Finally, emotions that are high-to-neutral in arousal while negative in valence — distressed and miserable — are dominated by sustained gestures. While the other emotion communicated levels also contain sustained touch gestures, these two have a preponderance of them.  6.3.2  Touch Gesture Profile  All touch gestures performed on the Haptic Creature were recorded on video and subsequently coded via the procedure described in Appendix D.6. Given the large number of conditions and gestures considered, we were precluded from conducting statistical analysis on this dataset; however, results were incorporated in the metaanalysis for human intent (Section 6.4.3). The resultant data is presented in two separate tables. 106  6.3. Results Table 6.4: Touch gestures likely to communicate given emotions. L is gesture’s mean likelihood rating for given emotion (Table 6.3). Emotion Gesture  Emotion L  Distressed Hold  3.13  Rub Hug Massage Stroke Pat Scratch Tickle Lift  3.70 3.37 3.37 3.10 3.07 3.07  Contact Stroke Hold Finger Idly Pat Rub Scratch Massage Cradle  3.70 3.60 3.53 3.50 3.50 3.33 3.20 3.13  Contact Cradle Stroke Hold Hug Massage Pat Finger Idly Rub  L  Lift Hug Tickle Rub Swing Squeeze Scratch Stroke Pat Nuzzle Hold Toss Massage  4.00 3.87 3.80 3.80 3.73 3.60 3.50 3.40 3.37 3.37 3.37 3.30 3.27  Pleased 4.57 3.93 3.83 3.80 3.73 3.47 3.27 3.27 3.23  Sleepy 4.40 3.63 3.53 3.53 3.47 3.30 3.07 3.03  Gesture Excited  Neutral  Depressed Contact Hug Hold Cradle Stroke Finger Idly Pat Rub  L  Aroused  Miserable Contact Hug Hold Cradle Stroke Rub  Gesture  Emotion  Hug Stroke Rub Tickle Pat Hold Cradle Nuzzle Massage Scratch Lift Kiss Rock Contact  4.30 4.13 3.97 3.87 3.87 3.80 3.70 3.50 3.43 3.40 3.37 3.37 3.10 3.10  Relaxed 4.60 3.80 3.73 3.60 3.57 3.17 3.10 3.07 3.03  Contact Stroke Massage Cradle Pat Hold Finger Idly Rub Scratch Hug Tickle  4.63 4.33 4.03 3.93 3.83 3.80 3.73 3.70 3.67 3.47 3.33  Emotions ordered in correspondence with the Haptic Creature’s affect space (Figure 4.8). Gestures for each emotion are listed in descending order of L — gestures where L ≤ 3.00 have been omitted.  107  6.3. Results Gesture Points of Contact Table 6.5 lists the frequencies for contact locations computed for each likely touch gesture. We calculated the number of times a particular body element — e.g., fingers, palm, chest — touched the robot. Similarly, for the Haptic Creature we counted the number of times a distinct part of its body was touched by participants. Although video coding distinguished between left and right side of the body, our listed frequencies combine the two. For example, touches by the left forearm and right forearm of participants were considered together simply as “forearms” without regard for side. The frequencies of contact points were then computed as a percentage of the total number of times a touch occurred for the particular gesture. From the perspective of the human (touch initiator), it is not surprising that the palm-side of the fingers and hands were employed for every likely touch gesture. 
Of note, though, would be that the back-side of the fingers were also employed for finger idly, scratch, and tickle, making these the most finger-centric gestures. Also of interest is that four sustained gestures — hug, hold, cradle, and contact without movement — along with the repetitive gesture, rock, all utilized the forearm. Moreover, the first three of these sustained gestures also came into contact with the chest. For the Haptic Creature (touch receiver), the back is touched for every likely gesture. With the exception of massage, the robot’s back is the sole point of contact for repetitive touch gestures where it is not picked up: finger idly, pat, rub, scratch, stroke, and tickle. While for all nine gestures where the Haptic Creature is picked up — cradle, hold, hug, kiss, lift, nuzzle, rock, swing, and toss — its underbelly was touched. Finally, the robot’s rump was only touched for the toss gesture.  108  6.3. Results Table 6.5: Human (initiator) and Haptic Creature (receiver) points of contact frequency for given touch gestures. Human  Haptic Creature  Gesture  Contact Point  %  Contact Point  %  Stroke  Fingers: Palm-Side Hands: Palm-Side  53 40  Back  72  Contact  Fingers: Palm-Side Hands: Palm-Side Arms: Fore: Rear  38 28 18  Back Side: Aft  57 12  Hug  Fingers: Palm-Side Arms: Fore: Rear Hands: Palm-Side Chest  23 18 17 13  Back Side: Aft Underbelly: Aft Underbelly: Fore  25 25 14 12  Hold  Fingers: Palm-Side Hands: Palm-Side Arms: Fore: Rear Chest  30 20 17 13  Side: Aft Back Underbelly: Aft Underbelly: Fore  28 21 17 11  Rub  Fingers: Palm-Side Hands: Palm-Side  50 42  Back  75  Pat  Fingers: Palm-Side Hands: Palm-Side  50 42  Back  73  Cradle  Fingers: Palm-Side Hands: Palm-Side Arms: Fore: Rear Chest  27 19 19 14  Side: Aft Back Underbelly: Aft  28 22 14  Massage  Fingers: Palm-Side Hands: Palm-Side  52 37  Back Side: Aft  67 14  (table continues)  109  6.3. Results Table 6.5: Continued. Human  Haptic Creature  Gesture  Contact Point  %  Contact Point  %  Scratch  Fingers: Palm-Side Fingers: Back-Side Hands: Palm-Side  41 26 25  Back  75  Finger Idly  Fingers: Palm-Side Hands: Palm-Side Fingers: Back-Side  51 26 14  Back  83  Rock  Fingers: Palm-Side Hands: Palm-Side Arms: Fore: Rear  39 24 15  Side: Aft Back Underbelly: Aft  31 19 17  Nuzzle  Fingers: Palm-Side Hands: Palm-Side  34 19  Back Side: Aft Underbelly: Aft  23 21 14  Tickle  Fingers: Palm-Side Fingers: Back-Side Hands: Palm-Side  49 24 21  Back  65  Squeeze  Fingers: Palm-Side Hands: Palm-Side  53 31  Back Side: Aft  39 33  Lift  Fingers: Palm-Side Hands: Palm-Side  62 32  Side: Aft Underbelly: Aft Back  33 25 19  Kiss  Fingers: Palm-Side Hands: Palm-Side  45 25  Side: Aft Back Underbelly: Aft  24 18 12  (table continues)  110  6.3. Results Table 6.5: Continued. Human Gesture  Contact Point  Haptic Creature %  Contact Point  %  Underbelly: Fore  11  Swing  Fingers: Palm-Side Hands: Palm-Side  56 29  Side: Aft Underbelly: Aft Back Underbelly: Fore  29 17 16 13  Toss  Fingers: Palm-Side Hands: Palm-Side  56 33  Side: Aft Underbelly: Aft Rump Back  22 19 14 13  Gestures are listed in descending order of Total score; top to bottom, left to right. Only gestures with at least one mean likelihood rating greater than 3.00 are listed. (Total scores and likelihood ratings are presented in Table 6.3.) Only frequencies greater than 10% are listed.  Gesture Duration and Intensity Table 6.6 presents the mean duration and mean pressure intensity of likely touch gestures when communicating specific emotions. 
Durations were calculated in seconds from the beginning to end of the touch interaction; sustained gestures, such as hug, were considered for the entirety of the interaction, whereas repetitious gestures, like stroke, compute the average for a single repetition. Pressure intensities were computed by converting the intensity coding scale (Appendix D.6) to numeric values — light (1) to strong (3) — then generating a mean. Inter-rater reliability was determined via Cronbach’s α, which yielded .97 for duration and .83 for intensity.  111  6.3. Results Table 6.6: Mean duration, D (seconds), and mean pressure intensity, I (1 [light] to 3 [strong]), of likely touch gestures when communicating given emotions. Gesture  Emotion  D  I  Gesture  Emotion  D  I  Stroke  Aroused Excited Miserable Neutral Pleased Depressed Sleepy Relaxed  1.02 0.82 1.21 1.31 1.07 1.60 1.57 1.57  2.30 2.36 2.24 2.05 2.17 2.10 1.94 1.94  Contact  Miserable Neutral Pleased Depressed Sleepy Relaxed  5.29 5.24 3.72 5.59 5.24 5.83  1.69 1.59 1.86 1.77 1.65 1.69  Hug  Aroused Excited Miserable Pleased Depressed Sleepy Relaxed  6.40 5.85 7.82 7.15 7.79 7.28 6.67  2.28 2.36 2.15 2.39 2.21 2.13 2.31  Hold  Distressed Excited Miserable Neutral Pleased Depressed Sleepy Relaxed  7.11 5.63 7.28 7.21 6.34 6.40 7.90 7.36  2.27 2.27 2.17 2.27 2.10 2.17 2.07 2.13  Rub  Aroused Excited Miserable Neutral Pleased Depressed Sleepy Relaxed  1.11 0.53 1.14 1.60 0.77 1.17 1.35 1.18  2.64 2.63 2.68 2.48 2.65 2.67 2.60 2.71  Pat  Aroused Excited Neutral Pleased Depressed Sleepy Relaxed  0.47 0.36 0.50 0.51 0.68 0.79 0.71  1.85 1.76 1.65 1.76 2.00 1.60 1.66  Cradle  Miserable Neutral Pleased Depressed Sleepy  9.29 9.25 7.61 8.39 8.22  2.05 2.08 2.22 2.08 2.01  Massage  Aroused Excited Neutral Pleased Sleepy  1.13 0.71 0.97 0.87 1.17  2.60 2.54 2.51 2.54 2.42  (table continues)  112  6.3. Results Table 6.6: Continued. Gesture  Emotion  D  I  Relaxed  8.96  2.14  Scratch  Aroused Excited Neutral Pleased Relaxed  0.65 0.36 0.68 0.45 0.74  2.38 2.43 2.20 2.22 2.19  Rock  Pleased  2.39  Tickle  Aroused Excited Pleased Relaxed  Lift  Swing  Gesture  Emotion  D  I  Relaxed  0.96  2.48  Finger Idly  Neutral Depressed Sleepy Relaxed  1.13 1.21 1.33 1.14  1.94 2.05 1.85 1.86  2.29  Nuzzle  Excited Pleased  3.12 2.78  2.38 2.25  0.42 0.64 0.45 0.52  2.13 2.08 2.11 1.89  Squeeze  Excited  2.31  2.47  Aroused Excited Pleased  4.92 4.49 4.60  2.65 2.56 2.65  Kiss  Pleased  3.50  2.19  Excited  2.12  2.56  Toss  Excited  1.94  2.56  Gestures are listed in descending order of Total score (presented in Table 6.3); left to right, top to bottom. Durations for sustained gestures are for the entirety of the touch, whereas durations for repetitious gestures represent a single repetition.  We begin by examining the general differences across the various touch gestures. The repetitive gestures tickle, pat, and scratch, generally had the shortest durations, while finger idly, rub, and stroke, overall had the longest. The repetitive touch gestures pat, finger idly, and tickle, generally had the lowest pressure intensities, whereas rub and massage had the highest. The sustained gestures lift and contact without movement overall had the shortest durations, while cradle generally 113  6.3. Results had the highest. The sustained touch gesture contact without movement generally had the lightest pressure intensity, whereas lift overall had the strongest. Next, we examine the differences within the touch gestures when considering the emotions communicated. 
Many patterns appear in relation to changes in either arousal or valence independently. On the other hand, some cluster in the upper-right (around excited) or bottom-left (around depressed) of the affect space (Figure 4.8). Stroke generally increased in duration and decreased in intensity as arousal decreased. Rub was shorter in duration clustered around pleased, excited, and aroused. Pat increased in duration in relation to a decrease in arousal. Massage, on the other hand, decreased in duration in relation to a positive shift in valence, while also clustered higher intensity around pleased, excited, and aroused. Scratch decreased intensity in relation to a decrease in arousal. Tickle had longer duration in positive valence emotions, while higher intensity clustered around pleased, excited, and aroused. Hug clustered longer duration and lower intensity around miserable, depressed, and sleepy, while shorter duration and higher intensity clustered around pleased, excited, and aroused. Hold had notably shorter duration for pleased and excited, while lower intensity clustered around sleepy, relaxed, and pleased. Cradle decreased in duration as arousal decreased, except for positive valence emotions.  6.3.3  Haptic Creature Emotional Response  For each emotion communicated level, participants predicted the Haptic Creature’s emotional response to the touch gestures they had just performed. Predictions were recorded through a forced choice from among 16 items (Table 6.2) — 15 emotion labels plus none of these. From this list, Russell’s nine emotion labels are dimensional in nature so have direct mappings to the emotions communicated (Figure 4.8). Ekman’s six labels, on the other hand, do not have a direct mapping but may overlap with Russell’s labels. As a result, we applied an equivalency mapping determined from the previous study (Table 5.3).  114  6.3. Results We computed the frequency with which each emotion label was chosen for each emotion communicated level. The binomial statistical test was conducted for each condition in order to determine that the top predicted emotional response was selected at a frequency significantly greater than chance. We set chance at 25% following from Hertenstein et al. (2006) [75], in which participant emotion selection was considered to be differentiated among positive and negative valence as well as high and low arousal. These results are presented in Table 6.7. Any Ekman label that corresponded to a Russell label was counted as if the Russell label was chosen. For reference, Table 6.8 presents the frequency breakdown for any of these aggregate emotion labels presented in Table 6.7.  6.3.4  Questionnaire Responses  Here we summarize the results of participants’ responses to pertinent parts of the post-study questionnaire: experience with pets and attitudes towards them; difficulty understanding emotion words and touch gestures; intensity level when touching the robot; and expectations of the robot’s response. Unless otherwise noted, all participants (N = 30) responded to each question. Pet Experience and Attitudes General experience with pets was determined via the Companion Animal Bonding Scale (CABS) [126], which has a range of 8–40 — higher scores correlate with higher degrees of bonding. Overall, 9 participants (30%) had no pets; 8 (27%) completed only the retrospective scale, which measures childhood experience; 1 (3%) completed only the contemporary scale; and 12 (40%) completed both. 
Participants completing the retrospective CABS had scores that ranged from 15 to 40 (N = 20, M = 25.20, SD = 6.78), while those completing the contemporary CABS had scores that ranged from 17 to 39 (N = 13, M = 27.08, SD = 6.54). General attitudes towards pets was determined through the Pet Attitude Scale– Modified (PAS–M) [116], which has an overall range of 18–126 — higher scores correlate with more positive attitudes towards pets. Participants’ scores ranged from 44 to 126 (M = 96.83, SD = 19.83). 115  6.3. Results  Table 6.7: Frequency of emotional response predicted for Haptic Creature based on emotion communicated. Communicated  Communicated  Communicated  Predicted  Predicted  Predicted  %  Distressed  Aroused  Distressed† 35a Surprised 14  Pleased‡ Aroused Excited  Miserable  Neutral  Depressed∗  Distressed† Pleased‡  31 31 17  Relaxed Neutral Pleased‡  Depressed  Sleepy  Depressed∗ 37a Relaxed 20 Neutral 17  Sleepy Relaxed Neutral  %  %  Excited 30 23 23  Excited Aroused Pleased‡  47b 20 20  Pleased 53b 13 13  Pleased‡ Excited  57c 20  Relaxed 43a 33 17  Relaxed Pleased‡ Sleepy  50b 23 13  Communicated emotions ordered in correspondence with the Haptic Creature’s affect space (Figure 4.8). Predictions for corresponding emotions communicated are highlighted in boldface. Only frequencies greater than 10% are listed. ∗ Depressed includes Sad. † Distressed includes Afraid, Angry, and Disgusted. ‡ Pleased includes Happy. a p < .05. b p < .01. c p < .0005.  116  6.3. Results  Table 6.8: Frequency breakdown for aggregate Predicted emotion labels in Table 6.7. Communicated  Predicted  Aggregation  Distressed  Distressed (35%)  =  Distressed (21%) + Afraid (7%) + Angry (7%) + Disgusted (0%)  Aroused  Pleased (30%)  =  Pleased (20%) + Happy (10%)  Excited  Pleased (20%)  =  Happy (13%) + Pleased (7%)  Miserable  Depressed (31%)  =  Sad (17%) + Depressed (14%)  Distressed (31%)  =  Distressed (14%) + Afraid (7%) + Angry (7%) + Disgusted (3%)  Neutral  Pleased (13%)  =  Pleased (10%) + Happy (3%)  Pleased  Pleased (57%)  =  Happy (37%) + Pleased (20%)  Depressed  Depressed (37%)  =  Sad (23%) + Depressed (14%)  Relaxed  Pleased (23%)  =  Pleased (16%) + Happy (7%)  117  6.3. Results Emotion Label and Gesture Definition Difficulties Participants were presented with the list of emotions words they were asked to communicate during the study and asked if they had any difficulty understanding them. The results were 21 participants (70%) reported No and 9 (30%) Yes. Of those expressing difficulty, aroused was overwhelmingly reported as being ambiguous, often in relation to excited. Similarly, participants were presented with the list of gestures they were asked to perform during the study and asked if they had any difficulty understanding the words or their definitions. The results were 26 participants (87%) reported No and 4 (13%) Yes. Interaction Intensity Participants were asked to reflect on their general intensity when interacting with the Haptic Creature: When physically performing touch gestures to the Haptic Creature, do you feel that generally you either held back or were more intense than if it was a living creature? For example, when you performed Hit or Shake or Hug generally were you either less intense or more intense than if it was a living creature? Participants’ responses regarding the overall intensity of their touch with the robot were 12 (42%) Held Back; 13 (42%) Same; and 5 (16%) More Intense. 
These responses did not directly influence any other analysis of touch intensity (e.g., Section 6.3.2). Rather, the data allows a high-level view as to how participants approached touching the robot. Robot Emotional Response Expectations Participants were asked about their overall expectations for the robot’s change in emotional state based on the emotions they were communicating. The results were 12 participants (40%) reported Response Similar To What I Was Communicating; 13 (43%) Response Sympathetic To What I Was Communicating; and 5 (17%) Not Sure. 118  6.4. Discussion  6.4  Discussion  The overall goal of the present study was to gain a deeper understanding of affective touch when it originates from the human. In this section, we discuss the result of our user study. We begin by reflecting on the overall design of the study. This is followed by comments on how the Haptic Creature itself influenced participant responses. We continue with a combined analysis of the various results that we use to generalize into categories of human intent. We proceed with discussion about participants’ overall expected emotional response of the Haptic Creature. Finally, we conclude with comments related to how we might apply knowledge gained from the study towards improving the robot’s hardware and software.  6.4.1  Reflections on Study Design  Overall, this first effort to quantitatively and qualitative assess human affective touch produced a dataset of gesture frequencies and physical characteristics which will be highly useful for our own further research as well as others. Our triangulating approach combined self-reported choices from a well-validated collection of touch terms, with unbiased and systematic observation of actual gesture performance, giving us additional confidence in data reliability. The study design, however, could be further improved in terms of efficiency, participant effort, and granularity of results. First, this study could have been conducted as two studies: the gesture likelihood rating alone, then, separately, performance of likely gestures and specifying the robot’s expected emotional response. The present study could potentially run long depending on how the participant responded to the likelihood ratings: the more gestures rated likely or very likely, the more gestures that would have to be performed. Dividing the study in two parts would remove the dependency; separate participant pools would be acceptable, and the result would reduce the time of participation. Also, from the standpoint of statistical strength, the set of gestures performed in the second study would have been the same for all participants, having emerged as a net result of the first study.  119  6.4. Discussion The second issue regards the compromise between resource expense of video analysis and useful granularity. We found video coding extremely useful for determining contact points, especially for the human; however, measurement of contact intensity scaling was too coarse-grained (3 levels) and often difficult to accurately determine visually. Similarly, the time granularity (1 second) was also too coarse. For sustained touches — e.g., contact without movement or hug — this often was not an issue. On the other hand, information for repetitive gestures — e.g., stroke or rub — has the potential of incomplete capture. A time window much less than 1 second would obviate this latter issue, but would require a much greater time investment for video coding. 
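As context for this granularity trade-off, the sketch below shows the kind of per-gesture aggregation the coding scheme implies — whole-interaction duration for sustained gestures, mean single-repetition duration for repetitious ones, and a mean of the 1–3 intensity codes. The data structures and names are assumptions for illustration only; a finer coding window, or touch-sensor segmentation, would feed the same aggregation.

    import java.util.List;

    /** Rough sketch of the per-gesture aggregation implied by the video coding
     *  scheme (Appendix D.6); structures and names are assumed for illustration. */
    public final class GestureCodingAggregation {

        /** One coded contact interval; intensity is coded 1 (light) to 3 (strong). */
        record ContactInterval(double startSec, double endSec, int intensity) {}

        /** Sustained gestures (e.g., hug): duration spans the entire interaction. */
        static double sustainedDuration(List<ContactInterval> intervals) {
            double start = Double.MAX_VALUE;
            double end = -Double.MAX_VALUE;
            for (ContactInterval c : intervals) {
                start = Math.min(start, c.startSec());
                end = Math.max(end, c.endSec());
            }
            return end - start;
        }

        /** Repetitious gestures (e.g., stroke): duration is the mean of a single repetition. */
        static double meanRepetitionDuration(List<ContactInterval> repetitions) {
            double total = 0.0;
            for (ContactInterval c : repetitions) total += c.endSec() - c.startSec();
            return total / repetitions.size();
        }

        /** Mean pressure intensity on the 1 (light) to 3 (strong) coding scale. */
        static double meanIntensity(List<ContactInterval> intervals) {
            double total = 0.0;
            for (ContactInterval c : intervals) total += c.intensity();
            return total / intervals.size();
        }
    }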
A potential alternate approach to simplify pressure intensity ratings would be to view a touch gesture performance as a whole, then make an overall interpretation. Furthermore, accurate and reliable touch sensor data could augment or even replace the video coding procedures — the Haptic Creature’s sensors, however, were not yet at a state where they could be solely relied upon for this purpose.  6.4.2  Influence of Robot Context and Morphology  Two key properties of the Haptic Creature likely influenced participant responses. First, the context of the robot in the study was that of a close pet. Participants, not surprisingly, gravitated toward friendlier gestures and away from aggressive ones as a result of this imagined relationship. Second, the size and form factor of the robot allowed for some gestures — e.g., lift or swing or toss — that would be difficult to imagine if the Haptic Creature was much larger (unless the touch was localized to smaller appendages). Consequently, this rendered unlikely other gestures that might be natural for smaller or larger robots. Similarly, the manner of interaction might have varied accordingly. For example, a pat or massage might vary in intensity and location for robots of notably different sizes and morphology.  6.4.3  Human Intent through Affective Touch  In our discussion of background research on social touch (Section 2.2.1), we presented a study by Jones and Yarbrough [82]. They examined human-human touch 120  6.4. Discussion in daily interactions and, from the results, developed 12 “characteristics of meaning”. While the scope and focus of our research differs somewhat, we nonetheless have been guided in spirit by their work as we also seek to infer greater meaning from the touch gestures. Our results provide two different perspectives on affective touch originating from the human. Tables 6.3 and 6.4 give insight into the likelihood of touch gesture use, while Tables 6.5 and 6.6 provide details on the manner of this interaction. Through comparing these views, it is possible to move towards a higher-level understanding of the human’s expressive intent. To that end, we performed a metaanalysis of the results. Our first step was to examine likely gestures shared among emotions, with a specific focus on proximity to one another within the affect space. This was taken from Table 6.4 and detailed in Section 6.3.1 with respect to likely gestures within the affect space. For example, the massage gesture was not likely to be used in negative valence emotions since it only occurs for neutral and positive valence emotions. The next step inspected commonalities and difference in the duration and intensity of these gestures. This was taken from Table 6.6 and discussed in detail Section 6.3.2 with respect to duration and intensity. For example, the massage gesture has a shorter duration for positive valence emotions when compared with its neutral valence counterparts. We also took into consideration if proximate gestures were repetitious versus sustained as well as if they had similarities in points of contact (Table 6.5). Our examination produced five tentative categories of “intent” which overlap emotion states: protective, comforting, restful, affectionate, and playful. These are individually designated in Figure 6.3 and each described here in turn. Protective This intent corresponds to emotions that are high-to-neutral in arousal while negative in valence: distressed and miserable (Figure 6.3, wavy). 
Unlike the other intents, it is dominated by sustained gestures, many of which require the human to hold the Haptic Creature enclosed in the forearms and in close proximity to the chest: hold, hug, cradle.

Figure 6.3: Human intent through affective touch, overlaid on the grid of nine emotions (valence on the horizontal axis, arousal on the vertical axis). The regions are (counterclockwise from upper-left): protective (wavy); comforting (shaded); restful (small dots); affectionate (stripes); playful (large dots). Neutral emotion was not considered in analysis.

Comforting

This intent corresponds to emotions that are neutral-to-low in arousal and negative in valence: miserable and depressed (Figure 6.3, shaded). It has sustained gestures similar to protective; however, comforting also includes several repetitious ones: stroke, rub, finger idly, and pat. With the exception of stroke, these repetitious gestures display higher pressure intensities here than in other intents in which they also exist. On the other hand, the sustained gesture hug has lower intensity, along with longer durations, compared with other intents.

Restful

This intent corresponds to low arousal emotions: depressed, sleepy, and relaxed (Figure 6.3, small dots). It has sustained gestures similar to both the protective and comforting intents but differs in two ways. First, when moving from negative valence to positive, restful includes the repetitive massage gesture, followed then by scratch and tickle. Second, when compared with higher arousal states, common gestures generally have lower intensities — stroke, massage, scratch, tickle — or longer durations — stroke, rub, pat, scratch.

Affectionate

This intent corresponds to positive valence emotions: relaxed, pleased, and excited (Figure 6.3, stripes). Distinguished by their strong reliance on the fingers, the gestures tickle, scratch, and massage exist predominantly in this intent. Also included are the more intimate nuzzle, kiss, and rock gestures. When compared with other intents, the durations for hug, hold, and massage were generally lower, while the intensity for hug was greater.

Playful

This intent corresponds to the emotions pleased, excited, and aroused (Figure 6.3, large dots). Overlapping with affectionate, this intent differs in that it places greater emphasis on the gestures lift, swing, and toss, which correspond to the Haptic Creature being extensively moved in space. Additionally, squeeze, a gesture of relatively high intensity, exists solely in the excited emotional state. Gestures common among other intents often have shorter durations — stroke, rub, pat, scratch — or higher pressure intensities — stroke, massage, scratch, tickle — in this intent.

It is encouraging that some of these intents bear resemblance to categories from Jones and Yarbrough. For example, their "support" category is similar to our protective and comforting intents, while their "affectionate" and "playful" categories have direct counterparts in our intents — though they differentiate "playful" through "playful affection" and "playful aggression". Regardless, the advantages of finding a higher-level interpretation of touch data are considerable. First, the process of higher-level categorization helps to illuminate the human's general nature when choosing these gestures for these emotions: it not only implies the how but also the what.
For example, the human might choose to communicate either miserable or depressed through comforting, using a set of gestures that are suitable for both of those emotions — as well as other gestures that are more specific. Secondly, this knowledge can inform the ability to make sense of the human's low-level actions. For the robot to display an appropriate reaction, it needs to be able to reason beyond "the human squeezed me" and even past the implication that "the human is excited". Therefore, the Haptic Creature's emotion controller must find patterns in the touch that imply intent. For example, properties of the protective or comforting intents differ from those of the playful one, not only in the set of gestures employed but, more abstractly, in the observed physical properties of the human's touches. An intriguing practical extension of this is that, given an adequate model, it may not be necessary to fully recognize a gesture. Rather, by noting certain shared properties of the touch, the robot may directly infer the intent.

6.4.4 Mirrored Emotional Response Expected from Haptic Creature

As reflected in Figure 1.2, we are ultimately interested in the complete interaction cycle between human and robot. While the previous section discusses the human's emotional intent when communicating with the robot through touch, here we anticipate the full affective touch interaction loop by examining the human's expectation of the robot's emotional response.

The results in Table 6.7 show participants' overall expectation was for the Haptic Creature to respond in-kind. That is, they expected the robot would mirror the emotion they were communicating. Notable deviations are aroused and neutral, which show a pattern of shifting towards positive valence and lower arousal. Also, miserable has no mirrored relation: the expected emotional response is split equally between higher and lower arousal while remaining negative in valence.

These general results, however, are contradicted somewhat by two additional data points. First, though the post-study questionnaire results (Section 6.3.4) somewhat confirm the in-kind response, they also show nearly the same percentage of participants expecting a sympathetic response. This may explain the notable deviations mentioned in the previous paragraph. Second, another interesting contradiction arises from participants' specification of likely gestures, in particular for negatively valenced emotions. By always employing non-cruel, non-aggressive gestures in negative emotions, participants may not truly expect the Haptic Creature to take on the same emotional state as themselves. Rather, this actually may imply their expectation (possibly unconscious) for a sympathetic rather than a mirrored response.

6.4.5 Implication for Haptic Creature Design

Overall knowledge of which gestures are used helps advance the development of a robot wishing to interact with humans through touch. For example, touch sensing hardware can be specified and tuned for specific touch gestures, and recognition software similarly can concentrate on primary gestures while having little concern for those never to be used. For this section we focus on the study results as they impact our Haptic Creature; nonetheless, the results can be generalized to other social robots that may utilize touch.

The Haptic Creature's back and (aft) sides appear to be the predominant points of interaction with the human.
As a result, the touch sensors need to be more densely populated in these areas in order to pick up the variety of gestures. In addition, several likely gestures exist whose motion has a shearing component — e.g., stroke, rub, massage, so the type of sensors employed must be sensitive to this type of movement. Similarly, though it is not explicitly demonstrated in the data, the robot’s curved surface, especially its back, poses added challenges for some touch sensor technologies. 125  6.5. Summary As described in Section 6.3.1, the ordering of gestures in Table 6.3 provides insight into the overall likelihood a particular gesture may be used to communicate emotion relative to the other touch gestures. The table is sorted in descending order of total likelihood score, such that gestures at the top of the table can be considered overall more likely to be used to communicate emotion compared with those further down. When examining this ordering, one surprising finding was that some of the lighter touches — e.g., finger idly, nuzzle, tickle — have a lower likelihood of communicating emotional state when compared to some of the more pronounced touches — e.g., stroke, rub — which appear near the top of the table. While we still feel that these lighter touches are important to recognize, it is beneficial to know where trade-offs may be made. Finally, as noted, the more violent gestures such as hit, slap, and shake have a very low likelihood of being used. Nonetheless, gestures with equal movement of the robot exist in likely touch gestures such as lift, swing, toss, and rock. Therefore, it is critical that the robot have the ability to sense movement in addition to pressure from touch, thereby confirming our decision to employ a three-axis accelerometer.  6.5  Summary  In this chapter, we presented a user study that investigated human affective touch displayed to the robot. We detailed the compilation of our touch dictionary, which participants subsequently used to both specify and perform likely gestures when communicating a variety of emotions to the Haptic Creature. Participants also recorded their expectations for the robot’s emotional response as recipient of the touch gestures. Overall results showed a preference for less aggressive touch gestures, even for negative valence emotions. We reported the specific gestures likely to be used when communicating each emotion, as well as the physical profile of these gestures. From these low-level details, we then developed a high-level categorization of the human’s intent. In addition, we found that participants generally expected the Haptic Creature’s emotional response to mirror that of the one they communicated. The broader implications of this work will be discussed in Section 8.1.3.  126  6.5. Summary The results of both the study we described here as well as the one from the previous chapter (5) directly influenced the design of our final user study. Presented in the next chapter (7), this last study examined the complete affective touch interaction loop and its influence on the emotional state of the human.  127  Chapter 7  Influence of Affective Touch The final of our three interaction decomposition studies explored the influence on the human’s emotional state as a consequence of affective touch communication with the robot. This study built upon the previous two, which were each intentionally focused on one-way affective touch display, either from the robot to the human (Chapter 5) or from the human to the robot (Chapter 6). 
Figure 7.1: Affective touch interaction loop between human and Haptic Creature, with four numbered cells linking expression and recognition for both human and creature. Adapted from Figure 1.2 to highlight emotional influence on human.

The study presented here examined two-way communication, with both the human and the robot displaying as well as receiving affective touch; we then observed any resultant change in the human's emotional state (Figure 7.1, cells 4→1 highlighted arrow). This full interaction loop is illustrated throughout our introductory scenario in Chapter 1. For example, when Roi senses a change in Stella's breathing as she awakens, he thereby becomes excited and, in turn, renders an appropriate touch response. Similarly, as Stella becomes depressed, she firmly pats and rubs Roi's fur; in an attempt to mitigate her emotional state, Roi becomes relaxed, which he manifests in slackened ears and slow breaths.

The work presented in this chapter encompasses the last two phases of our research (Figure 1.1): a subsequent refinement of the Haptic Creature's manner of affect display (sixth phase), and the affective touch influence user study (seventh phase).

The goal of the user study was to broaden knowledge on the effects of social touch by including the consideration of emotion. Specific to social human-robot interaction, we were interested in the necessity of a response from the robot. If the robot's emotional reaction has little effect on the human's emotional state, then scant consideration need be made in the design of the robot's response — if taken to the extreme, no response would be necessary at all. Conversely, if there exists a notable effect on the human's emotions, then care must be taken in the design of the robot's reaction. While not the direct focus of our thesis, if the full affective touch interaction loop demonstrates influence on the human's emotional state, then a properly designed interaction can positively impact the applications of social human-robot interaction — e.g., therapy or attachment.

We begin with an update to the design of the Haptic Creature's affect display. In order to strengthen the robot's valence communication, these changes were based on the results of the user study from Chapter 5. We continue in Section 7.2 with details of our user study. Participants performed predetermined sequences of affective touch gestures for the Haptic Creature. They then reported any changes to their emotional state that resulted from both nonactive and simulated active responses from the robot. In all cases, the Haptic Creature's active emotional response mirrored the participants' intended emotion. By comparing nonactive (control) and active (treatment) robot responses — when human touch gesture sequences were identical — changes in participant emotions can therefore be attributed to the robot's reaction.

We conclude with results of the study and related discussion thereof. We empirically demonstrated a change in the human's emotional state as a result of the full affective touch interaction loop. In particular, we observed a statistically significant positive shift in valence when the two-way interaction communicated pleased, but no statistically significant change for miserable. In addition, study participants, on average, sensed that the Haptic Creature was responsive to their touch.
After first considering the effects of demand characteristics, we suggest that the lack of a notable change in the human's emotional state for miserable may be the result of the differences in the touch gestures employed by the human as well as the emotional responses presented by the robot.

7.1 Updated Robot Affect Display

The Haptic Creature's emotion renderings were modified from the original affect display design presented in Section 5.1. These changes were based upon the related user study results discussed in Sections 5.3 and 5.4, then refined through informal pilot testing. The updated actuator rendering parameters are presented in Table 7.1 (cf. Table 5.1). The remainder of this section details the modifications of the actuator rendering parameters in relation to the original design.

Table 7.1: Key Expressions: arousal and valence categorization, updated actuator rendering parameters.

Key Expression | Ears Vol (%) | Lungs Rate (bpm) | Lungs Bias (%) | Lungs Vol (%) | Purr Wave | Purr On / Off (ms) | Purr Ampl (%)
Distressed | 100 | 70.1 | 60 | 20–80 | | |
Aroused    | 100 | 62.8 | 50 | 25–85 | | |
Excited    | 100 | 56.8 | 40 | 30–90 | Pulse | 570 / 486 | 0–31
Miserable  |  50 | 49.2 | 60 | 20–80 | | |
Neutral    |  50 | 42.3 | 50 | 25–85 | | |
Pleased    |  50 | 35.6 | 40 | 30–90 | Pulse | 909 / 775 | 0–31
Depressed  |   0 | 28.8 | 60 | 20–80 | | |
Sleepy     |   0 | 21.6 | 50 | 25–85 | | |
Relaxed    |   0 | 15.0 | 40 | 30–90 | | |

Key Expressions ordered in correspondence with the Haptic Creature's affect space (Figure 4.8). Lungs Rest parameter, both inhalation and exhalation, is always 0 milliseconds. Highlighted key expressions are miserable and pleased factor levels of user study.

7.1.1 Ears

The Haptic Creature's two ears can be controlled independently of each other in the single dimension of stiffness. The original affect display design utilized ear stiffness as one means with which to convey the Haptic Creature's state of arousal. Since this dimension was clearly communicated by the robot (Section 5.4.2), the original values were left unchanged (Table 7.1, Ears Vol).

7.1.2 Lungs

The Haptic Creature's lungs modulate its manner of breathing through four parameters. Rate is defined as breaths-per-minute (bpm). Bias controls the symmetry of each breath by specifying the percentage that is dedicated to the inhalation phase, from 0% (all exhale) to 100% (all inhale) — for example, a bias of 25% would allocate 1/4 of each breath to the inhale and 3/4 to the exhale. Rest (milliseconds) allows for a pause at the end of inhalation and/or exhalation for each breath, and is defined independently for each. Volume defines the minimum and maximum position for each breath.

In the original affect display design, the Haptic Creature's breathing rate increased proportionally with its arousal, whereas the robot's individual inhale / exhale rates were symmetric for positive valence, becoming increasingly asymmetric — shorter inhale — when moving towards negative valence. As the Haptic Creature was effective in conveying its state of arousal, the overall range for its breathing rate (15–70 bpm) was not modified. However, as discussed in Section 5.4.4, participants also expected faster breathing rates to convey negative valence. As a result, the robot's rate of breathing was adjusted to vary for both emotion dimensions.
Specifically, the lowest rate was set for the pleasant-deactivated key expression, then incrementally increased (by ~6 bpm) moving from positive to negative valence, then from low to high arousal, until completing at the unpleasant-activated key expression. This modification can best be understood by referring to the Lungs Rate column in Table 7.1; note the systematic increase from the bottommost key expression to the top.

Also discussed in Section 5.4.4 was participants' expectation for depth of breathing to change, not as originally designed with arousal, but with valence. The updated affect display design therefore rendered shallower breathing when the Haptic Creature conveyed negative valence, becoming deeper when moving towards positive valence (Table 7.1, Lungs Vol).

Finally, breath symmetry was also modified. The previous design used symmetric breathing to convey positive valence, while gradually quickening the inhale (smaller bias value) when moving to negative valence. The updated affect display design, however, used symmetric breathing to connote neutral valence, with slower inhale (higher bias value) signifying negative valence and, conversely, faster inhale representing positive valence (Table 7.1, Lungs Bias). A graphical representation of these three key expressions can be seen in Figure 7.2.

Figure 7.2: Change in lung volume over four-second time period for key expressions miserable, neutral, and pleased in Table 7.1 (cf. Figure 5.3). (a) Miserable: Rate = 49.2 bpm; Bias = 60%; Vol = 20–80%. (b) Neutral: Rate = 42.3 bpm; Bias = 50%; Vol = 25–85%. (c) Pleased: Rate = 35.6 bpm; Bias = 40%; Vol = 30–90%. Shaded regions highlight breath inhalation phase — bias > 50% favors inhalation (a); bias = 50% is symmetric (b); and bias < 50% favors exhalation (c).

7.1.3 Purr Box

The Haptic Creature's purr box controls the presentation of a modulated vibrotactile purr. Waveform determines the type of wave generated: pulse, sawtooth, reverse sawtooth, sine, triangle, or null. On duration and off duration (milliseconds) define the wave's duty cycle. Amplitude, specified as percentages from 0% to 100%, defines the wave's minimum and maximum amplitude.

The primary use of purring in the original affect display design was to convey positive valence of non-low arousal emotions — pleased and excited. The high arousal state was originally separated from medium arousal by presenting a slightly increased amplitude for the purr wave coupled with a higher duty cycle. As discussed in Section 5.4.4, these parameters had mixed results, whereby pleased was recognized but excited was confounded with its negative valence equivalent, distressed.

We continued our use of purring for positive valence emotions; however, we modified the Haptic Creature's affect display in order to draw a better distinction between the two emotions where it was present. First, we changed the purr wave from sine to pulse. The latter wave provided a more salient vibration, as much less time was spent while the mechanism ramped up to full amplitude. Second, the duty cycle was made the same for both emotions (54%); however, to differentiate between the two, the period was shortened for excited. Finally, the amplitude range was modified to be the same for both emotions; therefore, this actuator parameter no longer served as a means of differentiating arousal. While the net result of these changes was that the purr for the two emotions differed solely in the waveform period, pilot tests demonstrated that the two remained distinct without excited improperly conveying negative valence. A graphical representation of these key expressions can be seen in Figure 7.3.

Figure 7.3: Change in purr amplitude over four-second time period for key expressions in Table 7.1 (cf. Figure 5.4). (a) Pleased: Wave = Pulse; On / Off = 909 / 775 ms; Ampl = 0–31%. (b) Excited: Wave = Pulse; On / Off = 570 / 486 ms; Ampl = 0–31%. Only pleased was utilized in this study.

7.2 User Study

Our user study was conducted as a within-subjects, single-factor design. The sole factor, the emotion communicated between human and robot, had two levels: miserable and pleased. Both of these levels, in turn, had a binary condition for the robot's response: an active, in-kind emotional response served as the treatment, while a nonactive response represented the study control. These two factor levels correspond to emotions of opposing valence — miserable = negative and pleased = positive — but fixed at medium arousal (see Figure 4.8). Therefore, we manipulated only the valence dimension, whereas arousal remained unchanged.

In our previous two user studies, we explored the full extent of the affect space, for both the Haptic Creature's affect display as well as the human's. In this study, however, we chose to focus solely on valence, which afforded a more economical study design; by greatly reducing the number of repeated trials, we reduced the chances of participant fatigue while simultaneously increasing the statistical power.

7.2.1 Participants

Data from 32 individuals (50% female) were used in the study. Recruited via fliers, online classifieds, and mailing lists, each was compensated CAD$10 for participation. Ages ranged from 19 to 50 (M = 26.53, SD = 7.22), and all self-identified as native English speakers (84% from North America). None had previously participated in studies with the Haptic Creature.

7.2.2 Study Setup

The study was conducted in a soundproof observation studio that housed a desk and a cushioned lounge chair. Atop the desk was a 17-inch (1280 × 1024 pixels) LCD monitor, a keyboard, and a computer mouse. Also situated on the desk was a video camera mounted to a tripod positioned directly behind and above the computer monitor. All study software, including control of the Haptic Creature, was written in Java and executed on an Intel-based PC running the Gentoo [58] Linux operating system (Section 4.3).

The study participant sat in the chair and faced the monitor on the desk. The mouse was placed on the side that she self-identified as her mouse hand. The Haptic Creature initially rested atop a small cushion on the floor to the immediate left of the participant's chair. Once the study began, the robot was then placed in the participant's lap with its backside initially facing the participant's non-mouse hand; however, the participant was allowed to adjust the Haptic Creature's position throughout the study, as she saw fit. The participant wore earmuffs to mask any extraneous sounds that may be generated by the robot (Figure 7.4).

Figure 7.4: Setup for influence of affective touch study.
Table 7.2: Miserable human touch gestures.

Contact Without Movement: Moving at a medium speed, rest your hand lightly on top of the Haptic Creature.
Hug: Slowly squeeze the Haptic Creature close against your chest with your arms applying moderate pressure.
Hold: Slowly support the Haptic Creature with your arms or hands applying moderate pressure.
Cradle: Moving slowly, hold the Haptic Creature protectively applying moderate pressure.
Stroke: At a slow speed, repeatedly move your hand in the same direction over the Haptic Creature's fur with moderate pressure.
Rub: Repeatedly move your hand slowly back and forth over the Haptic Creature's fur with moderate pressure.

7.2.3 Human Affective Touch Gestures

During the study, the participant interacted with the Haptic Creature through sequences of affective touch gestures. The participant was instructed to perform specific touch gestures from two predefined sets; one was presented for the miserable emotion communicated factor level (Table 7.2), and a separate sequence was provided for the pleased emotion communicated factor level (Table 7.3). These two gesture sets were derived from the original touch dictionary presented in Table 6.1. Utilizing the related results from the human affective touch study in Chapter 6, we detail here the criteria used for determining these two touch gesture sets as well as the augmentation to the definitions for use in the current study.

Table 7.3: Pleased human touch gestures.

Hug: Moving at a medium speed, squeeze the Haptic Creature close against your chest with your arms applying firm pressure.
Stroke: At a medium speed, repeatedly move your hand in the same direction over the Haptic Creature's fur with moderate pressure.
Rub: Repeatedly move your hand at a medium speed back and forth over the Haptic Creature's fur with firm pressure.
Tickle: Quickly touch the Haptic Creature repeatedly with light finger movements.
Pat: At a quick speed, repeatedly touch the Haptic Creature lightly with the flat of your hand.
Rock: Repeatedly move the Haptic Creature back and forth at a medium speed while supported in your arms with firm pressure.

A touch gesture was included in a specific set based on its likelihood of communicating the respective emotion by the human. The top six likely touch gestures from Table 6.4 were chosen for both sets, with the exception that rock was substituted for hold in the pleased set in order to increase variety between the two sets.

The definition for each gesture was augmented from the original touch dictionary in two ways. First, to lend clarity, the repetitious gestures — pat, rock, rub, stroke, and tickle — all had the word "repeatedly" added. Second, both a speed component — "slow(ly)", "medium", or "quick(ly)" — and a pressure component — "light(ly)", "moderate", or "firm" — were included. Each human affective touch gesture's speed and pressure were determined by examining the duration and intensity between the communication of miserable and pleased as presented in Table 6.6, which was an outcome of the human affective touch study from the previous chapter.
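To make the contrast between the two instruction sets concrete, the following minimal sketch represents each instruction as a gesture label plus the speed and pressure components added for this study. It is illustrative only, not the study's presentation software; the identifier names are hypothetical, and the values are transcribed from Tables 7.2 and 7.3.

```java
import java.util.List;

/**
 * Illustrative sketch (not the study software) of the two instruction sets in
 * Tables 7.2 and 7.3: each entry is a touch-dictionary gesture plus the speed
 * and pressure components added for this study. Identifier names are hypothetical.
 */
public class GestureSetSketch {

    enum Speed { SLOW, MEDIUM, QUICK }
    enum Pressure { LIGHT, MODERATE, FIRM }

    record GestureInstruction(String label, Speed speed, Pressure pressure) { }

    // Miserable set (Table 7.2): generally slower speeds and lighter pressures.
    static final List<GestureInstruction> MISERABLE = List.of(
        new GestureInstruction("Contact Without Movement", Speed.MEDIUM, Pressure.LIGHT),
        new GestureInstruction("Hug",    Speed.SLOW, Pressure.MODERATE),
        new GestureInstruction("Hold",   Speed.SLOW, Pressure.MODERATE),
        new GestureInstruction("Cradle", Speed.SLOW, Pressure.MODERATE),
        new GestureInstruction("Stroke", Speed.SLOW, Pressure.MODERATE),
        new GestureInstruction("Rub",    Speed.SLOW, Pressure.MODERATE));

    // Pleased set (Table 7.3): shared gestures reappear with faster speeds
    // and/or firmer pressures; rock replaces hold.
    static final List<GestureInstruction> PLEASED = List.of(
        new GestureInstruction("Hug",    Speed.MEDIUM, Pressure.FIRM),
        new GestureInstruction("Stroke", Speed.MEDIUM, Pressure.MODERATE),
        new GestureInstruction("Rub",    Speed.MEDIUM, Pressure.FIRM),
        new GestureInstruction("Tickle", Speed.QUICK,  Pressure.LIGHT),
        new GestureInstruction("Pat",    Speed.QUICK,  Pressure.LIGHT),
        new GestureInstruction("Rock",   Speed.MEDIUM, Pressure.FIRM));

    public static void main(String[] args) {
        MISERABLE.forEach(g -> System.out.println("miserable: " + g));
        PLEASED.forEach(g -> System.out.println("pleased:   " + g));
    }
}
```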
Comparing the two resultant gesture sets of Tables 7.2 and 7.3, one can see that the speed was generally slower and the pressure was generally lighter for the miserable gesture set than for the pleased set. Any touch gesture included in both sets therefore differed in the specification of its speed and/or pressure components. The participant was never informed as to any specific emotional display intended by any touch gesture performed.

7.2.4 Stimuli

Throughout the control condition, for either factor level, the Haptic Creature remained inactive. On the other hand, when the participant performed a sequence of affective touch gestures during the treatment condition, the Haptic Creature rendered a mirrored emotional response. For example, when the participant performed the miserable touch gestures, the robot actively responded by displaying its miserable emotional state. We chose to have the Haptic Creature respond in-kind based on results from our human affect display study, where participants generally expected this form of response (Section 6.4.4). The corresponding actuator rendering parameters for each factor level are highlighted in Table 7.1.

The Haptic Creature's sensory system was not at a stage where it could accurately recognize human touch gestures in real time. For the treatment condition, however, it was necessary that the participant had the impression the robot was directly responding to her touch. To that end, we developed a timing protocol, refined by means of a pilot study, to simulate a responsive Haptic Creature. The details of this protocol are diagrammed in Figure 7.5 (a).

Figure 7.5: Timing protocol for a single touch gesture interaction. (a) Haptic Creature emotional response timing: over the 48-second interaction, a 10 s decay to the neutral emotion, a 2 s hold of neutral, then 36 s rendering the mirrored emotion. (b) Participant affective touch gesture performance timing: 4–10 s to read the gesture, then 38–44 s to perform it.

For the participant to sense a change in the active robot, the Haptic Creature would transition from its neutral emotional state to its mirrored emotional response. A single touch gesture interaction lasted 48 seconds in total. Over the first 10 seconds, the robot gradually decayed from the last mirrored emotional state to neutral — for the first touch gesture of the sequence, the robot began in neutral. The Haptic Creature then remained at this emotional state for an additional two seconds. Finally, the robot shifted to the mirrored response for the remaining 36 seconds. This same cycle was repeated for each touch gesture in the sequence.

This timing protocol effectively conveyed to our pilot participants the impression that the Haptic Creature was responding directly to their touch. Details of the complete protocol for the affective touch interaction are presented in Section 7.2.7.

7.2.5 Response Format

The participant provided two categories of responses during the course of the user study: (1) a self-report of her current emotional state, and (2) an assessment of the Haptic Creature's emotional response to her touch.

The participant's affect state was recorded by means of nine-level versions of Lang's Self-Assessment Manikin (SAM) rating scales for valence and arousal [96]. Instructions for using the SAM scales were adapted from Bradley and Lang (2007) [15]; however, the order of each scale was reversed such that the valence scale was labeled "Unhappy versus Happy" and the arousal scale was labeled "Calm versus Excited".
This adjustment in ordering ensured consistency among all scales used in the study, which were all ordered negative-to-positive or low-to-high. Furthermore, during pilot testing, the original ordering resulted in occasional participant data entry errors, while the reversed version appeared to present participants with a more natural ordering. The SAM images were from PXLab [78] and measured 69 × 74 pixels. To increase visibility of the facial expressions, we used the portrait versions of the valence images [164, p. 105], rather than the more traditional full figure. An example of the SAM images used can be seen in Section E.4 (p. 348).

When assessing the Haptic Creature's emotional response, the participant selected one of 16 items from a provided list (Table 7.4). Six options were Ekman's basic emotions [35]: afraid, angry, disgusted, happy, sad, and surprised. Nine were from Russell's dimensional model of affect: aroused, depressed, distressed, excited, miserable, neutral, pleased, relaxed, and sleepy. The emotion words were presented in alphabetized order with a final option, none of these, to address shortcomings of forced-choice emotion responses [131] [52]. Consistent with the list used in our studies presented in Chapters 5 and 6, the decision to include both Ekman and Russell emotion labels was to increase the overall richness of available choices by combining words from research on discrete emotions (Ekman) with those from research on the dimensional nature of emotions (Russell).

Table 7.4: Emotion label list for assessing the Haptic Creature's emotional response. Identical to lists used in the robot affect display and human affect display user studies (Tables 5.2 and 6.2, respectively).

Afraid∗      Angry∗      Aroused     Depressed
Disgusted∗   Distressed  Excited     Happy∗
Miserable    Neutral     Pleased     Relaxed
Sad∗         Sleepy      Surprised∗  None Of These†

∗ from Ekman; unmarked labels are from Russell; † avoids artificial agreement.

7.2.6 Demand Characteristics Considerations

Significant effort was made to ensure participants were unlikely to infer the underlying purpose of our study and consequently adjust their behavior.

Our overall methodology borrowed from the Directed Facial Action Task (DFA) experiments of Ekman, Friesen, and Levenson (summarized in [37]), which examined the influence of voluntary facial movements on human emotional state. In these studies, participants were not asked to pose a specific emotion. Instead, they were given detailed instructions for a facial configuration that corresponded to a specific emotion as per the Facial Action Coding System (FACS) we briefly introduced in Section 2.1.2. For example, rather than being asked to pose anger, participants were merely instructed to: first pull your eyebrows down and together, next raise your upper eyelids, now . . . . Furthermore, in one study, participants were asked to indicate a corresponding emotion solely from the facial configuration instructions; however, few were able to correctly infer the associated emotion.

For the consideration of demand characteristics in our study, we began by intentionally limiting direct references to emotion. We then explicitly checked for demand characteristics in our pilot study. Finally, during the formal study, we provided two methods that probed for the possible existence of demand characteristics. We describe these steps in further detail here.
Limitation of Direct References to Emotion In compliance with ethics guidelines, some transparency was necessary for recruitment and prior consent. This took the form of one-sentence descriptions of the overall research goals — to examine the communication of emotion through touch between humans and robots — as well as of the study goals — to examine the influence of this form of interaction. During both the pilot and formal study, however, references to emotion moved from a focus on touch interaction to participants’ assessments of their affective state and that of the robot. That is to say, throughout the study procedure (Section 7.2.7), participants were never informed that the human touch gestures were in any way meant to convey emotional content nor, more importantly, the specific display of miserable or pleased. Similarly, while the preliminary instructions provided brief, general information that the Haptic Creature communicated its emotional state through touch, the familiarization session never made general mention of emotion let alone specific reference to the two factor-level emotions. Rather, the facilitator simply referred to the robot’s various renderings as “expressions”. Examination through Pilot Study While the previous step was a proactive measure to reduce the chances of demand characteristics, we used the pilot study for verification. Pilot participants were asked at the conclusion of their session if they saw any patterns in the hu142  7.2. User Study man touch gestures or any inherent emotional content. All noted the repetition of some gestures; however, this was not unexpected because the active and nonactive conditions employed the exact same gestures, and there also was some gesture commonality between the two factor levels — e.g., stroke, hug. Pilot participants, though, were only able to recognize that a few gestures were repeated but not the differences in their speed or intensity. That is to say, many were able to note that hug was presented on multiple occasions but not that some versions were moderate pressure and others firm. More importantly, none of the participants expressed any interpretations of specific emotional content — miserable, pleased, or otherwise — in the human touch gestures. Probing During Formal Study While the pilot study results presented above suggested diminished possibility of demand characteristics, we nonetheless decided to include two methods in the formal study that probed further. First, during the control condition, when the Haptic Creature was nonactive, participants predicted the Haptic Creature’s emotional response to their immediately preceding touch gesture performances. These responses provided a means to examine participant inference of specific emotional content for human touch gesture sequences during the course of the study. In order to predict the robot’s emotional response, participants would need to make an assessment of the preceding touch gestures without being directly asked. If a significant number of participants made the same prediction, then there is a high likelihood these participants shared a similar assessment as to the emotional content of the performed gestures. Furthermore, if this predicted emotion aligned with the emotion communicated factor level, then there exists the possibility for demand characteristics. Our desired outcome, therefore, is the converse. 
We expected no clear consensus as to the robot’s predicted response; however, if a consensus exists, then the predicted emotion differed from the emotion intended to be communicated. This manner of probing was necessarily indirect in order to avoid drawing undue attention to emotional intent: we did not want participants to actively contem143  7.2. User Study plate specific emotional content of the human touch gestures during the study. The procedure for this first method will be detailed in the next section, and the results of participants’ responses will be presented in Section 7.3.2. Our second method to probe for demand characteristics was similar to the examination used in the pilot. We provided participants of the formal study with a set of questions at the completion of the study that asked about similarity of the touch gesture sets as well as any implied emotional content in the sequences. The specific questions and subsequent results will be presented in Section 7.3.3. A full discussion of the results from these two methods will be presented in Section 7.4.2.  7.2.7  Procedure  The study took approximately 60 minutes for the participant to complete. The participant began with a detailed set of instructions; continued through a familiarization session; and then completed the main part of the user study. Once completed, a questionnaire was administered. Except during the familiarization session, the facilitator was not present in the room with the participant while the study was being conducted. The main part of the study was composed of the following steps: • the performance of a sequence of affective touch gestures; • a report of the participant’s current affective state; and • an assessment of the Haptic Creature’s emotional response. Following from our within-subjects, single-factor study design (Section 7.2), the main part of the study was conducted a total of four times: 2 emotion communicated (factor) × 2 robot response (condition). That is to say, each repetition consisted of an emotion communicated between the participant and the robot as well as the robot’s resultant response, both of which were held constant for the duration of the repetition. The emotion communicated was either miserable or pleased. In the treatment condition, the Haptic Creature 144  7.2. User Study provided an active emotional response rendering that was the same emotion as the study factor, while in the control condition, the Haptic Creature was nonactive and did not render any emotional response. Each participant completed all four separate repetitions, the order of which was counterbalanced via a balanced Latin squares design. To avoid transfer effects, a brief (70 second) rest break was given after the familiarization session and at the end of all but the final repetition. During each rest break, a random sequence of abstract shapes was displayed to the participant on the LCD monitor. This approximated the procedure for eliciting neutral affect, as developed by Gross and Levenson [62]. Details on the generation of the shape sequences can be found in Section E.6. Each step of the study procedure is detailed in the following sections. Instructions Instructions provided the participant with an overview of the research being conducted; an explanation of the Haptic Creature and information on interacting with it; and the study procedure, including a detailed explanation of the response formats employed. The complete instructions are documented in Section E.3. 
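As noted in the procedure above, the order of the four repetitions was counterbalanced via a balanced Latin squares design. The sketch below is a minimal illustration of one standard construction for four conditions; it is not the study's actual software, and the class, method, and condition names are hypothetical.

```java
/**
 * Minimal sketch (not the thesis's study software) of a balanced Latin square
 * for the four repetitions described above: 2 emotions communicated
 * (miserable, pleased) x 2 robot responses (nonactive, active).
 * Class, method, and condition names are hypothetical.
 */
public class CounterbalanceSketch {

    /** Row r of a balanced Latin square for an even number of conditions n. */
    static int[] row(int r, int n) {
        int[] order = new int[n];
        for (int c = 0; c < n; c++) {
            // Column offsets follow the classic 0, n-1, 1, n-2, ... pattern;
            // adding the row index (mod n) keeps both position and carryover balanced.
            int offset = (c % 2 == 0) ? c / 2 : n - (c + 1) / 2;
            order[c] = (r + offset) % n;
        }
        return order;
    }

    public static void main(String[] args) {
        String[] conditions = {
            "miserable / nonactive", "miserable / active",
            "pleased / nonactive", "pleased / active"
        };
        // Participants cycle through the four row orderings.
        for (int r = 0; r < 4; r++) {
            StringBuilder line = new StringBuilder("Ordering " + (r + 1) + ": ");
            for (int c : row(r, 4)) {
                line.append(conditions[c]).append("; ");
            }
            System.out.println(line);
        }
    }
}
```

In the resulting four orderings, every condition appears once in each serial position, and each condition immediately precedes every other condition exactly once, which is the property that mitigates order and carryover effects.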
Familiarization Session After the participant read the study instructions, the facilitator entered the observation studio to conduct a familiarization session that was divided into two phases, each of which took approximately five minutes to complete. The participant was free to ask questions at any point and was also prompted by the facilitator at the completion of each phase. The first phase introduced the participant to the emotional renderings of the Haptic Creature. The goal of this phase was to provide an opportunity for the participant to become comfortable with the active version of the robot as well as to familiarize her with the Haptic Creature’s range of expressions to be presented throughout the study. In this phase, the robot successively rendered the following sequence of emotions: neutral, pleased, neutral (again), and miserable. The presentation of neutral 145  7.2. User Study between the other two emotions was to approximate the stimuli protocol (Section 7.2.4) whereby the Haptic Creature transitioned from neutral to the mirrored emotional response. Each emotion was presented for 20 seconds, and the exact sequence was repeated twice. The facilitator signaled whenever the expression changed but did not refer to them by their emotion labels so as not to prime the participant. The second phase provided an introduction to performing the human affective touch gestures. The goal of this phase was to provide the participant with general proprioceptive practice, while also clarifying the slight procedural difference between performing repetitious versus sustained gestures. The participant practiced two gestures: stroke (repetitious) followed by hug (sustained). For both gestures, the facilitator initially demonstrated and the participant then performed all three levels for speed and pressure. In cases where the participant’s movements appeared outside the norm, the facilitator would provide guidance, either verbally or through physical demonstration. However, the participant was instructed to always gravitate towards an interaction that felt natural. The two types of gestures differed in how the speed and pressure components were enacted. In particular, for sustained gestures, the speed component described the rate at which the participant moved the Haptic Creature into position, while the pressure component was enacted once the robot was in position. For example, when performing a hug for the miserable factor level, the participant would “slowly” bring the Haptic Creature to her chest, then, once there, would apply a “moderate” squeeze. The facilitator explained these distinctions before practicing each gesture type. Affective Touch Gestures Performance The majority of the user study was the performance of affective touch gestures by the participant for the Haptic Creature. The sequence of gestures was generated from the list for the current factor level (Table 7.2 or 7.3). The participant was presented with all six human touch gestures from the list, displayed on the computer screen one at a time and in randomized order. To highlight the speed and pressure components, both were underlined in the gesture definition (Figure 7.6 (a)). 146  7.2. User Study The participant was never instructed that the touch gestures were intended to convey emotion. Nor was she told how many touch gestures would be presented for each sequence, or that there were two gesture sets, each of which would be repeated twice over the course of the study. 
The overall interaction for each touch gesture lasted 48 seconds (Figure 7.5 (b)). Participants read the touch gesture label and definition when displayed. This step took between 4 seconds and 10 seconds, depending on the particular gesture and any need to reposition the Haptic Creature — e.g., from a hug on the chest to a stroke on the lap. Participants had been instructed to begin performing the touch gesture as soon as possible (without hurrying) and continue performing until the next gesture was presented. The performance, therefore, took the remainder of the interaction (38 seconds to 44 seconds). The participant was never informed of any specific timing; however, a lowattentional, visual cue signified the next touch gesture would be presented shortly. This notification, whereby the background lightened for the current gesture label (Figure 7.6 (b)), appeared during the final 8 seconds of the gesture performance. Early pilot testing enforced on the participant more rigid timing constraints; however, these often proved confusing and stressful to follow. As a result, we developed this simpler, more natural approach. The Haptic Creature’s response to a human touch gesture followed the timing protocol as described in Section 7.2.4. Participant Affect Report After the initial rest break following the familiarization session and each time after performing a sequence of touch gestures, the participant reported her current emotional state using the corresponding response format presented in Section 7.2.5. Haptic Creature Emotional Response Assessment The participant assessed the emotional response of the Haptic Creature as a result of the gestures she had just performed. In the treatment condition — when the robot was active — the participant was asked to describe the Haptic Creature’s emotional response, while in the control condition — when the robot was nonactive 147  7.2. User Study  (a) Gesture label on dark background above gesture definition with speed and pressure components underlined.  (b) Lightened gesture label background used as low-attentional visual cue to notify participant that next touch gesture will be displayed shortly.  Figure 7.6: Example of onscreen human touch gesture instructions — stroke from miserable touch gesture set. Subfigure (a) depicts instructions as presented for majority of the interaction. Subfigure (b) displays the visual cue presented near the end of the interaction. — the participant was asked to predict the Haptic Creature’s emotional response. Regardless of the condition, the response format remained the same, as presented in Section 7.2.5. Post-Study Questionnaire At the conclusion of the study, participants completed a comprehensive questionnaire. This questionnaire collected demographic information; pet experience and attitudes; general impressions of the Haptic Creature; details related to the touch gestures performed by the participant; and views on the responsiveness of the Haptic Creature. The complete questionnaire is documented in Section E.5.  148  7.3. Results  7.3  Results  We begin our presentation of the user study results with changes observed in participants’ emotional state as a result of the affective touch interactions. These results are followed by participants’ predicted and perceived emotional responses of the Haptic Creature. We then conclude with a summary of relevant responses to the post-study questionnaire.  
7.3.1  Participant Affect State  For one repetition of the main part of the study, participants performed a sequence of touch gestures for the Haptic Creature. All gestures in this sequences were intended to convey the same specific emotion: either all miserable or all pleased. For the duration of a sequence, the Haptic Creature was either active, in which it presented a mirrored emotional response to each human touch gesture, or the robot was nonactive. At the end of a sequence of interactions, participants then recorded their current emotional state. To examine the emotional effect on the human from affective touch interactions with the Haptic Creature, we computed a paired sample t-test between the nonactive (control) and active (treatment) robot response conditions. By a comparison between the control and treatment robot responses — when human touch gesture sequences were identical — changes in participant emotions therefore can be attributed to the robot’s reaction. A separate statistical test was conducted for both emotion communicated factor levels: miserable and pleased. Though we recorded data and computed results for the two emotion dimensions, we only considered statistical significance for the valence measure, since this was the dimension manipulated in the study — arousal was held constant. Furthermore, to control for Type I error as a result of the multiple valence comparisons, we applied a Bonferroni correction (α = .05/2 = .025) to determine statistical significance. The complete results are presented in Table 7.5. The results demonstrate a statistically significant shift towards positive valence (with large effect size) for participants in the pleased factor level. Participants’ change in valence for the miserable level, however, was not found to be statistically significant. 149  7.3. Results Table 7.5: Change in participant emotional state for both levels of emotion communicated factor. p  η2  1.68 2.01  .102 .053  .08 .12  3.13 2.37  .004∗ .024  .24 .15  Emotion  Measure  M  SD  t(31)  Miserable  Valence Arousal  .41 .75  1.37 2.11  Pleased  Valence Arousal  .59 .56  1.07 1.34  Significant at p < .025 (two-tailed). Statistical significance only considered for Valence measure, not Arousal. ∗  To explore further, we plotted the participants’ change in valence as a function of their perception of the valence response of the Haptic Creature. The robot’s perceived valence was determined through the results presented in the next section (7.3.2): the emotion label chosen during the treatment condition was considered solely for its valence ranking. For example, participants’ selection of distressed, miserable, or depressed for the perceived emotional response of the Haptic Creature were all categorized as negative valence for our purpose. We chose to isolate this dimension because the Haptic Creature’s responses differed only in valence — miserable versus pleased. This therefore allowed us to visualize any trends in the correspondence between participants’ valence perception of the Haptic Creature’s emotional response and the actual change in valence for their own emotional state. Figure 7.7 displays two graphs, one for each factor level. Several observations may be made from these graphs. 
Figure 7.7: Participant change in valence (y-axis, from −4 to +4) in relation to participant perceived valence response of Haptic Creature (x-axis: Negative, Neutral, Positive, Undefined). Subfigure (a) represents the miserable interaction. Subfigure (b) represents the pleased interaction. Each subfigure marker represents one participant: green upward triangle = positive valence shift; yellow circle = no change; and red downward triangle = negative valence shift. Undefined perceived valence response is when the participant chose either Surprised or None Of These.

First, as will be discussed in greater detail in the next section, a majority of participants perceived positive valence being expressed by the robot regardless of factor level, whereas no participants perceived negative valence for the pleased factor level, while a handful did for miserable.

Second, the same number of participants (14) reported no change in valence (yellow circles) for either factor level. On the other hand, slightly more participants (15) had a positive valence shift (green upward triangles) for pleased than for miserable (13). Conversely, slightly more participants (5) had a negative valence shift (red downward triangles) for miserable than for pleased (3). Moreover, for miserable, the magnitude of negative valence shift was large (-3 and -2) for these additional participants.

Third, for the pleased factor level, the majority of participants with a positive valence shift occurred when a corresponding positive valence response was perceived from the Haptic Creature. For the miserable factor level, however, the majority of participants with a negative valence shift surprisingly occurred when positive valence response also was perceived.

7.3.2 Haptic Creature Emotional Response

After performing a sequence of human touch gestures, participants also assessed the Haptic Creature's emotional response to the interaction. Though these gestures performed by participants were intended to convey a specific emotion — miserable or pleased — the participants were never provided this information; rather, for each gesture in a sequence, participants were given simply the label and corresponding definition on how the gesture was to be performed (Section 7.2.3). In the control condition, when the robot was nonactive, participants noted their expectation of the Haptic Creature's response. In the treatment condition, when the robot was active, participants specified their perception of the actual emotional response. For both conditions, responses were recorded via a forced choice from among 16 emotion labels (Table 7.4) — 15 emotion labels plus none of these.

From this list, Russell's nine emotion labels are dimensional in nature so have direct mappings to the emotions communicated (Figure 4.8). Ekman's six labels, on the other hand, do not have a direct mapping but may overlap with Russell's labels. As a result, we applied the equivalency mapping determined from our first study (Table 5.3). We separately computed the frequency with which each emotion label was chosen for the two emotion communicated levels under both conditions.
For each of these four computations, we then calculated frequency subtotals for the three valence categories: negative, neutral, and positive. The binomial statistical test was conducted in order to determine that the top valence category was selected at a frequency significantly greater than chance. We set chance at 33%, in consideration of a selection being from one of the three valence categories. Presented in Table 7.6 are the results for the participant prediction of the Haptic Creature emotional response, which occurred during the nonactive robot response (control) condition. Presented in Table 7.8 are the results for the participant perception of the Haptic Creature emotional response, which occurred during the active robot response (treatment) condition. Any Ekman label that corresponded to a Russell label was counted as if the Russell label was chosen. For reference, Tables 7.7 and 7.9 present the frequency breakdown for any of these aggregate emotion labels presented in the two previous tables. Here, we will highlight the predicted and perceived emotional response results separately. In both cases, our focus is again on the valence dimension, which is best reflected in the table subtotals.  152  7.3. Results  Table 7.6: Frequency of participant prediction of Haptic Creature emotional response to human touch gestures for both levels of emotion communicated factor. Assessment occurred during the nonactive robot response (control) condition. Predicted Haptic Creature Emotional Response Emotion  Label  Miserable∗  Distressed Miserable Depressed  %  Label  %  Label  %  13  Aroused Neutral Sleepy  25 22  Excited Pleased Relaxed  9 25  13 Pleased†  Distressed Miserable Depressed  9 9  47 Aroused Neutral Sleepy  18  28 3 31  34 Excited Pleased Relaxed  6 13 13 32  Emotion response labels ordered in correspondence with the Haptic Creature’s affect space (Figure 4.8). ∗ None Of These = 6%. † None Of These = 19%.  Table 7.7: Frequency breakdown for aggregate emotion labels in Table 7.6. Emotion  Label  Aggregation  Miserable  Depressed (13%)  =  Sad (13%)  Pleased  Depressed (9%)  =  Sad (6%) + Depressed (3%)  Pleased (13%)  =  Pleased (7%) + Happy (6%)  153  7.3. Results  Table 7.8: Frequency of participant perception of Haptic Creature emotional response to human touch gestures for both levels of emotion communicated factor. Assessment occurred during the active robot response (treatment) condition. Perceived Haptic Creature Emotional Response Emotion  Label  %  Label  %  Label  %  Miserable∗  Distressed Miserable Depressed  19  Aroused Neutral Sleepy  3 13 3  Excited Pleased Relaxed  6 37 16  19 Pleased  Distressed Miserable Depressed  19 Aroused Neutral Sleepy  9 6 6 21  59a Excited Pleased Relaxed  13 41 25 79b  Emotion response labels ordered in correspondence with the Haptic Creature’s affect space (Figure 4.8). ∗ Surprised = 3%. a p < .01. b p < .005.  Table 7.9: Frequency breakdown for aggregate emotion labels in Table 7.8. Emotion  Label  Miserable  Distressed (19%)  =  Distressed (13%) + Afraid (6%) + Angry (0%) + Disgusted (0%)  Pleased (37%)  =  Pleased (31%) + Happy (6%)  Pleased (41%)  =  Happy (22%) + Pleased (19%)  Pleased  Aggregation  154  7.3. Results Predicted Responses First, we examine only the predicted responses. In our human affect display study from the previous chapter, we also presented participants’ predictions of the Haptic Creature’s emotional response to the touch gestures they had just performed (Section 6.3.3). 
For that study, we had an interest in participants' overall expectation for the robot's response. Our goal here, however, is to determine whether participants inferred the intended emotional content from the human touch gestures. In the previous study, the participants were instructed as to the emotions that they were intending to communicate through the touch gestures. On the other hand, in the current study, the participants were intentionally not informed of the specific emotions intended for the human touch gestures. As introduced in Section 7.2.6 and to be discussed further in Section 7.4.2, we will use these results to probe for the effects of demand characteristics.

Overall, no clear trend emerges. Participants seemed less likely to expect that the touch gestures they had just performed would evoke a negative response from the Haptic Creature; however, there was no strong preference for either of the remaining two valence levels. The only exception was that neutral was selected with 47% frequency for the miserable factor level. These results seem to indicate that participants overall were unsure of the emotional content of the touch gesture sequences they had just performed, which coincides with our desired outcome.

Perceived Responses

Second, we consider only the perceived responses, which allows us to examine the success of the Haptic Creature in communicating its intended emotional state. As we will discuss in Section 7.4.4, these results may have implications for participants' overall emotional response to the interaction.

We can see that participants tended to interpret the physical expressions of the robot to be positive valence regardless of factor level. In the case of pleased, this was not only the expected result, but positive was selected at a very high percentage (79%). For miserable, however, only 19% perceived the expected negative valence. Though low, this percentage was somewhat encouraging because the other factor level (pleased) recorded 0%. In fact, the negative valence perceived in miserable seems to be at the expense of positive: the neutral percentages do not differ markedly between factor levels, but positive decreased by 20 percentage points.

7.3.3  Questionnaire Responses

Here we summarize the results of participants' responses to pertinent parts of the post-study questionnaire: experience with pets and attitudes towards them; intensity level when touching the robot; impressions of the Haptic Creature's responsiveness; and examinations for demand characteristics. Unless otherwise noted, all participants (N = 32) responded to each question.

Pet Experience and Attitudes

General experience with pets was determined via the Companion Animal Bonding Scale (CABS) [126], which has a range of 8–40 — higher scores correlate with higher degrees of bonding. Overall, 7 participants (22%) had no pets; 18 (56%) completed only the retrospective scale, which measures childhood experience; 3 (9%) completed only the contemporary scale; and 4 (13%) completed both. Participants completing the retrospective CABS had scores that ranged from 10 to 34 (N = 22, M = 24.64, SD = 6.76), while those completing the contemporary CABS had scores that ranged from 16 to 37 (N = 7, M = 27.00, SD = 8.45).

General attitudes towards pets were determined through the Pet Attitude Scale–Modified (PAS–M) [116], which has an overall range of 18–126 — higher scores correlate with more positive attitudes towards pets. Participants' scores ranged from 66 to 122 (M = 97.19, SD = 14.27).
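These scale scores are simple sums of Likert items; the following small sketch shows how such totals and the descriptive summaries reported above might be computed. It is illustrative only (Python), with hypothetical item responses; the CABS and PAS–M items themselves are not reproduced here.

    # Illustrative sketch: total a Likert-style scale and summarize the scores.
    import statistics

    def scale_score(item_responses):
        # e.g., eight items scored 1-5 each would yield the stated 8-40 CABS range
        return sum(item_responses)

    def summarize(scores):
        return {"n": len(scores), "min": min(scores), "max": max(scores),
                "mean": round(statistics.mean(scores), 2),
                "sd": round(statistics.stdev(scores), 2)}

    # Hypothetical example: two participants' retrospective CABS item responses.
    responses = [[3, 4, 2, 5, 3, 4, 2, 3], [4, 4, 3, 5, 4, 4, 3, 4]]
    print(summarize([scale_score(r) for r in responses]))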
Interaction Intensity

Participants were asked to reflect on their general intensity when interacting with the Haptic Creature: How would you rate your overall intensity when performing touch gestures for the Haptic Creature in comparison to a living creature? For example, did you frequently hold back, or were you more intense, with the Haptic Creature than if it was a living creature?

Responses were provided through a single Likert-type item, which ranged from Much Less Intense (1) to Much More Intense (7). Participants' scores ranged from 2 to 5 (M = 3.75, SD = 0.98).

Robot Responsiveness

We constructed a five-item Likert scale (seven response levels per item) to determine impressions of the Haptic Creature's responsiveness to human touch:

1. The Haptic Creature was responsive to my touch gestures.
2. The Haptic Creature responded differently for each touch gesture I performed.
3. The Haptic Creature recognized I was interacting with it.
4. The Haptic Creature understood which touch gesture I was performing for it.
5. The Haptic Creature responded in a manner similar to the touch gestures I was performing for it.

The five-question scale was composed of two subscales: questions 1, 3, and 5 were meant to gauge impressions of the robot's general responsiveness, while questions 2 and 4 were intended to reflect impressions of the robot's discriminant responsiveness. That is to say, the first subscale probed participants' general sense that the Haptic Creature was interacting with them, and the second subscale examined how discerning the robot appeared to be with respect to the components of the interaction. For both scales, a higher score was meant to imply a higher sense of responsiveness from the robot.

The general responsiveness scale has an overall range of 3–21, and participants' scores ranged from 3 to 17 (M = 12.31, SD = 3.32). The discriminant responsiveness scale has an overall range of 2–14, and participants' scores ranged from 2 to 11 (M = 7.44, SD = 2.37). Overall, these results show that mean scores for both responsiveness scales were near their respective midpoints.

Participants were allowed to provide open-ended comments along with their responsiveness ratings. Of those that responded (12%), all stated that moving their hands when performing the gestures did not allow them to note any changes in the robot's state. Participants provided similar responses during informal debriefings with the facilitator.

Demand Characteristics

Two sets of post-study questions probed for demand characteristics. First, participants specified any similarities among the touch gesture sequences they performed. Over the entirety of the study, a total of four sequences of human touch gestures were performed: both the miserable touch gesture set (Table 7.2) and the pleased touch gesture set (Table 7.3) were presented twice. Participants recorded any similarities they recognized among these four sequences. Their responses are summarized in Table 7.10.

Table 7.10: Frequency of participant designation of similarities among the four gesture sequences performed for the Haptic Creature during the study.

  Gesture Sequence Similarity                                                      %
  All 4 sequences seemed different                                                12
  3 of the sequences seemed similar to each other, while 1 seemed different        3
  2 of the sequences seemed similar to each other, while the other 2 seemed
    different                                                                     41
  2 of the sequences seemed similar to each other, and the other 2 seemed
    similar to each other (correct response)                                      22
  Do Not Know                                                                     22

Correct gesture sequence similarity response marked above.
As can be seen, only 22% correctly inferred the repetition of both human touch gesture sets twice. Second, participants rated the implied emotional content of the two human touch gesture sequences. In the questionnaire, each touch gesture sequence was presented as a generic list, without any labeling with regards to similarity or emotional intent.  158  7.4. Discussion Participants were asked to rate the valence and arousal components for the miserable human touch gesture set, as well as noting difficulties in performing any gestures from the list. Their responses are summarized in Table 7.11. Of those participants that noted any difficulty with performing these gestures (22%), the majority of comments stated confusion in the difference between cradle and hold. In the exact same manner as the previous list, participants also rated the pleased human touch gesture set. Their responses are summarized in Table 7.12. Of those participants that noted any difficulty with performing these gestures (22%), many of the comments stated issues with the proper way to enact rock. The valence and arousal interpretation of the two human touch gesture sets demonstrate that, when presented with each list en masse, participants recorded that the miserable touch gestures seemed to convey relaxed — positive valence with low arousal — while the pleased set appeared to convey excited — positive valence with high arousal. Of note, the participants considered that the two sets differed solely by arousal: the frequency of calm and excited are flipped. The valence dimension having the greater relevance to our present study, the results imply that participants correctly recognized the valence for the pleased human touch gesture set but not the miserable one.  7.4  Discussion  The goal of the present study was to examine the influence on the human’s emotional state as a consequence of affective touch interaction with the robot. In this section, we discuss the results of our user study. We begin by reflecting on the overall design of the study. This is followed by an investigation of any demand characteristics that might have influenced the results. We then continue with a consideration of participants’ impressions of the Haptic Creature’s responsiveness to their touch gestures. We conclude by contrasting the difference in results between the study’s two emotion communicated factor levels.  159  7.4. Discussion  Table 7.11: Frequency of participant valence and arousal rating of miserable human touch gesture sequence. Dimension  Emotion Label  %  Valence  Unhappy Neutral Happy Do Not Know  3 25 69 3  Arousal  Calm Neutral Excited Do Not Know  56 31 10 3  Correct emotion label response highlighted in bold face.  Table 7.12: Frequency of participant valence and arousal rating of pleased human touch gesture sequence. Dimension  Emotion Label  %  Valence  Unhappy Neutral Happy Do Not Know  3 23 74 0  Arousal  Calm Neutral Excited Do Not Know  13 28 59 0  Correct emotion label response highlighted in bold face.  160  7.4. Discussion  7.4.1  Reflections on Study Design  The general approach for this study, as for our previous ones, was a controlled evaluation of affective touch. This approach has the great benefit of limiting confounding factors, thereby strengthening any causal relationships exhibited. However, a controlled evaluation also has the potential to render artificial the situation under investigation. 
Here, we reflect specifically on the environment of the user study and the experimental control of the affective touch interactions. Section 7.2.2 detailed the study setup, which was in an observation studio. This afforded a controlled environment that was quiet and free from distractions. Furthermore, based on feedback from pilot participants, we installed cushioned lounge seating to ensure comfort as well as roughly approximating sitting on a couch at home. That said, an observation studio is far from a familiar setting. Since we sought a large, diverse adult participant pool, it would have been both logistically prohibitive and methodologically problematic to situate the study in locations individually familiar to each participant. Several of the related socially interactive robotics studies discussed in Section 2.4.1 provided somewhat controlled environments but in more familiar settings by restricting the participant pool. For example, some studies with children took place in a controlled room nearby the students’ classroom [91] or, in cases of elderly populations, the studies took place directly in their care home [99, 177]. Many other studies, however, were conducted in observation studios not unlike our own — contrast these with those studies that monitored neural activity, whereby participants were in a hospital room attached by wires to medical devices [88, 110]. One distinguishing aspect of our study setup was the use of ear muffs to mask any extraneous sounds that may be generated by the robot. While participants rarely if ever complained about the ear muffs — in this study or previous ones — we readily admit that they are not natural and an unfortunate trade-off to minimize auditory artifacts in a study focused on touch. Worthy of greater scrutiny, however, is the study procedure for the affective touch interactions (Section 7.2.7). The set of human affective touch gestures and the manner in which they were performed — speed, intensity, duration — were 161  7.4. Discussion all specified for the participant. The robot’s active response was similarly timed (Section 7.2.4). The procedure, therefore, included no free-play. Considerable tuning occurred as a result of the pilot studies in order to ensure that the touch instructions were provided to participants in manner that did not distract greatly from the interaction. Timings for the interaction were similarly tuned to maximize the time a gesture was performed while minimizing fatigue. Participants generally did not have issue with performance instructions; however, a few repetitive gestures — e.g., pat — were sometimes noted as being awkward to perform for the duration. At a higher-level, the sum of the parts does not necessarily represent a wholly natural interaction between the human and the robot. That is to say, while care was taken in the individual steps — the selection of appropriate gestures, the way in which they were presented and enacted, the manner of the robot’s response — the overall direction of the participant throughout likely interfered with the interaction. This guidance was necessary to retain experimental control, allowing us to limit the factors affecting the participants emotional state. However, it also limits the naturalness of the interaction.  7.4.2  Effects of Demand Characteristics  As presented in Section 7.2.6, considerable effort was made to ensure participants were unlikely to infer the underlying purpose of the study and consequently adjust their behavior. 
We intentionally limited direct references to emotions throughout the study. Furthermore, we specifically tested for demand characteristics in our pilot study. Though care was taken, we further empirically validated our approach in two ways, both of which centered on the emotional content of the human touch gestures. While our intent was to remove as much outward discussion of emotion from the study, we anticipated that participants would, in some cases, make inferences. Our goal in avoiding demand characteristics, therefore, was to ensure that participants did not infer specific, intended emotions from the presented human touch gestures. That is to say, participants may believe that the touch gestures they performed convey emotions, but they should not know the specific emotions intended. 162  7.4. Discussion The discussion we present below supports our conclusion that demand characteristics likely did not play a significant role in participants’ emotional state changes. Predicted Emotional Response of Haptic Creature In the control condition, when the Haptic Creature was nonactive, participants predicted the Haptic Creature’s emotional response to their immediately preceding touch gesture performances. This response was requested in order to preserve symmetry between the control and treatment conditions: we did not want participants to ponder the presence (or absence) of this particular question after some gesture performances but not others. More importantly, though, responses provided a means to examine participant inference of specific emotional content for human touch gesture sequences during the course of the study. In our human affective touch study presented in the previous chapter, participants generally predicted the Haptic Creature to present a mirrored emotional response to their affective touch gestures (Table 6.7). Guided by this result, we expected participants in the current study to predict the robot’s response as a mirror of their assessment of the emotional content of the human touch gestures performed. That is to say, if a participant predicted the Haptic Creature would provide a negative valence response, then the participant had inferred negative valence content in the touch gestures. As noted in the results presented in Table 7.6, a near-majority (47%) of participants felt the miserable human touch gestures would result in a neutral valence response from the Haptic Creature. While this suggests the possibility of demand characteristics for this interaction, it should be noted that neutral does not match the intended valence content of the miserable human touch gesture set, which is negative valence. More importantly, however, there was no clear consensus for the robot’s valence response for the pleased human touch gesture set, which were used for the interaction that resulted in a statistically significant change in participants’ emotional state.  163  7.4. Discussion Post-Study Evaluation Two questions in the post-study survey probed for demand characteristics, the results of which were presented in Section 7.3.3. The first question asked participants about overall similarities among the human touch gesture sequences. Results demonstrated that participants did not generally detect the two distinct touch gesture sets, each repeated twice (Table 7.10). In the second question, participants rated the emotional content of each human touch gesture set. 
Participants did not correctly identify the negative valence component of the miserable human touch gestures (Table 7.11) but did infer positive valence in the pleased ones (Table 7.12). While this latter result initially may imply demand characteristics, when taken in context it becomes less so. In the questionnaire, each set of human touch gestures was presented in its entirety, so participants were unrestricted in considering the complete list. This, however, is not the same as their presentation during the study, which was one at a time in random order. Furthermore, as mentioned earlier in this section, when actually prompted during the study, participants showed no general agreement as to the specific emotional content of the pleased gestures. Finally, though unrelated to our preceding validations, it is worth noting that informal debriefings with the facilitator uncovered that a number of participants thought the random shapes sequence (displayed during the rest break) might be intended to influence their behavior. When the facilitator prompted these participants to provide any meanings they inferred from the display, none could be provided. This further justifies our use of an externally validated video technique: had the video sequence not been previously validated [62], there would be potential confounds for improperly altering the participant’s emotional state.  7.4.3  Middling Responsiveness Impression  As noted in the results of the robot responsiveness scales recorded in the post-study questionnaire (Section 7.3.3), the scores averaged near the midpoint for both subscales. There was a slightly favorable sense that the Haptic Creature was generally responding to the touch interaction, whereas a slightly less favorable sense that it selectively understood the various human touch gestures. Also of note, however, 164  7.4. Discussion is that the ranges in scores for each subscale included rankings at or close to their respective boundaries. Clearly, some participants felt the Haptic Creature was extremely responsive yet others felt the robot was not responsive at all. In preparation for the formal user study, we conducted several small pilot sessions, of which one aspect concentrated on development and timing of the active emotional responses from the Haptic Creature (Sections 7.1 and 7.2.4). Piloting results indicated participants perceived the robot was responding to the their touch, so our expectation was similar for participants of the formal study. Upon reflection, the rigor of the formal procedure may have interfered somewhat with participants’ ability to recognize changes in expression from the Haptic Creature. The majority of the pilot sessions were considerably more casual. Though the human touch gesture sets, timings for their performance, and robot emotional responses were generally common between the piloting and formal study, the less automated nature of the former may have allowed participants more flexibility to ponder the Haptic Creature’s expressions. (Cf. the next section, which discusses a potential lack of discernability between neutral and miserable that may also have impacted the sense of the robot’s responsiveness.) Furthermore, the somewhat noncommittal scores may have been a result, not of the Haptic Creature’s behavior, but, rather, of participants’ performance of the touch gestures. 
In our first interaction decomposition study (Chapter 5), participants perceived the Haptic Creature's emotional state without actively touching it: they simply rested their hands on the robot, which allowed for constant contact with the Haptic Creature. In the current study, however, the participants' movements required them to repeatedly break contact and occasionally touch nonactive locations of the robot. While the Haptic Creature's overall emotional state may be inferred throughout the touch gesture interaction, the subtleties of its transitions from neutral may have been less recognizable as a result of the participants' active touch.

Though we had hoped participants would have had a stronger sense that the robot was responsive, the results would have been more problematic had they been the opposite, where the robot did not seem in any way to be responding to the human touch gestures.

7.4.4  Differences between Factor Levels

Here we reflect on the differing results in changes to participant emotional state between our two emotion communicated factor levels. In particular, there was a statistically significant shift to positive valence when communicating pleased, but no statistically significant change for miserable. We are greatly encouraged to have empirically demonstrated a change in the human's emotional state as a result of affective touch interaction with the robot. Nonetheless, much can be learned through a further investigation into the success of one instance over the other. To that end, we consider those elements that differed between the two factor levels: the human touch gestures performed and the emotional response of the robot.

Human Affective Touch Gestures

We begin with an examination of the human touch gesture performances. The cardinality of each gesture set (six) was equal. Similarly, the timing when performing a particular gesture (48 seconds) was the same for all. The human touch gestures themselves, however, clearly differed between sets. As noted in Section 7.2.3, generally the instructed speed was slower and the instructed pressure was lighter for the miserable human touch gestures than for the pleased ones. Furthermore, the composition of the two sets differed: contact without movement, hold, and cradle were present only in miserable; tickle, pat, and rock were present only in pleased; while hug, stroke, and rub were present in both sets but with differing speed and pressure profiles.

Given that both sets were derived from the results of our second study (Chapter 6), we have a degree of confidence in their general formulation. Moreover, from the post-study questionnaire responses, participants did not express difficulty in performing one set versus the other. However, latent influences may potentially exist as a result of the discrepancies between these two human touch gesture sets. To effectively control for the influence of the performed human touch gestures, our present study would have had to use the same set for both factor levels. This procedure, however, would have rendered the full affective touch interaction loop inconsistent. That is to say, we required that the emotion communicated from the human (to the robot) be the same as the emotion communicated from the robot (to the human).
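To make this constraint concrete, the two stimulus sequences might be encoded as follows. This is a sketch only: the gesture names follow the sets described above, but the speed and pressure descriptors are simplified placeholders rather than the exact per-gesture instructions given to participants (Tables 7.2 and 7.3).

    # Sketch of the two human touch gesture sets used as study stimuli.
    GESTURE_DURATION_S = 48  # time allotted to each gesture performance

    MISERABLE_SET = [  # generally slower, lighter touch
        {"gesture": "contact without movement", "speed": "none", "pressure": "light"},
        {"gesture": "hold",   "speed": "none", "pressure": "light"},
        {"gesture": "cradle", "speed": "none", "pressure": "light"},
        {"gesture": "hug",    "speed": "slow", "pressure": "light"},
        {"gesture": "stroke", "speed": "slow", "pressure": "light"},
        {"gesture": "rub",    "speed": "slow", "pressure": "light"},
    ]

    PLEASED_SET = [  # generally quicker, firmer touch
        {"gesture": "tickle", "speed": "moderate", "pressure": "moderate"},
        {"gesture": "pat",    "speed": "moderate", "pressure": "moderate"},
        {"gesture": "rock",   "speed": "moderate", "pressure": "moderate"},
        {"gesture": "hug",    "speed": "faster",   "pressure": "firmer"},
        {"gesture": "stroke", "speed": "faster",   "pressure": "firmer"},
        {"gesture": "rub",    "speed": "faster",   "pressure": "firmer"},
    ]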
Robot Emotional Response

A second influence that may have contributed to the difference in results between the two factor levels is the response of the robot, both in the rendering parameters and in the participants' perception thereof.

As noted in the results presented in our first interaction decomposition study (Section 5.3.1), the Haptic Creature was successful when it communicated pleased (44%), whereas miserable (2%) was arguably the robot's least recognized emotion. Based on those results, we modified the rendering parameters for this study, with the goal of increasing the robot's overall effectiveness in emotion expression.

Referring back to the rendering parameters presented in Table 7.1, the more pronounced differences between the two emotional expressions were in breaths per minute and purring: for miserable, there was no purring and the breathing rate (49.2 bpm) was 38% faster than that of pleased (35.6 bpm), which did present a purr.

With these updated parameters, less than 20% of participants were able to recognize the negative valence of miserable, while, in contrast, nearly 80% of participants correctly recognized the positive valence of the pleased response. Moreover, there was no perception whatsoever of negative valence for pleased: any incorrect recognition was that of neutral valence. Our modified actuator rendering parameters, consequently, were successful in representing positive but not negative valence. A likely possibility therefore exists that the participant's emotional response was impacted by the robot's ability to clearly communicate — or, conversely, the participant's ability to perceive — the intended valence state.

A separate consideration of the robot's response is the affect display transitions. Intended to impart a sense that the robot was responding to the participant's touch, the Haptic Creature modulated its emotional state between neutral and the emotion corresponding to the current factor level (Section 7.2.4). The breathing rate for neutral (42.3 bpm) is equidistant between the two communicated emotions: 19% slower than miserable and 19% faster than pleased.

While possible that this transition in breathing rates — slower to faster versus faster to slower — impacted the participant's emotional response differently for each factor level, we suggest that purring may have been a more substantial factor. In particular, the presence of a purr in pleased provided a more salient transition from neutral. Both in piloting as well as the familiarization session of the formal study, participants expressed occasional difficulty in differentiating the neutral emotion from miserable. On the other hand, pleased was more clearly separated when contrasted with either of the other two expressions. The 19% difference in breathing rate between neutral and the other two emotions is arguably less perceptible than the 38% difference between miserable and pleased. Moreover, the presence of purring for positive valence helped set apart pleased from neutral. Conversely, miserable was potentially too similar in rendering to neutral. Therefore, purring likely contributed to pleased being more correctly recognized compared with miserable. Furthermore, this difference in recognition may have contributed to the difference in results between emotion communicated factor levels.

The potential similarity between neutral and miserable may also have contributed to participants' impressions of the Haptic Creature's responsiveness.
That is to say, a lack of discernability between the two expression would give little indication of a transition, thereby diminishing the sense that the robot responded directly to human touch gestures for the miserable factor level. Here we propose two alternate approaches in an attempt to control for the influence of the robot’s emotional response. One approach would be to transition not from neutral but, rather, from an appropriate low arousal expression to the current emotion communicated factor-level emotion. One example would be to transition from sleepy — the low arousal equivalent of neutral — regardless of the factor level. A second example would be to transition from the low arousal emotion corresponding to the current factor level: depressed for miserable, relaxed for pleased. As we discussed earlier when attempting to control for influences of the human touch gestures, this manner of change would alter the intent of the study. The present study manipulates the valence of the interaction but not the arousal dimen168  7.5. Summary sion. If the Haptic Creature were to transition between low arousal and medium, the impression of responsiveness may increase, but the dynamics of the interaction will have also been changed beyond its original intent. A more useful approach, however, would be to further strengthen the recognizability of the miserable emotion expression while, at the same time, its ability to be distinguished from neutral. Our improvements to pleased were successful in that no negative valence was inferred; however, miserable still has much room for improvement to clearly convey its intended valence.  7.5  Summary  In this chapter, we presented our third and final interaction decomposition study. Built upon the results of our two previous studies (Chapters 5 and 6), we investigated the influence on the human’s emotional state as a consequence of affective touch communication with the robot. We also documented an update to the Haptic Creature’s affect display from the original design presented in Section 5.1, in an attempt to increase the recognition of the expression used in our study. Overall results demonstrated a statistically significant positive shift in the human’s valence when the two-way interaction communicated pleased, but not when miserable was communicated. We also reported that participants had an average sense of the Haptic Creature’s responsiveness to their touch: there was a slightly favorable sense that the robot was responding but a slightly less favorable sense that it selectively understood the various human touch gestures. We suggested two explanations for the differences in results between the two emotions communicated. One possibility was the differences between the human touch gesture sequences performed. A second possibility was in the Haptic Creature’s emotional responses, particularly participants’ discernability of the robot’s general responsiveness as well as its ability to convey negative valence. The broader implications of this work will be discussed in Section 8.1.4. We conclude this dissertation in the next chapter (8), where we revisit our research contributions as well as discuss future directions for our work.  169  Chapter 8  Conclusion We opened this dissertation with a scenario that highlighted the interactions investigated in our thesis (Section 1.1). Stella and her furry companion, Roi, communicated with each other through touch. 
Through these touch interactions, each was able to sense the emotional state of the other and, in some cases, the emotion of the perceiver was altered. As noted in our discussion of related work, while touch is a unique modality, it has received limited research interest in psychology relative to vision and audition, and this relegation is particularly acute in the investigation of emotion communication. Similarly, affect display research in the field of socially interactive robotics has paralleled this approach in psychology. The overall goal of our thesis was to investigate the role of affective touch in the social interaction between human and robot. In particular, our research examined the display, recognition, and emotional influence of this form of touch. We began with the development of the Haptic Creature robot. Then, we decomposed the overall affective touch interaction into its constituent parts, which guided the development of three user studies. The first investigated the manner and success of the Haptic Creature expressing its emotional state through touch. Our second study examined affective touch originating from the human. Our final study incorporated the results from the first two to explore the emotional influence of affective touch on the human. We begin this chapter by reflecting upon the various research contributions presented in this dissertation. Then, in Section 8.2, we critique the approach that guided our research. This is followed by Section 8.3, which introduces considerations in the design of affective touch interactions. In Section 8.4, we discuss directions for future research. This dissertation then concludes with some final thoughts. 170  8.1. Research Contributions  8.1  Research Contributions  In this section, we review the research contributions claimed in this thesis.  8.1.1  Platform for the Study of Affective Touch  Chapter 4 presented the Haptic Creature robot, which we developed for use in our research. As discussed in Section 2.4, there currently exists a small set of zoomorphic social robots that, to varying degrees, utilize touch and emotion as part of the interaction with human. However, as we later noted (Section 2.4.2), none have such a singular focus on the integration of touch and emotion as our Haptic Creature. While the other robots have touch sensing capabilities, all are augmented with visual and auditory sensing as well. Moreover, all use visual and auditory means for emotion expression from the robot, and only a few include touch. In contrast, the Haptic Creature is the only robot to rely solely on the touch modality for both sensing and affect display. The Haptic Creature is unique in its use of ear stiffness, modulated breathing, and vibrotactile purring to outwardly express its emotional state through touch. Furthermore, the Haptic Creature also has a more minimalist shape than related robots. Though animal-like, the Haptic Creature’s features are less defined as others, thereby increasing the concentration on the underlying interaction by diminishing the focus on the form. The Haptic Creature was advanced iteratively, with results from pilot and formal studies being fed back into its development. Therefore, the platform served not only as a means to study affective touch in social human-robot interaction but also a testbed for the application of knowledge gained from these studies. Furthermore, none of the aforementioned related robots have been so extensively utilized in affective touch interaction studies.  
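To make the platform's touch-only display concrete, the sketch below shows one way its three display channels could be parameterized from a valence and arousal state (each normalized here to [-1, 1]). This is illustrative Python, not the Haptic Creature's actual implementation; the numeric constants merely echo the breathing rates reported for the final study, and the mapping anticipates the generalizations summarized in the following sections.

    # Illustrative sketch (not the robot's implementation): parameterizing the
    # ear, breathing, and purr channels from a (valence, arousal) state.
    from dataclasses import dataclass

    @dataclass
    class DisplayState:
        ear_stiffness: float    # 0 (soft) .. 1 (stiff); increases with arousal
        breath_rate_bpm: float  # faster with arousal; slightly faster when negative
        breath_depth: float     # 0 (shallow) .. 1 (deep); deeper when positive
        purr_amplitude: float   # 0 (off) .. 1; present for positive valence

    def display_for(valence: float, arousal: float) -> DisplayState:
        # roughly 42 bpm neutral, ~20 bpm per arousal step, ~6 bpm per valence step
        return DisplayState(
            ear_stiffness=0.5 + 0.5 * arousal,
            breath_rate_bpm=42.0 + 20.0 * arousal - 6.0 * valence,
            breath_depth=0.5 + 0.5 * valence,
            purr_amplitude=max(0.0, valence),
        )

    print(display_for(valence=1.0, arousal=0.0))   # a pleased-like rendering
    print(display_for(valence=-1.0, arousal=0.0))  # a miserable-like rendering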
8.1.2  Affective Touch Originating from the Robot

In Chapter 5, we discussed the design of the Haptic Creature's display of affective touch. We began with animal models, then modified these through successive informal user tests. After multiple iterations on the design, we conducted a formal user study to evaluate its effectiveness. This was the first study of its kind to explore affective touch originating from the robot.

Overall results demonstrated that the configuration of the robot's affect display system was more successful at communicating arousal as compared with valence. Increased ear stiffness and increased breathing rate were correctly recognized as increased arousal. The robot's vibrotactile purr was successful at communicating positive valence, but it had the unintended effect of conveying negative valence when its intensity was increased.

Though not intended in the tested configuration, there was an indication that breathing rate could also convey valence; specifically, faster rates implied negative valence, whereas slower rates implied positive valence. Similarly, there was also an indication that depth of breathing could communicate valence, where shallower breathing implies negative valence and deep breathing implies positive valence.

In addition, as part of our study of human-initiated affective touch (Chapter 6), we found that participants frequently expected the Haptic Creature to respond with a similar emotion to the one that they were conveying. This has important implications for the development of believable affective touch interactions between the human and robot.

These results guided the reconfiguration of the Haptic Creature's affect display prior to its use in our study of the influence of affective touch interaction (Chapter 7). In Section 8.3.2, we provide generalizations for robot affective touch gestures from the results presented here.

8.1.3  Affective Touch Originating from the Human

In Chapter 6, we presented a study of human affective touch directed towards the Haptic Creature, which was imagined by participants as a close pet. From research in psychology and human-animal interaction, we compiled a dictionary of probable touch gestures. Participants rated the likelihood of using these gestures for communicating a variety of emotions. For each emotion, participants then performed likely gestures for the Haptic Creature.

The outcomes of this study have broad implications for socially interactive robotics when touch, especially affective touch, is part of the interaction. Here, we present a brief overview of these contributions. In Section 8.3.4, we then expand upon them in consideration of the recognition of human affective touch.

Our first contribution was the compilation of a set of plausible touch gestures for interacting with the robot (Table 6.1). From this touch dictionary, we were able to determine the overall ranking of gestures likely to convey emotions in human-initiated touch (Table 6.3). With this same data, we were also able to partition the likely gestures based on the emotion communicated (Figure 6.4). Next, by observing participant performances of likely gestures, we were then able to extract features of human affective touch (Section 6.3.2) — points of contact between the human and robot; duration and pressure intensity of touch.
Finally, by comparing these views, we were able to construct an understanding of the human’s higher-level intent through affective touch (Section 6.4.3).  8.1.4  Affective Touch Interactions Influence on the Human  In Chapter 7, we combined knowledge from the two preceding studies to explore the full affective touch interaction loop between the human and robot. In particular, we investigated the influence of the human’s emotional state — specifically, valence — from this interaction. Participants performed predetermined sequences of affective touch gestures for the Haptic Creature. They then reported any changes to their emotional state that resulted from both nonactive and simulated active responses from the robot. In all cases, the Haptic Creature’s active emotional response mirrored that of the participants’ intended emotion. By a comparison between nonactive (control) and active (treatment) robot responses — when human touch gesture sequences were identical — changes in participant emotions therefore can be attributed to the robot’s reaction. The results of our study empirically demonstrated a statistically significant positive shift in valence for the human when the two-way interaction was pleased. However, a statistically significant change was not observed when the emotion 173  8.1. Research Contributions communicated between the human and robot was miserable, which we believe may be the result of the differences in the touch gestures employed as well as the emotional responses presented by the robot. While research has been conducted on the influence of social touch (Section 2.2.1) — e.g., intimacy, bonding, compliance — there has been little focus on its emotional influence, and none within socially interactive robotics. The broader implications of our results from this study fall into three categories. First, the general fact that a change in emotional state was observed is encouraging. The greater influence of an active versus nonactive robot substantiates the importance of both parties, the human and the robot, in the interaction. Second, the lack of an observed emotional change in the human for the miserable interaction, though unexpected, sheds light on possible features necessary for rich interactions. One explanation we have for the lack of change in the human is the possibility of a lack of a perceived change in the robot. That is to say, the robot’s change in response to affective touch from the human may not have been as noticeable for miserable as it was for pleased. Therefore, following from our first implication in the preceding paragraph, it is not enough to simply have a active robot but one that is perceived as responsive — and assumed to be responding appropriately. Finally, there now exists the possibility to directly influence the human’s emotional state as a result of affective touch interactions with the robot. In general, this requires care be taken with the robot when designing for the interaction so that the human is not inadvertently influenced. More importantly, though, if coupled with the knowledge gained from our second study, which aids in an understanding of the human’s emotional state and higher-level intent, the robot’s behavior can be properly designed to intentionally influence the human’s emotional state. This, for example, has applications to therapy (see Section 8.4.6).  174  8.2. 
Reflections on Research Approach  8.2  Reflections on Research Approach  In this section, we review various aspects of the research approach that guided the development of our thesis and consider the strengths and possible limitations thereof.  8.2.1  Human-Animal Interaction  In Section 2.2.4, we introduced methodological issues inherent in the study of human social touch. As a means to mitigate these issues, we then went on in Section 2.3 to consider the interaction between humans and companion animals. This approach overall was beneficial to our thesis; however, several alternate considerations must be mentioned. However, before presenting these, it is worth noting that much of the affective touch research in social human-robot interaction has so far gravitated towards zoomorphic robots (Section 2.4). Though not explicitly stated, our work began with an assumption that issues with human-to-human social touch are similarly manifest in touch between a human and a humanoid robot. As one example, the gender of the human and the robot could be a confounding factor. While we generally believe a humanoid robot would pose similar issues, we did not actually investigate this directly. Therefore, it is possible that many of the methodological issues of human to human touch research are not necessarily as problematic with a humanoid robot. Following from this consideration is the ability to generalize our results, which utilized a zoomorphic robot, to that of a humanoid robot or, further, to non-robotic technologies. The size, weight, shape, and passive feel of the Haptic Creature directly influenced the manner of the interactions. In addition, the context of the Haptic Creature, that of a companion animal, also was an influence. In terms of the robot’s morphology, our hypothesis is that interactions with much smaller or much larger zoomorphic robots would change more in manner and less in kind. That is to say, the human’s points of contact, speed of movement, and pressure intensity for touch would change in accordance with the robot’s size, but the set of likely affective touch gestures would remain relatively similar. For example, stroke is a highly likely gesture for expressing pleased, which we believe would still be employed regardless if the robot’s size was much smaller, like a 175  8.2. Reflections on Research Approach mouse, or much larger, like a horse. However, for the mouse-sized robot, the speed might be quicker and the intensity much lighter than when interacting with a horsesized robot. Furthermore, the points of contact would differ in that the human might use a fingertip on the smaller robot, while a full hand could be expected to stroke the larger robot. We further hypothesize that these generalizations will likely hold for humanoid and even non-robotic technologies. One area for differences, however, is in the Haptic Creature’s affect display. While we are able to generalize parameters of our robot’s effective behaviors, its emotional expression is limited by its present actuation mechanisms. Most notably, the Haptic Creature’s difficulty in conveying negative valence could be less a result of poorly configured actuation parameters and due more to the need for an appropriate actuator. Furthermore, while the breathing affords the Haptic Creature the ability to push against or pull away from the human, the robot is otherwise restricted in its inability to move in relation to the human. Potential differences in results also are likely to emerge in the context of the interaction. 
A human’s relationship with another human companion bears some similarities to that with an animal companion, but the relationship also differs. Both, for example, offer means of social support, but in differing ways: humans consider human companions more for instrumental aid and intimacy, while pets are viewed more as reliable and in need of nurturance [147]. Therefore, interactions between a human and humanoid robot companion may bear some similarity between interactions with a zoomorphic robot companion, but it may be more difficult to construct a scenario where the human views the zoomorphic robot as anything other than a pet. That is to say, a humanoid robot could much more easily be presented in a scenario as a peer or even adversary.  8.2.2  Duration of Emotional Interaction  In Section 2.1, we differentiated emotion, which is viewed as short-lived, from mood, which is considered over a much longer period of time. This brief temporal nature of emotion is reflected in the various user studies conducted as part of this thesis.  176  8.2. Reflections on Research Approach In the robot affect display study (Chapter 5), the participant assessed the Haptic Creature’s emotional state in a matter of seconds. In the human affect display study (Chapter 6), each affective touch gesture the participant performed for the Haptic Creature was often very brief. And in the influence of affective touch user study (Chapter 7), the sequence of affective touch interactions between the participant and the Haptic Creature lasted only a few minutes. This approach generally parallels related emotion research in psychology. While time frames (unfortunately) are often not explicitly presented as part of the experimental procedures, seminal research on facial expression recognition implies that the participant made judgments after viewing the stimuli for very brief periods (e.g., [38, 79, 172]). Similar short stimuli presentation can also be inferred in the affective touch studies of Hertenstein et al. (e.g., [74, 75]). Furthermore, much of the research on the influence of social touch presented in Section 2.2.1 investigated very brief touch stimuli. While these help to justify our use of short periods of interaction, an argument could be made for increasing the interval with which the human interacted with the Haptic Creature. The nature of the studies would change if the interactions went from less than a minute to over an hour; however, increasing each interaction to several minutes likely would not affect the current study goals. Specific to our research is the nature of the interaction. The manner in which the Haptic Creature displays its emotional state was grounded in animal models and iteratively improved through both informal and formal user studies. That said, the Haptic Creature represents no specific animal and is, in fact, a robot. We felt it important to always recruit participants that had no familiarity with the Haptic Creature, but the newness of the interactions might have contributed to lower than anticipated emotion recognition scoring. If the participant was allowed longer durations of emotional interaction, then recognition potentially could be improved. As an example, the time it takes a participant to consider the Haptic Creature’s current emotional expression — the stiffness of the ears, the manner of breathing, the properties of the purr — while making comparisons to the robot’s previous expressions might be better served with longer periods of interaction.  177  8.2. 
Reflections on Research Approach  8.2.3  Three-Dimensional Models of Affect  In Section 2.1.1, we introduced the dimensional models of affect and focused on the subset of theories that consider emotions to be constructed specifically of two bipolar dimensions. It is worth noting, however, that three-dimensional models also exist. Prior to Russell’s two-dimensional circumplex [130], he and Mehrabian [135] proposed a model composed of pleasure, arousal, and dominance. Plutchik [125] also developed a three-dimensional circumplex model whereby placement on the circle represents degrees of similarity among the emotions, and the third axis represents their level of intensity. As discussed in Section 4.3.3, we designed the Haptic Creature’s emotion model in accordance with the two-dimensional, bipolar affect space adapted from Russell [130, 136, 192]. Utilizing only valence and arousal, Russell’s model does not consider the third dimension, dominance, as introduced in his earlier work with Mehrabian (mentioned above). Furthermore, the SAM scales by Bradley and Lang, which were used in each of our interaction decomposition studies, have versions that include measurement of dominance [14]. In our research, we chose to emphasize basic emotions inherent in two-dimensional models of affect, while also following precedent for this approach that already existed within socially interactive robotics — e.g., [18, 138, 157]. It is possible, though, that the consideration of this additional dimension could have enhanced the Haptic Creature’s ability to express its emotional state. The dominance dimension represents the emotion’s controlling aspect. For example, it differentiates sleepy (submissive) from comfortable (dominant) or distressed (submissive) from belligerent (dominant). Dominance is considered the weakest of the three dimensions in that it is highly correlated with pleasure and, to a lesser degree, arousal; on the other hand, pleasure and arousal, the two dimensions we utilize in this thesis, show very little correlation [135]. Bradley and Lang [15], however, note that dominance’s high correlation with the pleasure dimension is manifest in responses to symbolic stimuli (e.g., photos), and they speculate that dominance may be more apropos in assessments  178  8.3. Considerations in Designing for Affective Touch of social interaction. Given that social interaction is a major focus of our thesis, the inclusion of dominance therefore would be a relevant consideration.  8.2.4  Embodiment of Emotion  When introducing the face as the primary means of human affect display in Section 2.1.2, we noted that the vast majority of studies in emotion stem from this work. This is true even for emotion research focused on gestural or haptic behaviors — e.g., the affective touch studies by Hertenstein et al. discussed in Section 2.2.2. Our approach, therefore, borrowed frequently from this body of work. Much of this research, however, often does not account for more recent theories concerning embodied cognition, where “cognitive processes are deeply rooted in the body’s interactions with the world.” [189]. Of particular interest would be theories of embodied emotion, which examine the mutual relationship between the physical mannerism of affect display and the perception of emotion [119]. While these theories can be applied to affect display through facial expressions, the embodiment of emotion seems all the more relevant to affective touch given the body’s pronounced physical interaction.  
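To ground the preceding reflections, the two-dimensional affect space used throughout this thesis can be written down compactly, along with the place a dominance coordinate would occupy. The sketch below is a simplification of Figure 4.8, with valence and arousal reduced to the values {-1, 0, +1}; the coordinates and the dominance example are illustrative only.

    # Sketch of the nine-label affect space (valence, arousal), simplified to
    # integer coordinates; Figure 4.8 gives the actual layout.
    AFFECT_SPACE = {
        "distressed": (-1, +1), "aroused": (0, +1), "excited": (+1, +1),
        "miserable":  (-1,  0), "neutral": (0,  0), "pleased": (+1,  0),
        "depressed":  (-1, -1), "sleepy":  (0, -1), "relaxed": (+1, -1),
    }

    def valence_category(label):
        """Collapse a label to the negative/neutral/positive grouping used
        when analyzing participants' perceived responses."""
        v, _ = AFFECT_SPACE[label]
        return {-1: "negative", 0: "neutral", +1: "positive"}[v]

    # A pleasure-arousal-dominance model would extend each entry to a triple,
    # e.g., "distressed": (-1, +1, -1), marking it as submissive.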
8.3  Considerations in Designing for Affective Touch

In the previous two sections, we summarized the research contributions of our thesis as well as critiqued our overall approach. Mindful of these discussions, we wish to consolidate and generalize the research as a whole. We provide here considerations in the design of affective touch interactions. These are based on outcomes from our controlled user studies as well as countless informal observations and general lessons learned over the many years the research was conducted. While our thesis was clearly focused upon socially interactive robotics, we feel that many of the considerations may be extensible to humans interacting with technologies other than robots.

8.3.1  Interaction Context and Robot Morphology

The circumstances under which the interaction takes place and the robot's physical form are important factors to consider when designing for affective touch. Both have an impact on the other design considerations we present.

The interaction context we chose for our research was that between a human and a companion animal, and the Haptic Creature's look and feel facilitated this relationship. As we discussed in Section 6.4.2, these two properties appeared to influence the human's preference for affectionate touch gestures over aggressive ones as well as the manner in which these gestures expressed emotion. If, for example, the context was that of a caregiver and an infant, a parent and a child, lovers, adversaries, or peers, then the robot's morphology should accord with the relationship. In turn, the context and form dictate the acceptable affective touch interactions. This consideration can be seen reflected in our coverage of human social touch research (Section 2.2.1); the frequency, manner, location, acceptability, and influence of touch were often dependent upon factors such as the relationship, age, gender, and familiarity of those interacting.

8.3.2  Robot Affective Touch Gestures

While it is unlikely that most robots will possess actuation mechanisms similar to the Haptic Creature's, the results from our configuration and testing of the robot's affect display could be extended to other robots (cf. Sections 5.1, 5.4, and 7.1). These generalizations roughly fall into three categories: stiffness, vibrotactile feedback, and modulated force feedback.

As shown with the Haptic Creature's ears, stiffness in any robot effector could be a useful means of conveying arousal. Increased stiffness could imply higher arousal, while decreased stiffness could convey lower arousal.

The Haptic Creature's purr box demonstrated that vibrotactile feedback could be used as a means to convey valence and, to a lesser degree, arousal. The presence of a vibration could imply positive valence, while an increase in its frequency could correspond to an increase in arousal — e.g., differentiating pleased from excited. Furthermore, results from our first study strongly indicated that a significant increase in amplitude, thereby connoting shaking or shivering, could convey negative valence (Section 5.4.5).

The Haptic Creature's breathing presented a limited form of repetitious, modulated force feedback. Four parameters controlled the breathing: rate, volume, bias, and rest. Generally speaking, the rate corresponds to the frequency of the repetitious movement, and the volume corresponds to the amplitude of the force.
Bias and rest, in turn, can be used to introduce an aberration by modifying the symmetry of a repetition. Bias controls precedence for the force pushing out in relation to the force pulling back, while the rest parameter controls pauses independently at either end of the repetition.

The frequency of the movement can be used to convey both arousal and, to a lesser extent, valence. A higher frequency could imply higher arousal states, while it also can convey greater negative valence. This is accomplished by having the magnitude of the arousal axis be larger than that of the valence axis. For example, in our configuration for the final study (Table 7.1), the difference between two arousal levels was ~20 bpm — e.g., relaxed was approximately 20 bpm slower than the higher arousal pleased — whereas the difference between two valence levels was ~6 bpm — e.g., relaxed was approximately 6 bpm slower than the less positive valence sleepy. The amplitude of the force could be used to convey valence. Increased amplitudes could imply more positive valence states, while decreased amplitudes could convey more negative valence. The symmetry of the repetitious movement was less salient in our work. In both studies, we used the bias parameter to convey valence; the rest parameter was never utilized. Our configuration for the final study used symmetric breathing to connote neutral valence, with faster exhale (force pushing out) signifying negative valence and, conversely, faster inhale (force pulling back) representing positive valence (see Figure 7.2).

As we noted in Section 8.2.1, the Haptic Creature lacked the ability to move its entire body in relation to the human. While this limited aspects of its emotional expression, it does bear some similarity to other consumer technology. With the exception of certain game controllers — e.g., joysticks, driving simulators — force feedback is an uncommon and often limited feature in many current consumer technologies, and it is rarer still for the device to move in relation to the user. The generalizations from the Haptic Creature's breathing, therefore, may also apply to other types of devices that provide more limited forms of force feedback.
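To make these generalizations concrete, the sketch below gathers the stiffness, vibrotactile, and breathing mappings into a single function. It is only an illustration under stated assumptions: valence and arousal are taken to be normalized to the range [-1, 1], and every numeric constant (the stiffness range, the purr frequencies, the 60 bpm breathing baseline) is a hypothetical placeholder rather than the Haptic Creature's actual configuration.

```python
from dataclasses import dataclass

@dataclass
class ActuatorCommand:
    ear_stiffness: float   # 0 (limp) to 1 (rigid)
    purr_frequency: float  # Hz; 0 disables the purr
    purr_amplitude: float  # 0 to 1; large values connote shivering
    breath_rate: float     # breaths per minute
    breath_volume: float   # 0 to 1 amplitude of the breathing stroke
    breath_bias: float     # < 0 faster exhale, > 0 faster inhale, 0 symmetric

def render_emotion(valence: float, arousal: float) -> ActuatorCommand:
    """Map a (valence, arousal) emotional state to touch-actuator settings."""
    # Stiffness tracks arousal: higher arousal yields stiffer ears.
    ear_stiffness = 0.5 + 0.5 * arousal

    # A present, moderate vibration implies positive valence, with frequency
    # rising alongside arousal (e.g., pleased versus excited); a pronounced
    # amplitude, connoting shivering, conveys negative valence.
    if valence >= 0:
        purr_frequency = 20.0 + 10.0 * max(arousal, 0.0)
        purr_amplitude = 0.3
    else:
        purr_frequency = 15.0
        purr_amplitude = 0.8

    # Breathing rate rises with arousal and, more weakly, with negative
    # valence; the arousal term carries the larger magnitude (cf. the
    # ~20 bpm versus ~6 bpm differences noted above).
    breath_rate = 60.0 + 20.0 * arousal - 6.0 * valence
    # Larger breathing amplitude for more positive valence.
    breath_volume = 0.5 + 0.4 * valence
    # Faster inhale for positive valence, faster exhale for negative.
    breath_bias = 0.5 * valence

    return ActuatorCommand(ear_stiffness, purr_frequency, purr_amplitude,
                           breath_rate, breath_volume, breath_bias)
```

The relative weighting of the two breathing-rate terms follows the pattern described above, in which the arousal axis is given a larger magnitude than the valence axis.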
8.3.3  Robot Response

As we stated in Section 8.1.4, care should be taken with the robot when designing for the interaction so that the human is not inadvertently influenced. Our final study demonstrated a change in the human's emotional state as a result of the full affective touch interaction loop. A key factor in this study was the emotional response of the robot. There is, therefore, significant responsibility in how the robot responds to affective touch interactions. Here, we present three properties that should be considered: appropriateness, discernibility, and latency.

Following from our consideration of the interaction context mentioned above, the manner in which the robot responds must be appropriate for the affective touch interactions. Our research results demonstrated that the human generally expects the robot to respond with an emotional state that mirrors his or her own (Section 6.4.4). For example, if the human was pleased, and expressed this to the robot, then the appropriate robot response would also be pleased. This form of response, however, was within the context we had defined: the interaction between a human and a companion robot. If the goal was to modify the human's emotional state, the appropriate response likely would be different. For example, if the goal is to decrease the arousal of an excited human, then an appropriate response from the robot might be to show it is relaxed — both are positively valenced, but the latter is the low-arousal equivalent.

An appropriate response from the robot is only useful, however, if it is also discernible. The results from our final study suggested that the human may not have been able to adequately discern the robot's transition to its miserable state (Section 7.4.4); it too closely resembled the preceding neutral emotion rendering.

An additional observation from our final study deals with the latency of the robot's response: the suitable timing whereby the human does not sense an undue lag between when an affective touch gesture was performed and when the robot renders a response. While our research did not focus on providing specific timing guidelines, we did spend considerable effort examining acceptable latency in the development of our final study. The full affective touch interaction loop (Figure 1.2) is far from instantaneous. The expression of an emotion through touch — by either the human or the robot — often can take at least a few seconds, particularly for gestures requiring repetition. There then follows an appraisal period by the recipient, and a subsequent affective touch response. This, again, can take at least a few seconds. What we found when piloting our final study was a fair amount of leniency on the part of the human after initiating an affective touch gesture. The rule of thumb we eventually adopted was that the robot's response needed to appear within an approximate window of 3–8 seconds after the human initiated touch. The robot was viewed as unnatural if it responded instantaneously (near the first point of contact), while it was viewed as unresponsive when it took considerable time to render a change in emotional state. If, however, the response was discernible within this rough time window, then the interaction generally seemed natural to the human. As a rule of thumb, we freely admit that this 3–8 second timing is likely not ideal across all interaction contexts. Nonetheless, the observation has two practical implications: the robot's response need be neither immediate nor precise. There is ample computing time — on the order of several seconds — with which the robot can reason about the human's touch gesture. Similarly, there is also an ample window within which the robot can initiate its response.
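A compact way to summarize these three properties is as a small response policy: choose an emotion to render, mirroring by default, and schedule it inside the latency window. The sketch below is a rough illustration only; the mirroring rule reflects the expectation reported in Section 6.4.4, the window bounds simply restate the 3–8 second rule of thumb, the "calm" goal loosely echoes the excited-to-relaxed example above, and the normalized valence/arousal representation is an assumption.

```python
import random

RESPONSE_WINDOW_S = (3.0, 8.0)  # rule-of-thumb bounds from our piloting

def choose_response(human_valence: float, human_arousal: float,
                    goal: str = "mirror") -> tuple[float, float]:
    """Pick the (valence, arousal) state the robot should render."""
    if goal == "mirror":
        # Companion-animal context: reflect the human's state back.
        return (human_valence, human_arousal)
    if goal == "calm":
        # Keep the valence but steer toward low arousal
        # (e.g., respond to excited with relaxed).
        return (human_valence, -0.5)
    raise ValueError(f"unknown goal: {goal!r}")

def response_delay() -> float:
    """Delay before rendering: neither instantaneous nor sluggish."""
    return random.uniform(*RESPONSE_WINDOW_S)
```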
8.3.4  Recognizing Human Affective Touch

While the focus of our thesis was not on algorithmic techniques for recognizing human affective touch, we indeed wished to guide such work. Here, we discuss considerations for the recognition of human affective touch. It is important to note, however, that the context of the affective touch interactions was between a human and a robot companion; therefore, various aspects may differ in divergent situations.

Our touch dictionary (Table 6.1) should prove useful both in the development of a touch gesture recognizer and in conducting future studies on human-initiated touch, not restricted solely to affective touch. The dictionary provides a comprehensive common ground for the research and application of touch gestures.

The overall ranking of gestures likely to convey emotions (Table 6.3) helps to guide the expectations of the robot by focusing recognition on likely gestures and, consequently, diminishing the concentration on those less likely to be used. Within our interaction context, the human tended to employ more affectionate gestures, while shying away from more aggressive ones. Furthermore, we observed that some of the lighter touches — e.g., finger idly, nuzzle, tickle — have a lower likelihood overall of communicating emotional state when compared to some of the more pronounced touches — e.g., stroke, rub. This has implications for the choice of touch sensing technologies: less sensitive sensors may prove adequate, though they may need to be more robust. Similarly, the features of human affective touch (Section 6.3.2) — points of contact between the human and robot; duration and pressure intensity of touch — not only impact the design and positioning of sensors but also guide the recognition of gestures and their emotional content. See Section 8.3.5 for further considerations on touch sensing technologies.

Finally, our categorization of the human's higher-level intents (Section 6.4.3) can guide the robot's overall understanding of the human's behavior, which is of particular use for the robot's emotion controller. The low-level knowledge of the touch gestures employed is necessary, as is the intended emotional content; however, this is linked more to the immediate interaction. The inherent overlap of touch gestures with adjacent emotions, coupled with the dynamics of the interaction changing over time, requires a higher-level model of the human's intent.

There are two approaches to the use of our higher-level intent categorization. While these approaches can be employed separately, they are not mutually exclusive and, therefore, can be complementary when used together.

The first approach concerns the general features of human-initiated affective touch, particularly the duration and intensity (Section 6.3.2). This obviates the need for direct recognition of a touch gesture; rather, it focuses on commonalities of touch gesture profiles within a particular intent. As a few examples, the touch gestures in the restful intent often have lower intensities and longer durations; the playful intent utilizes gestures that move the robot around extensively in space; and the affectionate intent is generally composed of gestures that are higher in intensity and shorter in duration.

The second approach, on the other hand, is concerned with combinations or sequences of likely affective touch gestures (Section 6.3.1); therefore, unlike the previous approach, it requires somewhat reasonable gesture recognition. This approach leverages the commonalities among touch gestures likely employed within a particular intent. To clarify by way of an example, stroke is likely to be used in miserable, depressed, and sleepy emotional states (among others). If this gesture transitions to massage, then it becomes more probable that the human is in the restful intent as opposed to comforting, which does not include massage.
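As a rough illustration of how these two approaches might be operationalized, the sketch below classifies intent first from coarse gesture features and then from a recognized gesture sequence. The feature thresholds and the per-intent gesture sets are illustrative assumptions rather than values drawn from our studies; only the restful and comforting sets echo the stroke and massage example above.

```python
# Approach 1: match coarse touch features against intent profiles,
# with no per-gesture recognition required. Inputs are assumed to be
# normalized to [0, 1]; the thresholds are placeholders.
def intent_from_features(intensity: float, duration: float,
                         displacement: float) -> str:
    if displacement > 0.6:
        return "playful"        # gestures that move the robot around in space
    if intensity < 0.3 and duration > 0.6:
        return "restful"        # lighter, sustained contact
    if intensity > 0.6 and duration < 0.4:
        return "affectionate"   # pronounced but brief contact
    return "unknown"

# Approach 2: score intents by overlap with the gestures each intent is
# likely to employ (hypothetical subsets of the touch dictionary).
INTENT_GESTURES = {
    "restful":    {"stroke", "massage", "rub"},
    "comforting": {"stroke", "pat"},       # note: no massage
}

def intent_from_sequence(gestures: list[str]) -> str:
    scores = {intent: sum(g in likely for g in gestures)
              for intent, likely in INTENT_GESTURES.items()}
    return max(scores, key=scores.get)
```

For example, intent_from_sequence(["stroke", "massage"]) favors restful over comforting, mirroring the transition described above.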
8.3.5  Touch Sensing Technologies

In Section 4.2.4, we documented the Haptic Creature's touch sensing hardware. While the focus of our thesis was not on the robot's ability to recognize touch gestures (Section 4.3.2), much of this groundwork was laid along with the thesis. Here we wish to highlight lessons learned in the development of the robot's touch sensing capabilities.

The touch sensors we used, FSRs, were advantageous in that they were inexpensive and easy to work with: the average cost per sensor was approximately CAD$5.00, and the supplemental circuitry was uncomplicated (Figure 4.5). However, these sensors have several shortcomings, which help to illuminate general considerations for affective touch interactions.

Fitting an FSR to a curved surface is problematic. Bending the sensor imposes a static load on the circuit, as if someone were constantly touching it. While the FSR may still be usable in many of these situations, the end result is that the overall range of the sensor is diminished in compensation for the constant load. Our solution was to employ multiple smaller sensors in place of one larger sensor. The severity of the curve is thereby reduced, often completely eliminating the static load.

A positive side effect of this approach was that positional resolution increased. A single FSR provides data on force but not where this force is applied on the sensor. Positional information is obtained from knowledge of the physical location of the sensor on the robot. Therefore, multiple smaller sensors replacing the coverage of one larger sensor allow for finer positional information. The downside of this approach, however, was that the number of sensors increased significantly. While one FSR was relatively inexpensive, the overall cost of the touch sensing system increased proportionally. Furthermore, the hardware setup became cumbersome as each sensor required its own wiring. The trade-off ultimately was between increased positional resolution and total coverage. In our setup, the small, round FSRs were placed in such a way that maximized spread but consequently resulted in notable gaps among sensors (Figure 4.2).

Another issue with the FSRs was their appropriateness for shearing movements. The sensors work well when the applied force is orthogonal; however, FSRs are much less sensitive to lateral, shearing force. This can be problematic for common touch gestures such as stroke or rub.

Finally, while the FSRs were fairly robust, we did have occasions where a few eventually failed. The sensors were mounted atop the Haptic Creature's fiberglass shell and beneath its fur. Certain locations on the robot's rib cage (Figure 4.2 [R]) received significant friction from both human touch and the movements of the breathing mechanism rubbing against the fur. Over time, several FSRs in this location wore out and had to be replaced; however, this was not a frequent problem.
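One practical mitigation for the curvature-induced preload, complementary to using smaller sensors, is to calibrate each channel's resting reading at start-up and rescale what remains of its range. The sketch below assumes a 10-bit analog-to-digital converter and a hypothetical read_adc() function; neither is part of the actual platform.

```python
def calibrate_baseline(read_adc, channel: int, samples: int = 100) -> float:
    """Average the untouched reading so any preload from bending is zeroed."""
    return sum(read_adc(channel) for _ in range(samples)) / samples

def normalized_pressure(raw: float, baseline: float,
                        adc_max: int = 1023) -> float:
    """Map a raw reading onto [0, 1] of the sensor's remaining usable range."""
    usable = adc_max - baseline
    if usable <= 0:
        return 0.0   # fully saturated by preload; treat the channel as unusable
    return min(1.0, max(0.0, (raw - baseline) / usable))
```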
8.4  Future Directions

While our thesis contributes to the body of work on affective touch, especially in the field of socially interactive robotics, there is still considerable room to continue forward with the exploration. Here we consider various aspects of this thesis that appear ripe for future work.

8.4.1  The Haptic Creature

Our Haptic Creature robot (Chapter 4) was developed as a robust, automated platform with which to explore affective touch in human-robot interaction. While the robot demonstrated its value in our three interaction decomposition user studies (Chapters 5–7), the platform nonetheless has room for improvements in both the hardware and software components.

Actuation Hardware

The actuation hardware used to communicate the Haptic Creature's emotional state to the human could be expanded while, at the same time, making the existing actuators more expressive. New touch actuation mechanisms could be added to the platform, while still taking into consideration our design constraint that the hardware work in concert (Section 4.1). A localized pulse (heartbeat) prototype has already been investigated, and a full-body version to simulate veins has also been considered. Preliminary development has been conducted for an articulated neck so that the Haptic Creature may, for example, actively nudge the human. Similarly, the robot could have the ability to arch its back — the breathing mechanism loosely approximates this, but in a more rhythmic manner. In the very early design of the Hapticat prototype (Section 3.1), two actuation ideas were the ability to actively adjust the fur stiffness to simulate, for example, the raising of hackles, and a means of actuating the underside, which is in frequent contact with the human's lap or chest, to simulate mannerisms such as a cat's kneading. Finally, though the Haptic Creature has a tail, its sole purpose has been aesthetic: it covers the communication and power cabling. There exists, however, the possibility of leveraging the tail as a means of touch expression.

While adding new actuation mechanisms has the potential to expand the expressive capabilities of the Haptic Creature, there is still room for improvement in the present mechanisms. The ears would benefit from a much broader range of stiffness levels in order to more clearly communicate their intent. This is of particular note for the less stiff states, as the ears still provide more resistance than desirable. The lungs could be improved by smoothing the mechanism's movement while also dampening the vibrations and sound generated by its actuation.

Sensing Hardware

The Haptic Creature need not only express itself through touch; it also must sense touch from the human. Our pressure sensor array was usable for our initial investigations but will become a limitation in future research on touch sensing. Any advancement of this hardware would need to have ample coverage across the extent of the robot's body; have good resolution in order to distinguish across a continuum of light to strong pressure intensities; be able to handle a variety of touch directions, such as shear and orthogonal; and be flexible enough to manage the robot's curved surfaces yet be serviceable in the event any component fails. As currently no single solution exists, a variety of sensor technologies should be investigated — e.g., capacitive sensors, quantum tunneling composites (QTC). As a spin-off of this thesis, Flagg et al. (2012) [47] have prototyped a novel conductive fur sensor.

Another area of future investigation for sensing could be additional touch interaction points. The tail could have touch sensors added. Also, the addition of whiskers as an affordance has the potential not only to foster more face-directed touches on the Haptic Creature but also to provide a separate means of touch sensing.

Gesture Recognizer

As the Haptic Creature's touch sensing hardware advances, so can the software system for gesture recognition. While a preliminary investigation was undertaken [27], the need still exists for a Gesture Recognizer that is fully integrated into the broader architecture (Figure 4.6).
Furthermore, any advances should work towards providing timely, efficient, robust, and accurate recognition of a large set of gestures. The results of our human-initiated affective touch study (Chapter 6) provide useful guidance by, for example, narrowing the set of expected gestures as well as partitioning their emotional content and broader intent. However, assuming usable sensor data, the underlying machine learning necessary for touch gesture recognition remains an interesting thread of research.

Emoter

Currently, the Emoter is capable of encapsulating and communicating emotional state; however, software features that influence the Haptic Creature's emotional state have yet to be developed. As mentioned in the previous section, the Gesture Recognizer is in a nascent state. As this component grows, the gestures that it outputs can be used, as intended, by the Emoter to update the robot's emotional state. More importantly, the Emoter was dependent upon results from the human affective touch study (Chapter 6): the gestures used for particular emotions; the higher-level intent of the human; the expectations of the Haptic Creature's response. Now that this information exists, the behavior of the Emoter can be advanced. Finally, internal mechanisms that influence the state of the Emoter may also be explored. For example, temporal considerations can be implemented such that the Haptic Creature's emotional state changes as a function of time. Any software additions need not be extensive. However, the configuration of appropriate behaviors likely would require considerable user testing, similar to our approach to the design of the robot's affect display as presented in this thesis.

8.4.2  Haptic Creature Affect Display

A considerable portion of the work presented in this thesis was focused on affective touch originating from the Haptic Creature. This was a major aspect of our first user study (Chapter 5). It was relevant with respect to expectations of the robot's response as explored in our study of human affective touch (Chapter 6). Also, affective touch from the Haptic Creature played a significant role in the final study on the emotional influence of this form of touch (Chapter 7). Nonetheless, there exists room for additional research on the robot's affect display. As was seen in the first study, the Haptic Creature was more successful at communicating arousal as opposed to valence, particularly negative valence. Leveraging those results, this was improved somewhat in the final study, but additional work remains to advance the overall design of the robot's affect display. Furthermore, whenever existing hardware is improved or new actuation hardware is added, user testing should validate any changes.

8.4.3  Human Intent through Affective Touch

One outcome from our study of human-initiated affective touch (Chapter 6) was the categorization of human intent (Section 6.4.3). As we discussed, this result affects both lower-level touch sensing, which has implications for the Gesture Recognizer, and higher-level interpretations of behavior, which has implications for the Emoter. Given the relevance of this result and its potential impact on the robot's behavior, further investigation may be warranted. This could take the form of integrating knowledge of the intents into the respective software components. Also, additional user studies could be conducted that seek to validate these categorizations.
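Sections 8.4.1 and 8.4.3 both point toward an Emoter that folds recognized gestures, the human's intent, and the passage of time into the robot's emotional state. A loose sketch of that idea follows; the gains, the half-life, and the overall structure are hypothetical design choices rather than components of the existing Emoter.

```python
import math

class EmoterSketch:
    """Toy emotional-state holder in the valence/arousal space."""

    def __init__(self) -> None:
        self.valence = 0.0   # [-1, 1]
        self.arousal = 0.0   # [-1, 1]

    def on_gesture(self, target_valence: float, target_arousal: float,
                   gain: float = 0.3) -> None:
        """Move partway toward the emotion implied by a recognized gesture
        or by the human's inferred intent."""
        self.valence += gain * (target_valence - self.valence)
        self.arousal += gain * (target_arousal - self.arousal)

    def on_tick(self, dt_s: float, half_life_s: float = 30.0) -> None:
        """Decay toward a neutral resting state as a function of time."""
        k = math.exp(-math.log(2.0) * dt_s / half_life_s)
        self.valence *= k
        self.arousal *= k
```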
8.4.4  Emotion Elicitation through Touch

As we proceeded with the work presented in this thesis, a great many individuals (including ourselves) performed countless touch gestures for the Haptic Creature. Sometimes these were simply random touch interactions, while on other occasions the touch was more specific and directed, as in our user studies. From our own personal experiences as well as open discussions with others who interacted with the robot, it became clear that a simple act of non-reciprocal touch had the potential for eliciting emotion in the initiator. This serendipitous discovery was reminiscent of Ekman's (2007) [37] "directed facial action task" for emotional responses through facial expression. While it was outside the scope of our present research, touch enactment as a means of emotion elicitation could be a fruitful avenue of further exploration.

8.4.5  Ethnographic and Longitudinal Studies

The Haptic Creature was actively developed in concert with our user studies. That is to say, our robot was being designed to study affective touch, but it also was influenced by the results of the studies in which it was employed. For the research presented in this dissertation, it was important to conduct tests in more controlled environments. This afforded greater management of humans interacting with a new robot and, more importantly, helped guard against a variety of confounding factors inherent in the domain under investigation. As the Haptic Creature has stabilized and our research has illuminated details of affective touch, it is now possible to move towards more realistic scenarios through ethnographic and longitudinal studies — e.g., [109, 179]. These additional methodologies would not only allow us to further validate our current findings but also expose new areas of investigation.

8.4.6  Robot-Assisted Therapy

While this thesis explored underlying features of affective touch in social human-robot interaction, one extension of our research would be in its broader application. A potential area of focus is that of touch in robot-assisted therapy (RAT). In our related work discussion on the influences of human-animal interaction (Section 2.3.2), we saw how companion animals provide many positive health and social benefits in the lives of humans. We also saw, though, that pets can be problematic for reasons such as the stress of caretaking as well as allergies and diseases. Therefore, the possibility exists for the Haptic Creature to approximate the benefits of companion animals in cases where these inherent problems preclude ownership or interaction. Some example domains could be with the very young, the elderly, the disabled, the hospitalized, or individuals with psychological disorders. However, given the pervasiveness of companion animals in general society, benefits could also be derived by non-at-risk groups such as tenants prohibited from having pets or individuals whose lifestyles do not allow for daily pet care. Paro, the baby harp seal robot introduced in Section 2.4, is one such robot already exploring these domains [152, 179]. Much of the future research on touch in robot-assisted therapy likely would take the form of longitudinal studies as discussed in the preceding section.

8.5  Closing Thoughts

The wedding of technology and touch has been undertaken, more often than not, for utilitarian purposes. An orchestrated sequence of finger flicks on a touch screen specifies discrete commands to an application.
A vibration emanating from a mobile phone silently signals an incoming call. There exist, however, domains that endeavor beyond practical uses. In Section 2.2.3, we introduced mediated social touch. This field leverages the inherently personal nature of touch as a means of connecting humans through technology — humans who, ironically, are often disconnected as a result of utility-focused technology. Our thesis is part of another such domain. What we have presented in this dissertation lays the groundwork for connecting humans and robots. This connection is first social, then emotional, and made even more personal through the use of touch.

Our Haptic Creature proved a useful testbed for our research. The robot allowed for controlled and systematic investigations into how a robot might express emotional state through touch; how humans use touch to express their emotions to a robot; and the influence of human-robot affective touch interactions on human emotions. This research has direct significance in the field of socially interactive robotics. Our hope is that those wishing to incorporate touch into the interaction will be able to borrow from this work, which is particularly relevant when the interaction concerns emotion communication. Furthermore, our research has implications beyond human-robot interaction. Any domain interested in human use of affective touch — e.g., psychology, mediated social touch, human-animal interaction — may find relevance.

Though the interactions in our research are with a robot, what we ultimately are able to see reflected back is what makes us human.

Bibliography

[1] A. Ahlbom, A. Backman, J. Bakke, T. Foucard, S. Halken, N.-I. M. Kjellman, L. Malm, S. Skerfving, J. Sundell, and O. Zetterström. Pets indoors — A risk factor for or protection against sensitisation/allergy. Indoor Air, 8(4):219–235, December 1998.

[2] Alexa Albert and Kris Bulcroft. Pets and urban life. Anthrozoös, 1(1):9–25, 1987.

[3] Warwick P. Anderson, Christopher M. Reid, and Garry L. Jennings. Pet ownership and risk factors for cardiovascular disease. Medical Journal of Australia, 157(5):298–301, September 1992.

[4] Lewis R. Arrington and Kathleen C. Kelley. Domestic Rabbit Biology and Production. The University Press of Florida, Gainesville, Florida, USA, 1976.

[5] Jeremy N. Bailenson, Nick Yee, Scott Brave, Dan Merget, and David Koslow. Virtual interpersonal touch: Expressing and recognizing emotions through haptic devices. Human-Computer Interaction, 22(3):325–353, 2007.

[6] Kathryn Barnett. A theoretical construct of the concepts of touch as they relate to nursing. Nursing Research, 21(2):102–110, March–April 1972.

[7] Lisa Feldman Barrett. Are emotions natural kinds? Perspectives on Psychological Science, 1(1):28–58, March 2006.

[8] Lisa Feldman Barrett and James A. Russell. Independence and bipolarity in the structure of current affect. Journal of Personality and Social Psychology, 74(4):967–984, April 1998.

[9] Alan Beck and Aaron Honori Katcher. Between Pets and People: The Importance of Animal Companionship. Purdue University Press, West Lafayette, Indiana, USA, 1996.

[10] Alan M. Beck and Aaron H. Katcher. Future directions in human-animal bond research. American Behavioral Scientist, 47(1):79–93, September 2003.

[11] Marc Bekoff. Animal emotions: Exploring passionate natures. BioScience, 50(10):861–870, October 2000.

[12] Pauleen Charmayne Bennett and Vanessa Ilse Rohlf.
Owner-companion dog interactions: Relationships between demographic variables, potentially problematic behaviours, training engagement and shared activities. Applied Animal Behaviour Science, 102(1–2):65–84, January 2007. [13] John Bowlby. Attachment, volume 1 of Attachment and Loss. Basic Books, New York, New York, USA, 1st edition, 1969. [14] Margaret M. Bradley and Peter J. Lang. Measuring emotion: The selfassessment manikin and the semantic differential. Journal of Behavior Therapy and Experimental Psychiatry, 25(1):49–59, March 1994. [15] Margaret M. Bradley and Peter J. Lang. The International Affective Picture System (IAPS) in the study of emotion and attention. In James A. Coan and John J.B. Allen, editors, Handbook of Emotion Elicitation and Assessment, Series in Affective Science, chapter 2, pages 29–46. Oxford University Press, 2007. [16] Scott Brave and Andrew Dahley. inTouch: A medium for haptic interpersonal communication (short paper). In Extended Abstracts on Human Factors in Computing Systems, CHI EA ’97, pages 363–364, New York, New York, USA, March 1997. ACM Press. [17] Cynthia Breazeal. Emotive qualities in lip-synchronized robot speech. Advanced Robotics, 17(2):97–113, May 2003. 194  Bibliography [18] Cynthia L. Breazeal. Designing Sociable Robots. MIT Press, Cambridge, Massachusetts, USA, 2002. [19] Gordon M. Burghardt. Animal awareness: Current perceptions and historical perspective. American Psychologist, 40(8):905–919, August 1985. [20] Gordon M. Burghardt. Cognitive ethology and critical anthropomorphism: A snake with two heads and hog-nose snakes that play dead. In Carolyn A. Ristau, editor, Cognitive Ethology: The Minds of Other Animals: Essays in Honor of Donald R. Griffin, Comparative Cognition and Neuroscience, chapter 4, pages 53–90. Lawrence Erlbaum Associates, Inc., Hillsdale, New Jersey, USA, 1991. [21] Judee K. Burgoon. Relational message interpretations of touch, conversational distance, and posture. Journal of Nonverbal Behavior, 15(4):233–259, December 1991. [22] Joseph J. Campos, Donna L. Mumme, Rosanne Kermoian, and Rosemary G. Campos. A functionalist perspective on the nature of emotion. Monographs of the Society for Research in Child Development, 59(2/3):284–303, 1994. [23] Lola D. Canamero and Jakob Fredslund. How does it feel? Emotional interaction with a humanoid LEGO robot. Socially Intelligent Agents: The Human in the Loop. Papers from the AAAI Fall Symposium, pages 23–28, 2000. [24] Deena B. Case. Dog ownership: A complex web? Psychological Reports, 60(1):247–257, February 1987. [25] Angela Chang, Sile O’Modhrain, Rob Jacob, Eric Gunther, and Hiroshi Ishii. ComTouch: Design of a vibrotactile communication device. In Proceedings of the Conference on Designing Interactive Systems, DIS ’02, pages 312–320, New York, New York, USA, 2002. ACM Press. [26] Angela Chang, Ben Resner, Brad Koerner, XingChen Wang, and Hiroshi Ishii. LumiTouch: An emotional communication device (short paper). In Extended Abstracts on Human Factors in Computing Systems, CHI EA ’01, 195  Bibliography pages 313–314, New York, New York, USA, March–April 2001. ACM Press. [27] Jonathan Chang, Karon MacLean, and Steve Yohanan. Gesture recognition in the Haptic Creature. In Astrid Kappers, Jan van Erp, Wouter Bergmann Tiest, and Frans van der Helm, editors, Haptics: Generating and Perceiving Tangible Sensations - EuroHaptics 2010, volume 6191 of Lecture Notes in Computer Science, pages 385–391. Springer Berlin / Heidelberg, 2010. [28] Alan Costall. 
How Lloyd Morgan’s Canon backfired. Journal of the History of the Behavioral Sciences, 29(2):113–122, April 1993. [29] Floyd M. Crandall. Hospitalism. Archives of Pediatrics, 14(6):448–454, June 1897. [30] April H. Crusco and Christopher G. Wetzel. The midas touch: The effects of interpersonal touch on restaurant tipping. Personality & Social Psychology Bulletin, 10(4):512–517, December 1984. [31] W. Bruce Currie. Structure and Function of Domestic Animals. Butterworth, Stoneham, Massachusettes, USA, 1988. [32] Charles Darwin. The Expression of the Emotions in Man and Animals. John Murray, London, England, 1872. [33] Ed Diener and Ashgar Iran-Nejad. The relationship in experience between various types of affect.  Journal of Personality and Social Psychology,  50(5):1031–1038, May 1986. [34] Guillaume-Benjamin-Amand Duchenne (de Boulogne). Mécanisme de la physionomie humaine. ou, Analyse électro-physiologique de l’expression des passions des arts plastiques. Librairie J.-B. Baillière et Fils, Paris, France, 1862. [35] Paul Ekman. Are there basic emotions? Psychological Review, 99(3):550– 553, July 1992.  196  Bibliography [36] Paul Ekman. An argument for basic emotions. Cognition & Emotion, 6(3– 4):169–200, 1992. [37] Paul Ekman. The directed facial action task: Emotional responses without appraisal. In James A. Coan and John J.B. Allen, editors, Handbook of Emotion Elicitation and Assessment, Series in Affective Science, chapter 3, pages 47–53. Oxford University Press, 2007. [38] Paul Ekman and Wallace V. Friesen. Constants across cultures in the face and emotion. Journal of Personality and Social Psychology, 17(2):124–129, February 1971. [39] Paul Ekman and Wallace V. Friesen. Measuring facial movement. Environmental Psychology and Nonverbal Behavior, 1(1):56–75, Fall 1976. [40] Paul Ekman and Wallace V. Friesen. Facial Action Coding System: A Technique for the Measurement of Facial Movement. Consulting Psychologists Press, Palo Alto, California, USA, 1978. [41] Paul Ekman and Wallace V. Friesen. A new pan-cultural facial expression of emotion. Motivation and Emotion, 10(2):159–168, June 1986. [42] Paul Ekman, E. Richard Sorenson, and Wallace V. Friesen. Pan-cultural elements in facial displays of emotion. Science, 164(3875):86–88, April 1969. [43] Hillary Anger Elfenbein and Nalini Ambady. On the universality and cultural specificity of emotion recognition: A meta-analysis. Psychological Bulletin, 128(2):203–235, March 2002. [44] Ylva Fernaeus, Maria Hakansson, Mattias Jacobsson, and Sara Ljungblad. How do you play with a robotic toy animal? A long-term study of Pleo. In Proceedings of the 9th International Conference on Interaction Design and Children, IDC ’10, pages 39–48, New York, New York, USA, June 2010. ACM. [45] Tiffany Field. Touch. MIT Press, Cambridge, Massachusetts, USA, 2001. 197  Bibliography [46] Jeffrey D. Fisher, Marvin Rytting, and Richard Heslin. Hands touching hands: Affective and evaluative effects of an interpersonal touch. Sociometry, 39(4):416–421, December 1976. [47] Anna Flagg, Diane Tam, Karon MacLean, and Robert Flagg. Conductive fur sensing for a gesture-aware furry robot. In Proceedings of the 2012 IEEE Haptics Symposium, HAPTICS ’12, pages 99–104, March 2012. [48] Charles A. Florez and Morton Goldman. Evaluation of interpersonal touch by the sighted and the blind. The Journal of Social Psychology, 116(2):229– 234, April 1982. [49] BJ Fogg, Lawrence D. Cutler, Perry Arnold, and Chris Eisbach. HandJive: A device for interpersonal haptic entertainment. 
In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’98, pages 57–64, New York, New York, USA, 1998. ACM Press/AddisonWesley Publishing Co. [50] Terrence Fong, Illah Nourbakhsh, and Kerstin Dautenhahn. A survey of socially interactive robots. Robotics and Autonomous Systems, 42(3–4):143– 166, March 2003. [51] Lawrence K. Frank. Tactile communication. Genetic Psychology Monographs, 56(2):209–255, November 1957. [52] Mark G. Frank and Janine Stennett. The forced-choice paradigm and the perception of facial expressions of emotion. Journal of Personality and Social Psychology, 80(1):75–85, January 2001. [53] Erika Friedmann, Aaron Honori Katcher, James J. Lynch, and Sue Ann Thomas. Animal companions and one-year survival of patients after discharge from a coronary care unit. Public Health Records, 95(4):307–312, July–August 1980. [54] Erika Friedmann and Sue A. Thomas. Pet ownership, social support, and one-year survival after acute myocardial infarction in the Cardiac Arrhyth198  Bibliography mia Suppression Trial (CAST).  The American Journal of Cardiology,  76(17):1213–1217, December 1995. [55] Nico H. Frijda. The Emotions. Studies in Emotion and Social Interaction. Cambridge University Press, Cambridge, England, UK, 1986. [56] Masahiro Fujita and Hiroaki Kitano.  Development of an autonomous  quadruped robot for robot entertainment. Autonomous Robots, 5(1):7–18, March 1998. [57] Frank A. Geldard. Some neglected possibilities of communication. Science, 131(3413):1583–1588, May 1960. [58] Gentoo. Gentoo Linux [online]. http://www.gentoo.org/, August 2009. [59] Marilyn K. Gerwolls and Susan M. Labott. Adjustment to the death of a companion animal. Anthrozoös, 7(3):172–187, 1994. [60] Elizabeth Goodman and Marion Misilim. The sensing beds (workshop position paper). In Workshop on Intimate Ubiquitous Computing of the 5th International Conference on Ubiquitous Computing, UbiComp 2003, October 2003. [61] Michael A. Goodrich and Alan C. Schultz. Human-robot interaction: A survey. Foundations and Trends in Human-Computer Interaction, 1(3):203– 275, 2007. [62] James J. Gross and Robert W. Levenson. Emotion elicitation using films. Cognition & Emotion, 9(1):87–108, January 1995. [63] David R. P. Guay. Pet-assisted therapy in the nursing home setting: Potential for zoonosis. American Journal of Infection Control, 29(3):178–186, June 2001. [64] Nicolas Guéguen. Nonverbal encouragement of participation in a course: The effect of touching. Social Psychology of Education, 7(1):89–98, March 2004. 199  Bibliography [65] Antal Haans and Wijnand IJsselsteijn. Mediated social touch: A review of current research and future directions. Virtual Reality, 9(2–3):149–159, March 2006. [66] Rebecca Hansson and Tobias Skog. The LoveBomb: Encouraging the communication of emotions in public spaces. In Extended Abstracts on Human Factors in Computing Systems, CHI EA ’01, pages 433–434, New York, New York, USA, March–April 2001. ACM Press. [67] Harry F. Harlow. The nature of love. American Psychologist, 13(12):673– 685, December 1958. [68] Harry F. Harlow and Robert R. Zimmermann. The development of affectional responses in infant monkeys. Proceedings of the American Philosophical Society, 102(5):501–509, October 1958. [69] Elaine Hatfield, John T. Cacioppo, and Richard L. Rapson. Emotional Contagion. Cambridge University Press, Cambridge, England, United Kingdom, 1994. [70] Donald O. Hebb. Emotion in man and animal: An analysis of the intuitive processes of recognition. 
Psychological Review, 53(2):88–106, March 1946. [71] Morton A. Heller and William Schiff, editors. The Psychology of Touch. L. Erlbaum, Hillsdale, New Jersey, USA, 1991. [72] Nancy M. Henley. Status and sex: Some touching observations. Bulletin of the Psychonomic Society, 2(2):91–93, August 1973. [73] Nancy M. Henley. Body Politics: Power, Sex, and Nonverbal Communication. Prentice-Hall, Englewood Cliffs, New Jersey, USA, 1977. [74] Matthew J. Hertenstein, Rachel Holmes, Margaret McCullough, and Dacher Keltner. The communication of emotion via touch. Emotion, 9(4):566–573, August 2009.  200  Bibliography [75] Matthew J. Hertenstein, Dacher Keltner, Betsy App, Brittany A. Bulleit, and Ariane R. Jaskolka. Touch communicates distinct emotions. Emotion, 6(3):528–533, August 2006. [76] Matthew J. Hertenstein, Julie M. Verkamp, Alyssa M. Kerestes, and Rachel M. Holmes. The communicative functions of touch in humans, nonhuman primates, and rats: A review and synthesis of the empirical research. Genetic, Social & General Psychology Monographs, 132(1):5–94, February 2006. [77] Richard Heslin and Tari Alper. Touch: A bonding gesture. In John M. Wiemann and Randall P. Harrison, editors, Nonverbal Interaction, volume 11 of Sage Annual Reviews of Communication Research, chapter 2, pages 47–75. Sage, Beverly Hills, California, USA, 1983. [78] Hans Irtel. PXLab: The psychological experiments laboratory [online]. http://www.pxlab.de/, 2007.  Mannheim (Germany): University of  Mannheim. [79] Carroll E. Izard. The Face of Emotion. Appleton-Century-Crofts, New York, New York, USA, 1971. [80] Carroll E. Izard. Human Emotions. Plenum Press, New York, New York, USA, 1977. [81] Carroll E. Izard.  Innate and universal facial expressions: Evidence  from developmental and cross-cultural research. Psychological Bulletin, 115(2):288–299, March 1994. [82] Stanley E. Jones and A. Elaine Yarbrough. A naturalistic study of the meanings of touch. Communication Monographs, 52(1):19–56, 1985. [83] Sidney M. Jourard. An exploratory study of body-accessibility. British Journal of Social and Clinical Psychology, 5:221–231, September 1966. [84] Sidney M. Jourard and Jane E. Rubin. Self-disclosure and touching: A study of two modes of interpersonal encounter and their inter-relation. Journal of Humanistic Psychology, 8(1):39–48, Spring 1968. 201  Bibliography [85] Peter H. Kahn Jr., Batya Friedman, Deanne R. Pérez-Granados, and Nathan G. Freier. Robotic pets in the lives of preschool children. Interaction Studies, 7(3):405–436, 2006. [86] Aaron H. Katcher, Erika Friedmann, Melissa Goodman, and Laura Goodman. Men, women, and dogs. California Veterinarian, 37(2):14–17, February 1983. [87] Aaron Honori Katcher. Interrelations Between People and Pets, chapter Interactions Between People and Their Pets: Form and Function, pages 41– 67. Charles C. Thomas, Springfield, Illinois, USA, June 1981. [88] Yukitaka Kawaguchi, Kazuyoshi Wada, Masako Okamoto, Takeo Tsujii, Takanori Shibata, and Kaoru Sakatani. Investigation of brain activity during interaction with seal robot by fNIRS. In Proceedings of the 20th IEEE International Symposium on Robot and Human Interactive Communication, RO-MAN ’11, pages 308–313, July–August 2011. [89] Stephen R. Kellert and Edward O. Wilson, editors. The Biophilia Hypothesis. Island Press, Washington DC, USA, 1995. [90] John S. Kennedy. The New Anthropomorphism. Cambridge University Press, Cambridge, England, UK, 1992. [91] A. Kerepesi, E. Kubinyi, G. K. Jonsson, M. S. Magnusson, and Á. Miklósi. 
Behavioural comparison of human-animal (dog) and human-robot (AIBO) interactions. Behavioural Processes, 7(1):92–99, July 2006. [92] Elizabeth S. Kim, Dan Leyzberg, Katherine M. Tsui, and Brian Scassellati. How people talk when teaching a robot. In Proceedings of the 4th ACM/IEEE International Conference on Human-Robot Interaction, HRI ’09, pages 23–30, New York, New York, USA, March 2009. ACM. [93] Chris L. Kleinke. Compliance to requests made by gazing and touching experimenters in field settings. Journal of Experimental Social Psychology, 13(3):218–223, May 1977. 202  Bibliography [94] Heather Knight, Robert Toscano, Walter D. Stiehl, Angela Chang, Yi Wang, and Cynthia Breazeal. Real-time social touch gesture recognition for sensate robots. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS ’09, pages 3715–3720, October 2009. [95] Hiroshi Kobayashi and Fumio Hara. A basic study on dynamic control of facial expressions for face robot. In Proceedings of the 4th IEEE International Workshop on Robot and Human Communication, RO-MAN ’95, pages 275–280, July 1995. [96] Peter J. Lang. Behavioral treatment and bio-behavioral assessment: Computer applications.  In Joseph B. Sidowski, James H. Johnson, and  Thomas W. Williams, editors, Technology in Mental Health Care Delivery Systems, pages 129–139. Ablex, Norwood, New Jersey, USA, 1980. [97] Josephine W. Lee and Laura K. Guerrero. Types of touch in cross-sex relationships between coworkers: Perceptions of relational and emotional messages, inappropriateness, and sexual harassment. Journal of Applied Communication Research, 29(3):197–220, January 2001. [98] Jun Ki Lee, Walter Dan Stiehl, Robert Lopez Toscano, and Cynthia Breazeal. Semi-autonomous robot avatar as a medium for family communication and education. Advanced Robotics, 23(14):1925–1949, 2009. [99] Alexander Libin and Jiska Cohen-Mansfield. Therapeutic robocat for nursing home residents with dementia: Preliminary inquiry. American Journal of Alzheimer’s Disease and Other Dementias, 19(2):111–116, March–April 2004. [100] Alexander V. Libin and Elena V. Libin. Person-robot interactions from the robopsychologists’ point of view: The robotic psychology and robotherapy approach. In Proceedings of the IEEE, volume 92, pages 1789–1803. IEEE, November 2004. [101] Paul D. MacLean. The triune brain, emotion, and scientific bias. In Francis Otto Schmitt, editor, The Neurosciences: Second Study Program, pages 336–349. Rockefeller University Press, New York, New York, USA, 1970. 203  Bibliography [102] Paul D. MacLean. The Triune Brain in Evolution: Role in Paleocerebral Functions. Plenum Press, New York, New York, USA, 1990. [103] Brenda Major and Richard Heslin. Perceptions of cross-sex and same-sex nonreciprocal touch: It is better to give than to receive. Journal of Nonverbal Behavior, 6(3):148–162, March 1982. [104] Peter Marler. On animal aggression: The roles of strangeness and familiarity. American Psychologist, 31(3):239–246, March 1976. [105] David Maulsby, Saul Greenberg, and Richard Mander. Prototyping an intelligent agent through Wizard of Oz. In Proceedings of the INTERACT ’93 and CHI ’93 Conference on Human Factors in Computing Systems, CHI ’93, pages 277–284, New York, New York, USA, 1993. ACM. [106] Erin McKean, editor. The New Oxford American Dictionary. Oxford University Press, 2nd edition, May 2005. [107] Gail F. Melson, Peter H. Kahn Jr., Alan Beck, and Batya Friedman. 
Robotic pets in human lives: Implications for the human-animal bond and for human relationships with personified technologies. Journal of Social Issues, 65(3):545–567, September 2009. [108] Gail F. Melson, Peter H. Kahn Jr., Alan Beck, Batya Friedman, Trace Roberts, Erik Garrett, and Brian T. Gill. Children’s behavior toward and understanding of robotic and living dogs. Journal of Applied Developmental Psychology, 30(2):92–102, March–April 2009. [109] Gail F. Melson, Peter H. Kahn Jr., Alan M. Beck, Batya Friedman, Trace Roberts, and Erik Garrett. Robots as dogs?: Children’s interactions with the robotic dog AIBO and a live Australian Shepherd. In Extended Abstracts on Human Factors in Computing Systems, CHI EA ’05, pages 1649–1652, New York, New York, USA, 2005. ACM. [110] Teruaki Mitsui, Takanori Shibata, Kazuyoshi Wada, Akihiro Touda, and Kazuo Tanie. Psychophysiological effects by interaction with mental commit robot. In Proceedings of the IEEE/RSJ International Conference on 204  Bibliography Intelligent Robots and Systems, volume 2 of IROS ’01, pages 1189–1194, October–November 2001. [111] Ashley Montagu. Touching: The Human Significance of the Skin. Columbia University Press, New York, New York, USA, 1st edition, 1971. [112] Ashley Montagu. Touching: The Human Significance of the Skin. Perennial Library, New York, New York, USA, 1986. [113] Conwy Lloyd Morgan. An Introduction to Comparative Psychology, volume XXVII of The Contemporary Science Series. Walter Scott, Limited, London, England, UK, 1894. [114] Masahiro Mori. The uncanny valley. Energy, 7(4):33–35, 1970. [115] Florian ‘Floyd’ Mueller, Frank Vetere, Martin R. Gibbs, Jesper Kjeldskov, Sonja Pedell, and Steve Howard. Hug over a distance. In Extended Abstracts on Human Factors in Computing Systems, CHI EA ’05, pages 1673–1676, New York, New York, USA, April 2005. ACM Press. [116] Kathleen L. Munsell, Merle Canfield, Donald I. Templer, Kimberly Tangan, and Hiroko Arikawa. Modification of the Pet Attitude Scale. Society and Animals, 12(2):137–142, 2004. [117] NeCoRo. “Is this a real cat?” A robot cat you can bond with like a real pet — NeCoRo is born [online]. http://www.necoro.com/, August 2004. [118] Tuan Nguyen, Richard Heslin, and Michele L. Nguyen. The meanings of touch: Sex differences. Journal of Communication, 25(3):92–103, September 1975. [119] Paula M. Niedenthal. Embodying emotion. Science, 316(5827):1002–1005, May 2007. [120] Andrew Ortony and Terence J. Turner. What’s basic about basic emotions? Psychological Review, 97(3):315–331, July 1990.  205  Bibliography [121] Steffi Paepcke and Leila Takayama. Judging a bot by its cover: An experiment on expectation setting for personal robots. In Proceedings of the 5th ACM/IEEE International Conference on Human-Robot Interaction, HRI ’10, pages 45–52, Piscataway, New Jersey, USA, March 2010. IEEE Press. [122] Jaak Panksepp. Affective Neuroscience: The Foundations of Human and Animal Emotions. Series in Affective Science. Oxford University Press, New York, New York, USA, 1998. [123] Jaak Panksepp. At the interface of the affective, behavioral, and cognitive neurosciences: Decoding the emotional feelings of the brain. Brain and Cognition, 52(1):4–14, June 2003. [124] Gary J. Patronek and Larry T. Glickman. Pet ownership protects against the risks and consequences of coronary heart disease. Medical Hypotheses, 40(4):245–249, April 1993. [125] Robert Plutchik. 
The nature of emotions: Human emotions have deep evolutionary roots, a fact that may explain their complexity and provide tools for clinical practice. American Scientist, 89(4):344–350, July–August 2001. [126] Robert H. Poresky, Charles Hendrix, Jacob E. Hosier, and Marvin L. Samuelson. The Companion Animal Bonding Scale: Internal reliability and construct validity. Psychological Reports, 60:743–746, June 1987. [127] Willam O. Reece. Functional Anatomy and Physiology of Domestic Animals. Wiley-Blackwell, Ames, Iowa, USA, 4th edition, 2009. [128] George J. Romanes. Animal Intelligence, volume XLI of The International Scientific Series. Kegan Paul, Trench, and Company, London, England, UK, 1882. [129] A. F. Rovers and H. A. van Essen. HIM: A framework for Haptic Instant Messaging. In Extended Abstracts on Human Factors in Computing Systems, CHI EA ’04, pages 1313–1316, New York, New York, USA, April 2004. ACM Press. 206  Bibliography [130] James A. Russell. A circumplex model of affect. Journal of Personality and Social Psychology, 39(6):1161–1178, December 1980. [131] James A. Russell. Is there universal recognition of emotion from facial expression? A review of the cross-cultural studies. Psychological Bulletin, 115(1):102–141, January 1994. [132] James A. Russell. Facial expressions of emotion: What lies beyond minimal universality? Psychological Bulletin, 118(3):379–391, November 1995. [133] James A. Russell. Core affect and the psychological construction of emotion. Psychological Review, 110(1):145–172, January 2003. [134] James A. Russell and Lisa Feldman Barrett. Core affect, prototypical emotional episodes, and other things called emotion: Dissecting the elephant. Journal of Personality and Social Psychology, 76(5):805–819, May 1999. [135] James A. Russell and Albert Mehrabian. Evidence for a three-factor theory of emotions. Journal of Research in Personality, 11(3):273–294, September 1977. [136] James A. Russell, Anna Weiss, and Gerald A. Mendelsohn. Affect Grid: A single-item scale of pleasure and arousal. Journal of Personality and Social Psychology, 57(3):493–502, September 1989. [137] Tomoko Saito, Takanori Shibata, Kazuyoshi Wada, and Kazuo Tanie. Relationship between interaction with the mental commit robot and change of stress reaction of the elderly. In Proceedings of the IEEE International Symposium on Computational Intelligence in Robotics and Automation, volume 1 of CIRA ’03, pages 119–124, July 2003. [138] Jelle Saldien, Kristof Goris, Bram Vanderborght, and Dirk Lefeber. On the design of an emotional interface for the huggable robot Probo. In Proceedings of the AISB 2008 Symposium on the Reign of Catz & Dogz: The Second AISB Symposium on the Role of Virtual Creatures in a Computerised Society, volume 1 of AISB 2008, pages 1–6, April 2008. 207  Bibliography [139] Jelle Saldien, Kristof Goris, Bram Vanderborght, Johan Vanderfaeillie, and Dirk Lefeber. Expressing emotions with the social robot Probo. International Journal of Social Robotics (SORO); Special Issue on Robots for Furure Societies, 2(4):377–389, December 2010. [140] Jelle Saldien, Kristof Goris, Selma Yilmazyildiz, Werner Verhelst, and Dirk Lefeber. On the design of the huggable robot Probo. Journal of Physical Agents, 2(2):3–11, June 2008. [141] Mark Scheeff, John Pinto, Kris Rahardja, Scott Snibbe, and Robert Tow. Experiences with Sparky, a social robot.  In Workshop on Interactive  Robotics and Entertainment, Pittsburgh, Pennsylvania, USA, April 2000. AAAI Press. [142] Klaus R. Scherer and Harald G. 
Appendix A

Haptic Creature Materials

This appendix contains supplemental information regarding the Haptic Creature robot from Chapter 4:

• the hardware schematics (Section A.1);
• the graphical user interface (Section A.2); and
• the microcontroller communications protocol (Section A.3).

A.1 Hardware Schematics

We include here the following Haptic Creature hardware schematics:

• the FSR PCB schematic (Figure A.1);
• the FSR PCB layout (Figure A.2);
• the motor control board schematic (Figure A.3); and
• the motor control board layout (Figure A.4).

Figure A.1: FSR PCB schematic.

Figure A.2: FSR PCB layout.

Figure A.3: Motor control board schematic.

Figure A.4: Motor control board layout.
A.2 Graphical User Interface

We include here the following Haptic Creature graphical user interface (GUI) components:

• the Master panel (Figure A.5);
• the Master panel with state (Figure A.6);
• the Creature editor (Figure A.7);
• the Scheduler editor (Figure A.8);
• the Recognizer editor (Figure A.9);
• the Emoter editor (Figure A.10);
• the Renderer editor (Figure A.11);
• the Sensors editor (Figure A.12);
• the Ear actuator editor (Figure A.13);
• the Lung actuator editor (Figure A.14); and
• the PurrBox actuator editor (Figure A.15).

Figure A.5: Master panel.

Figure A.6: Master panel with state.

Figure A.7: Creature editor.

Figure A.8: Scheduler editor.

Figure A.9: Recognizer editor.

Figure A.10: Emoter editor.

Figure A.11: Renderer editor.

Figure A.12: Sensors editor.

Figure A.13: Ear actuator editor.

Figure A.14: Lung actuator editor.

Figure A.15: PurrBox actuator editor.

A.3 Microcontroller Communications Protocol

Following is the byte structure for the complete command set recognized by the microcontroller firmware and, where appropriate, the corresponding response returned. Each command must be exactly four bytes, though not all commands utilize all bytes. The first byte of each command and response is the command code. Each index is zero-based. The number of bytes for a response is dependent upon the initiating command. The error_status currently is not implemented.

• UNDEFINED
  command:  [ 0x00 ]

• SET_MOTOR
  command:  [ 0x01 | index | speed_HI | speed_LO ]

• GET_MOTOR
  command:  [ 0x02 | index ]
  response: [ 0x02 | error_status | index | speed_HI | speed_LO ]

• SET_SERVO
  command:  [ 0x03 | index | position ]

• GET_SERVO
  command:  [ 0x04 | index ]
  response: [ 0x04 | error_status | index | position ]

• GET_FSR
  command:  [ 0x05 | index ]
  response: [ 0x05 | error_status | index | fsr_value_HI | fsr_value_LO ]

• GET_FSR_ALL
  command:  [ 0x06 ]
  response: [ 0x06 | error_status
              | fsr(0)_value_HI | fsr(0)_value_LO
              ···
              | fsr(n-1)_value_HI | fsr(n-1)_value_LO ]

• GET_ACCEL
  command:  [ 0x07 | index ]
  response: [ 0x07 | error_status | index | accel_value_HI | accel_value_LO ]

• GET_ACCEL_ALL
  command:  [ 0x08 ]
  response: [ 0x08 | error_status
              | accel(0)_value_HI | accel(0)_value_LO
              ···
              | accel(n-1)_value_HI | accel(n-1)_value_LO ]

• GET_SENSOR_ALL
  command:  [ 0x09 ]
  response: [ 0x09 | error_status
              | fsr(0)_value_HI | fsr(0)_value_LO
              ···
              | fsr(n-1)_value_HI | fsr(n-1)_value_LO
              | accel(0)_value_HI | accel(0)_value_LO
              ···
              | accel(n-1)_value_HI | accel(n-1)_value_LO ]
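To make the byte layouts above concrete, the following host-side sketch shows how a client might pack these four-byte commands and unpack the corresponding responses. It is only an illustration, not the project's actual control software: the use of the pyserial library, the port name, baud rate, helper names, and the sensor count are assumptions rather than part of the firmware documentation.

# A minimal host-side sketch of the four-byte command protocol listed above.
# Assumptions not stated in the appendix: the serial transport (pyserial),
# port name, baud rate, and the number of FSR channels are all illustrative.
import serial

SET_MOTOR = 0x01
GET_FSR_ALL = 0x06

def make_command(code, *args):
    # Every command is padded out to exactly four bytes, as the firmware expects.
    return bytes([code, *args]).ljust(4, b'\x00')

def set_motor(link, index, speed):
    # SET_MOTOR: [ 0x01 | index | speed_HI | speed_LO ]
    link.write(make_command(SET_MOTOR, index, (speed >> 8) & 0xFF, speed & 0xFF))

def get_fsr_all(link, num_fsrs):
    # GET_FSR_ALL response: [ 0x06 | error_status | fsr(0)_HI | fsr(0)_LO | ... ]
    link.write(make_command(GET_FSR_ALL))
    resp = link.read(2 + 2 * num_fsrs)
    # Skip the command code and error_status bytes, then rebuild 16-bit values.
    return [(resp[i] << 8) | resp[i + 1] for i in range(2, len(resp) - 1, 2)]

if __name__ == "__main__":
    # Hypothetical port, baud rate, and sensor count.
    with serial.Serial("/dev/ttyUSB0", 115200, timeout=1.0) as link:
        set_motor(link, 0, 512)
        print(get_fsr_all(link, num_fsrs=60))

A more complete client would also verify that the echoed command code matches the request and, once implemented, inspect the error_status byte before trusting the payload.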
Appendix B

Preliminary Investigation Materials

This appendix contains supplemental information regarding the preliminary investigation from Chapter 3:

• Hapticat internals (Section B.1);
• the participant consent form (Section B.2);
• the initial questionnaire (Section B.3); and
• the post-study questionnaire (Section B.4).

B.1 Hapticat Internals

Figure B.1: The Hapticat internals. Visible are the outer shell [S], inner filling [F], tail [T], ears mechanism [E], breathing mechanism [B], purring mechanism [P], and warming element [W].

B.2 Participant Consent Form

The following is the consent form that was read and signed by each participant prior to proceeding with the preliminary user study.

THE UNIVERSITY OF BRITISH COLUMBIA
Department of Computer Science
Vancouver, B.C.

April 12, 2005

Physical User Interface Design Course Projects (CPSC 543)

Principal Investigator
Dr. Karon MacLean, Professor, Department of Computer Science, University of British Columbia

Student Investigators
Mavis Chan
Jeremy Hopkins
Haibo Sun
Steve Yohanan

Project Purpose and Procedures
This course project is designed to investigate how people interact with certain types of interactive technology. Interactive technology includes applications that run on a standard desktop or laptop computer, such as a word processor, web browser, and email, as well as applications on handheld technology, such as the datebook on the Pocket PC, and also applications on more novel platforms such as a SmartBoard (electronic whiteboard) or a Diamond Touch tabletop display. The purpose of this course project is to gather information that can help improve the design of interactive technology. You will be asked to use one or more forms of interactive technology to perform a number of tasks. We will observe you performing those tasks and analyze how the technology is used. You may be asked to complete a number of questionnaires, and we may ask to interview you to find out your impressions of the technology. You will be asked to participate in at most 3 sessions, each lasting no more than 1 hour. The sessions may also be videotaped. Videotapes will be used for analysis and may also be used for class project presentations and other research presentations in the Department of Computer Science at the University of British Columbia. You have the option not to be videotaped. Although only a course project in its current form, this project may, at a later date, be extended by one or more of the student investigators to form the basis of his/her thesis research.

Confidentiality
The identities of all people who participate will remain anonymous and will be kept confidential. The one exception is that excerpts from the videotape may be presented as described above, and your identity may be revealed through those video excerpts. Identifiable data and videotapes will be stored securely in a locked metal filing cabinet or in a password-protected computer account. All data from individual participants will be coded so that their anonymity will be protected in any reports, research papers, thesis documents, and presentations that result from this work.

Remuneration/Compensation
You will receive $5 as compensation for your participation.

Contact Information About the Project
If you have any questions or require further information about the project you may contact Karon MacLean at

Contact for information about the rights of research subjects
If you have any concerns about your treatment or rights as a research subject, you may contact the Research Subject Information Line in the UBC Office of Research Services at

Consent
We intend for your participation in this project to be pleasant and stress-free.
Your participation is entirely voluntary and you may refuse to participate or withdraw from the study at any time. Your signature below indicates that you have received a copy of this consent form for your own records. Your signature indicates that you consent to participate in this project. You do not waive any legal rights by signing this consent form.

I, ________________________________, agree to participate in the project as outlined above. My participation in this project is voluntary and I understand that I may withdraw at any time.

____________________________________________________
Participant's Signature                          Date

____________________________________________________
Student Investigator's Signature                 Date

B.3 Initial Questionnaire

The following is the questionnaire administered to participants before the start of the preliminary user study.

THE UNIVERSITY OF BRITISH COLUMBIA
Department of Computer Science
Vancouver, B.C.

"Affective Touch" User Study
Initial Questionnaire

Subject# _________    Date ___________________

Please take a moment to look at the device beside you. Without touching or interacting with the device, please fill in the blank spaces below with the requested information.

For each of the actions stated below, if you were to perform this action on the device, what do you believe is the response you would expect from the device? Choose one response from the following five:

1 = playing dead, 2 = sleeping, 3 = content, 4 = happy, 5 = upset

Action                    Response
1. Gently petting         ______________
2. Vigorously petting     ______________
3. Rubbing ears           ______________
4. Pinching body          ______________
5. Poking body            ______________
6. Hugging                ______________
7. Tickling               ______________
8. Resting hand on top    ______________
9. Shaking                ______________
10. Leaving alone         ______________

B.4 Post-Study Questionnaire

The following is the questionnaire administered to participants upon completion of the preliminary user study.

THE UNIVERSITY OF BRITISH COLUMBIA
Department of Computer Science
Vancouver, B.C.

"Affective Touch" User Study
Post-experiment Questionnaire

Subject# ____________    Date _______________________

1. What is your age?
   a. 19 or below
   b. 20 – 24
   c. 25 – 29
   d. 30 – 34
   e. 35 – 39
   f. 40 – 44
   g. 45 – 49
   h. 50 or above

2. What is your gender?
   a. Female
   b. Male

3. Which is your dominant hand?
   a. Left
   b. Right

4. On a scale from 1 – 5 (1 = low, 5 = high) state your competency with computers.

5. On a scale from 1 – 5 (1 = low, 5 = high) state your familiarity with haptic (touch) interfaces.

6. List one animal or creature that you think describes the device.

7. What gender did you think the device was?
   a. Female
   b. Male
   c. None

8. Are you a pet owner now or have you been in the past?
   a. Yes
   b. No

9. Do you come into frequent contact with animals now or in your past?
   a. Yes
   b. No

10a. If you answered "Yes" to either of the previous questions, list the pet(s) or animal(s) along with their corresponding positive and/or negative aspects. Additionally, if this was in the past (e.g., your childhood) please state when and the reason why you no longer interact with the animal(s).
Be as specific as you like.

10b. If you answered "No" to both of the previous questions, give a brief description as to why this might be (e.g., you don't like animals or are allergic to them). Be as specific as you like.

11. Give a description of your feelings when the device in the study first started to physically respond to your actions. Be as specific as you like.

12. Give any general comments and/or suggestions regarding the device in the user study. Be as specific as you like.

13. Give any general comments and/or suggestions regarding the overall user study and how it was conducted. Be as specific as you like.

Appendix C

Robot Affect Display Study Materials

This appendix contains supplemental information regarding the robot affect display user study from Chapter 5:

• the participant recruitment flyer (Section C.1);
• the participant registration web page (Section C.2);
• the participant consent form (Section C.3);
• the preliminary instructions (Section C.4);
• the user study screens (Section C.5); and
• the post-study questionnaire (Section C.6).

C.1 General Participant Recruitment

The following is the flyer used as general recruitment of participants for studies. It directs interested participants to a web page listing specific descriptions of active studies. The content was distributed through one or more of the following methods:

• An email message sent, for example, to a University of British Columbia mailing list such as the Department of Computer Science "graduate students" list.
• A printed flyer posted, for example, on the University of British Columbia campus or at Vancouver-area community centres.
• An online forum posting, for example, to a site such as craigslist.org.

Participants Needed For UBC Studies With Furry Robot

We are researchers in the Department of Computer Science at the University of British Columbia and are currently recruiting participants for one of several upcoming user studies. Our research is investigating the manner in which humans and robots communicate emotion through touch. For this purpose, we are developing a small, furry robot capable of expressing and recognizing emotion through the sense of touch.

Depending on the particular study, you may be asked to:
• interact with the robot through touch;
• attempt to judge the robot's emotional state;
• answer questions about your current emotional state.

General Information:
• The studies take approximately 1 hour.
• Typical compensation for participation is $10.
• The studies normally will be conducted at the Vancouver campus of UBC.

General Restrictions on Participation:
• You must be between the ages of 19 and 50 years old.
• You must be a native English speaker, preferably from North America.
• You may not participate in more than one study related to this robot.
Specific details regarding the individual studies, including how to register for participation, can be found via the following link:

http://

This study has been approved by The University of British Columbia; Office of Research Services; Behavioural Research Ethics Board.

C.2 Participant Registration

The following is the web page used for recruitment of participants for the robot affect display user study. Interested participants were directed to this page from the general recruitment presented in Section C.1.

The Haptic Creature Project
Creature Affect Display Study

Thank you for your interest in participating in our study! For instructions on how to sign up, please go to the bottom of the page.

Research Overview
I am a member of the SPIN research group in the Department of Computer Science at the University of British Columbia. I am recruiting participants in user studies as part of my PhD research under the supervision of Dr. Karon MacLean. Our research is investigating the manner in which humans and robots communicate emotion through touch. For this purpose, we are developing the Haptic Creature: a small, furry robot capable of expressing and recognizing emotion through the sense of touch.

Study Details
The goal of this specific study is to examine the ability of humans to recognize the emotional state of a robot through touch. You will interact with the Haptic Creature through touch while trying to determine its various emotional states. You will also answer questions about your own emotional state at various points throughout the study.

General Information
• The study will take approximately 1 hour to complete.
• You will be compensated $10 for your participation.
• The study will be conducted at the Vancouver campus of the University of British Columbia.

Restrictions on Participation
• You must be between 19 and 50 years old.
• You must be a native English speaker, preferably from North America.
• You must not have participated in any previous studies with the Haptic Creature.

This study has been approved by The University of British Columbia; Office of Research Services; Behavioural Research Ethics Board (#H01-80470). If you have any questions, do not hesitate to e-mail me: Steve Yohanan < >.

Signup Instructions
— details omitted to conserve space —

C.3 Participant Consent Form

The following is the consent form that was read and signed by each participant prior to proceeding with the robot affect display user study.

Department of Computer Science
Vancouver, B.C. Canada
tel:
fax:

PARTICIPANT'S COPY
CONSENT FORM

Project Title: The Haptic Creature Project – Creature Affect Display (UBC Ethics #H01-80470)

Principal Investigators:
Dr. Karon MacLean; Associate Professor; Dept of Computer Science;
Student Investigator: Steve Yohanan; PhD Candidate; Dept of Computer Science;

The purpose of this study is to examine the ability of humans to recognize the emotional state of robots through the sense of touch. In this study you will be asked to interact with a small robot covered in soft fur. This robotic creature, loosely resembling a small animal such as a cat, dog, or rabbit, will present a variety of synthetic emotional states. The robot will differentiate its emotions by adjusting the stiffness of its ears, modulating its breathing, and/or presenting a vibrotactile purr. You will be asked to categorize these emotional states from predefined sets. In addition, at points throughout the study you will be asked to report your current emotional state via a questionnaire. At the end of the study, you will be asked to provide general demographic information as well as feedback on your experiences during the study. You will be asked to wear ear muffs to mask external noises.

During the study you may be videotaped. Videotapes will be used for analysis and may also be used in research presentations in the Department of Computer Science at the University of British Columbia. We will contact you for explicit permission before using any video or still images taken here which could identify you in presentations outside of UBC. If you are not sure about any instructions, do not hesitate to ask.

REIMBURSEMENT:    $10
TIME COMMITMENT:  1 × 60 minute session
CONFIDENTIALITY:  You will not be identified by name in any study reports. Data gathered from this experiment will be stored in a secure Computer Science account accessible only to the experimenters.

You understand that the experimenter will ANSWER ANY QUESTIONS you have about the instructions or the procedures of this study. After participating, the experimenter will answer any other questions you have about this study. Your participation in this study is entirely voluntary and you may refuse to participate or withdraw from the study at any time without jeopardy. Your signature below indicates that you have received a copy of this consent form for your own records, and consent to participate in this study. If you have any concerns about your treatment or rights as a research subject, you may contact the Research Subject Info Line in the UBC