THE EFFECT OF SOCIALLY COMMUNICATIVE EYE CONTACT ON MEMORY

by

Sophie N. Lanthier

Bachelor of Arts, University of Waterloo, 2009
Master of Arts, University of British Columbia, 2011

DISSERTATION SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES (Psychology)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

October 2018

© Sophie N. Lanthier, 2018

The following individuals certify that they have read, and recommend to the Faculty of Graduate and Postdoctoral Studies for acceptance, the dissertation entitled:

THE EFFECT OF SOCIALLY COMMUNICATIVE EYE CONTACT ON MEMORY

submitted by Sophie Lanthier in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Psychology

Examining Committee:
Dr. Alan Kingstone, Psychology (Supervisor)
Dr. Rebecca Todd, Psychology (Supervisory Committee Member)
Dr. Frances Chen, Psychology (Supervisory Committee Member)
Dr. Susan Birch, Psychology (University Examiner)
Dr. Nicola Hodges, Kinesiology (University Examiner)

Abstract

Just by looking at someone's eyes, we can quickly infer how they feel, what interests them, and whether we've met them. Because of their value as a socially communicative cue, researchers have strived to understand how the gaze of other people influences a variety of cognitive processes. However, recent work in the field of social attention suggests that socially communicative aspects of eye gaze are not tested effectively in laboratory studies that use images of people. As attention affects many other cognitive processes, it is likely that social attention between real individuals could also affect other cognitive processes, such as memory. From previous work alone, it is unclear whether, and if so how, socially communicative eye gaze affects memory. The studies presented in this document address this issue. The first two chapters establish that socially communicative eye contact can improve verbal memory, though only in females. Chapter 3 confirms that socially communicative aspects, rather than perceptual aspects, of eye gaze drive improvements in memory. The next three chapters explore which communicative signals are responsible for the memory benefits observed in female participants. Chapter 4 eliminates the possibility that observing a head-lift is responsible for the memory effects, and confirms that eye contact is the key factor. Chapter 4 also reveals that 'social exclusion' (i.e., not being looked at) can hinder memory. Chapters 5 and 6 determine that other socially communicative signals, both non-verbal and verbal, can also modulate verbal memory. This demonstrates that a communicative signal in general, rather than one specific to the eyes, modulates memory performance. However, Chapter 6 demonstrates that a non-gaze referential cue can influence memory in male participants, which stands in contrast to the original finding that eye contact did not. Thus, males appear to process eye gaze differently from other social cues. Collectively, the results of this thesis reveal the importance of using social cues that are communicative in nature (e.g., real people) when studying human memory. While the mechanisms through which different communicative signals affect memory are at least partially shared, their effects appear to vary with the gender of the observer.
Lay Summary

The present work sought to determine whether a speaker's eye gaze could change what a listener remembers the speaker saying. The studies established that a speaker's eye contact improves memory and that seeing a speaker look at someone else hinders memory, though only in female listeners, and only when the speaker is live (i.e., appearing in person or through videoconferencing). The findings indicate that female listeners use a speaker's eye gaze to infer when to tune out or be attentive to what a speaker says. Furthermore, other communicative signals that speakers use (i.e., pointing, naming someone) affect listeners' memory similarly. However, unlike eye gaze, a speaker's hand gestures can influence what male listeners remember the speaker saying. While many communicative signals alter how listeners attend to a speaker, men and women interpret some signals differently; thus, the impact that these signals have on memory varies with the gender of the listener.

Preface

All work presented in this dissertation was conducted in the Brain and Attention Research Laboratory at the University of British Columbia, Point Grey campus. All projects and associated methods were approved by the University of British Columbia's Research Ethics Board [Towards a More Natural Approach to Attention Research 1-200, certificate #H10-00527, & Research in Cognitive Ethology, #H04-80767]. I was the lead investigator for all of the projects reported in this dissertation. I was primarily responsible for design conception, data analysis, and dissertation composition. Alan Kingstone acted in a supervisory capacity during project conception and editing. Samin Saddat, Crystal Byun, Paul Kealong, Mona Zhu, Brooke MacDonald, Fouziah Khairati, Oren Princz-Lebel, and Mikayla Pachkowski were involved in data collection. Michelle Jarick also acted in a co-supervisory capacity during the conception of the studies presented in Chapter 2.

Table of Contents

Abstract
Lay Summary
Preface
Table of Contents
List of Figures
Acknowledgements
Dedication
Chapter 1: Introduction
1.1 Chapter Overview
1.2 Eye gaze influencing attention in computer-based tasks
1.3 Eye gaze influencing attention in natural settings
1.4 Eye gaze influencing memory in computer-based tasks
1.5 Eye gaze influencing memory in natural settings
1.6 Thesis Overview
Chapter 2: Does eye contact facilitate memory for verbal information?
2.1 Study 1: Gender specific memory effects arise from eye contact
2.2 Study 2: Gender specific memory effects that arise from eye contact are not driven by investigator gender
2.3 Study 3: Gender specific memory effects that arise from eye contact are not driven by the length of perceived eye contact
2.4 General discussion
Chapter 3: Do perceived and actual eye contact have different effects on attention and memory? (Study 4)
Chapter 4: Does eye contact or a general signal associated with eye contact signal when to pay attention?
4.1 Study 5: Eye contact, but not head lifts, facilitates memory task performance
4.2 Study 6: Eye contact over Skype improves memory and social exclusion hinders it
Chapter 5: Do other social cues produce and drive memory effects?
5.1 Study 7: Hand gestures produce memory task benefits and deficits
5.2 Study 8: Verbal signals produce memory task benefits and deficits
Chapter 6: Can other social cues produce memory benefits in males? (Study 9)
Chapter 7: General Discussion
7.1 Chapter overview
7.2 Summary of thesis
7.3 Implications, limitations and future directions
7.3.1 The hierarchy of social cues
7.3.2 Exploring how cultural and individual differences affect eye gaze related memory effects
7.3.3 Exploring how eye contact affects different components of memory
7.3.4 Implications for online learning environments
7.4 Conclusion
References
Appendices
Appendix A: Word list
Appendix B: A meta-analysis of all three experiments reported in Chapter 2
Appendix C: A comparison of memory effects in response to a live investigator (Studies 1 and 2 in Chapter 2) and a videotaped investigator (Study 4 in Chapter 3)
Appendix D: A comparison between the memory effects generated in response to investigator eye gaze (Study 1) and investigator pointing (Study 9) in male participants only
Appendix E: A summary of observed effect sizes (Cohen's d) in the percentage correct data for the critical t-tests in each study

List of Figures

Figure 2.1 The depiction of the experimental setup and procedure used in Experiment 1. (a) The arrangement of the investigator, participant and laptop during the encoding phase. (b) The instructional sequence that was visible to the investigator for different trials during the encoding phase. When instructed, the investigator would lift their eyes to make eye contact with the participant as the word to read aloud appeared on screen. The investigator is depicted from the participants' perspective on each trial type. (c) The trial sequence that was presented to the participants during the recognition phase of the experiment.

Figure 2.2 RT as a function of Participant Gender (Female versus Male) and Investigator gaze (With eye contact versus Without eye contact). Note that new words have been plotted in this figure as a reference point, but were not included in the analysis.

Figure 2.3 Percentage correct as a function of Gender (Female versus Male) and Investigator gaze (With eye contact versus Without eye contact).

Figure 2.4 D prime as a function of Gender (Female versus Male) and Investigator gaze (With eye contact versus Without eye contact).

Figure 2.5 Decision bias (Beta) as a function of Gender (Female versus Male) and Investigator gaze (With eye contact versus Without eye contact).

Figure 2.6 D prime as a function of Gender (Female versus Male) and Investigator gaze (With eye contact versus Without eye contact).

Figure 2.7 Percentage correct as a function of Participant Gender (Female versus Male) and Investigator gaze (With eye contact versus Without eye contact).

Figure 2.8 D prime as a function of Participant Gender (Female versus Male) and Investigator gaze (With eye contact versus Without eye contact).
Figure 2.9 Decision bias (Beta) as a function of Gender (Female versus Male) and Investigator gaze (With eye contact versus Without eye contact).

Figure 2.10 RT as a function of Participant Gender (Female versus Male) and Investigator gaze (With eye contact versus Without eye contact).

Figure 2.11 Percentage correct as a function of Participant Gender (Female versus Male) and Investigator gaze (With eye contact versus Without eye contact).

Figure 2.12 D prime as a function of Participant Gender (Female versus Male) and Investigator gaze (With eye contact versus Without eye contact).

Figure 2.13 Decision bias (Beta) as a function of Participant Gender (Female versus Male) and Investigator gaze (With eye contact versus Without eye contact).

Figure 3.1 RT as a function of Investigator Gender (Female versus Male), Participant Gender (Female versus Male) and Investigator gaze (With eye contact versus Without eye contact).

Figure 3.2 Percentage correct as a function of Investigator Gender (Female versus Male), Participant Gender (Female versus Male) and Investigator gaze (With eye contact versus Without eye contact).

Figure 3.3 D prime as a function of Investigator Gender (Female versus Male), Participant Gender (Female versus Male) and Investigator gaze (With eye contact versus Without eye contact).

Figure 3.4 Decision bias (Beta) as a function of Investigator Gender (Female versus Male), Participant Gender (Female versus Male) and Investigator gaze (With eye contact versus Without eye contact).

Figure 4.1 The depiction of the experimental setup and procedure used in Study 5. (a) The arrangement of the investigator, participants and laptop during the encoding phase. In this example, the investigator is depicted looking at participant A, participant A's partner (i.e., Participant B), or the laptop screen. Note that during the actual experiment participant A experiences eye contact with the investigator when the investigator looks at them on a given trial, while participant B simultaneously sees the investigator make eye contact with their partner. The reverse is also true. By looking at participant B, the investigator makes eye contact with participant B, and gives participant A the impression that they are making eye contact with their partner. (b) The instructional sequence that was visible to the investigator during the encoding phase. When prompted to make eye contact, the investigator initiated eye contact as soon as the word appeared on their screen. (c) The trial sequence that was presented to the participants during the recognition phase of the experiment.

Figure 4.2 RT as a function of Investigator gaze (Participant, Partner, Screen). Note that new words have been plotted in this figure as a reference point, but were not included in the analysis.
Figure 4.3 Percentage correct as a function of Investigator gaze (Participant, Partner, Screen).

Figure 4.4 D prime as a function of Investigator gaze (Participant, Partner, Screen).

Figure 4.5 Decision bias (Beta) as a function of Investigator gaze (Participant, Partner, Screen).

Figure 4.6 The depiction of the experimental setup and procedure used in Study 6. (a) The arrangement of the investigator, participants and laptops during the encoding phase. In this example, the investigator is depicted looking at participant A, participant A's partner (i.e., Participant B), or the laptop screen. Note that during the actual experiment when the investigator looks at participant A, they are also looking at participant B's partner. The reverse is also true.

Figure 4.7 RT as a function of Investigator gaze (Participant, Partner, Screen). Note that new words have been plotted in this figure as a reference point, but were not included in the analysis.

Figure 4.8 Percentage correct as a function of Investigator gaze (Participant, Partner, Screen).

Figure 4.9 D prime as a function of Investigator gaze (Participant, Partner, Screen).

Figure 4.10 Decision bias (Beta) as a function of Investigator gaze (Participant, Partner, Screen).

Figure 5.1 RT as a function of Investigator pointing (Participant, screen, partner). Note that new words have been plotted in this figure as a reference point, but were not included in the analysis.

Figure 5.2 Percentage correct as a function of Investigator pointing (Participant, screen, partner).

Figure 5.3 D prime as a function of Investigator pointing (Participant, screen, partner).

Figure 5.4 Decision bias (Beta) as a function of Investigator pointing (Participant, screen, partner).

Figure 5.5 RT as a function of Investigator naming (Participant, no one, partner). Note that new words have been plotted in this figure as a reference point, but were not included in the analysis.

Figure 5.6 Percentage correct as a function of Investigator naming (Participant, no one, partner).

Figure 5.7 D prime as a function of Investigator naming (Participant, no one, partner).
Figure 5.8 Decision bias (Beta) as a function of Investigator naming (Participant, no one, partner).

Figure 6.1 RT as a function of whether the investigator pointed (without pointing versus with pointing). Note that new words have been plotted in this figure as a reference point, but were not included in the analysis.

Figure 6.2 Percentage correct as a function of whether the investigator pointed (without pointing versus with pointing). Note that new words have been plotted in this figure as a reference point, but were not included in the analysis.

Figure 6.3 D prime as a function of whether the investigator pointed (without pointing versus with pointing).

Figure 6.4 Decision bias (Beta) as a function of whether the investigator pointed (without pointing versus with pointing).

Acknowledgements

I would like to express a heartfelt thanks to my PhD advisor Alan Kingstone for his support, guidance and patience over the many (many) years. Alan, I appreciate the freedom you gave me to explore and pursue what has truly inspired me. I definitely wouldn't have made it through this program without you. Thank you!

I feel so fortunate to have worked with such wonderful, kind, and caring people in the BAR lab. Thank you for making the lab such a fun place to be. I owe a special thanks to all of my research assistants - Crystal, Mona, Paul, Fouziah, Samin, Brooke, Mikayla, Oren - who contributed to this work and kept me organized all these years.

Mike, Maddy and Carter, the three of you remind me every day of what really matters. I am so grateful for this. You've inspired me to laugh, play and live in the moment. Thank you for all that you teach me.

Mom and Dad, thank you for encouraging (forcing?) me to "get educated" and showing me how to work hard (even though I chose not to sometimes). I am so lucky that you are my parents. You've helped me become a stronger and better person. Harley, thank you for never bothering me with questions about my research.

Thank you to all of my friends and family who have been so loving despite the distance and (sometimes very long) time between calls. To my "Vancouver family" - you made Vancouver my home. Your love and support kept me sane all these years.

Momo, thank you for literally journeying with me to every province, city, and home I've had since undergrad. You are one well-traveled kitty!

I am also grateful for the financial support I received to do this work from the Natural Sciences and Engineering Research Council of Canada, and the University of British Columbia.

Dedication

To all the students, whose passion for psychology has motivated me.

Chapter 1: Introduction

From the moment a child is born, they are engaging in a social interaction with their mother to communicate needs and to have those needs met.
This is the earliest example of the importance of social interaction, and the ability to interpret social cues from others remains important throughout one's life. Our eyes are central to social interaction, as they convey a wealth of information about our emotional and mental states which people use to decode our behaviours and intentions (Emery, 2000). During a social interaction, people tend to look at other people's eyes to gauge whether they are interested (Argyle, Lefebvre, & Cook, 1974; Ellsworth & Ross, 1975), whether they are paying attention (Kleinke, Staneski, & Berger, 1975), and what their intentions may be (Baron-Cohen, 1995; Emery, 2000; Frischen & Tipper, 2006; Kleinke, 1986; Ristic et al., 2005; Shimojo, Simion, Shimojo, & Scheier, 2003). Accordingly, it has been argued that one's ability to attend to the eyes of others plays a critical role in understanding and facilitating social interaction (Campbell, Heywood, Cowey, Regard, & Landis, 1990; Cary, 1978; Emery, 2000; Kleinke, Staneski, & Berger, 1975; Perrett & Emery, 1994; Tomasello, Carpenter, Call, Behne, & Moll, 2005; Vertegaal, Slagter, Van Der Veer, & Nijholt, 2001). On the other hand, failing to properly attend to the eyes of others has been linked to deficits in social functioning in autism spectrum disorder (see Senju & Johnson, 2009a for a review) as well as social anxiety disorder (Schneier, Rodebaugh, Blanco, Lewin, & Liebowitz, 2011; Wieser, Pauli, Alpers, & Mühlberger, 2009). Indeed, researchers have theorized that eye gaze represents a special social attentional cue (Baron-Cohen, 1995) that may be processed by dedicated neural mechanisms (such as that revealed by activity in the superior temporal sulcus; Campbell, Heywood, Cowey, Regard, & Landis, 1990; Itier & Batty, 2009).

Researchers have attempted to study the eyes' importance as a social attentional cue by using variants of classic visual attention paradigms in conjunction with socially relevant stimuli (e.g., an image of a face looking at you). In these laboratory-based tasks, such a stimulus is presented on a computer screen and a person's eye movements, and other attentional behaviours in response to the stimulus, are recorded. Using these tasks, researchers typically find that people preferentially attend to the eyes of others and are sensitive to the signals they convey; for example, people attend to where other people look, especially when they look at them (Blais, Jack, Scheepers, Fiset, & Caldara, 2008; Böckler, van der Wel, & Welsh, 2015; Conty, George, & Hietanen, 2016; Conty, Gimmig, Belletier, George, & Huguet, 2010; Conty, N'Diaye, Tijus, & George, 2007; Doi & Ueda, 2007; Doi, Ueda, & Shinohara, 2009; Freeth, Foulsham, & Kingstone, 2013; Marino, Mirabella, Actis-Grosso, Bricolo, & Ricciardelli, 2015; George, Hugueville, Conty, Coelho, & Tijus, 2006; Itier & Batty, 2009; Mares, Smith, Johnson, & Senju, 2016; Palanica & Itier, 2011; Senju, Hasegawa, & Tojo, 2005; Senju & Johnson, 2009b; von Grunau & Anston, 1995; Vuilleumier, George, Lister, Armony, & Driver, 2005). These findings are consistent with the idea that the eyes of others are important attentional cues. While these lab tasks have made use of a variety of different social stimuli, which vary in complexity and approximation to real social interactions, the stimuli in these tasks are seldom real people.
However, recent research suggests that both the attentional behaviours and the underlying neural mechanisms engaged while interacting with a real person, who can interact with you, are fundamentally different from those exhibited while viewing an image of a person (Bailenson, Blascovich, Beall, & Loomis, 2001; Kingstone, 2009; Kingstone, Smilek, & Eastwood, 2008; Risko & Kingstone, 2015; Risko, Laidlaw, Freeth, Foulsham, & Kingstone, 2012; Risko, Richardson, & Kingstone, 2016; Smilek, Birmingham, Cameron, Bischof, & Kingstone, 2006). These concerns regarding the ecological validity of using images of people instead of real people while studying social behaviour have encouraged social attention researchers to study attentional behaviour in response to real people in more natural settings. Studies of interactions that are closer approximations to what one might encounter in real life suggest that attentional behaviours in response to another's eye gaze are (a) actually much more complex than was previously assumed and (b) influenced by the social context, which differs between the lab and real life (Gallup, Hale, et al., 2012; Gallup, Chong, & Couzin, 2012; Goffman, 1963; Hietanen, Leppänen, Peltola, Linna-Aho, & Ruuhiala, 2008; Laidlaw, Foulsham, Kuhn, & Kingstone, 2011; Pfeiffer, Vogeley, & Schilbach, 2013; Pönkänen, Alhoniemi, Leppänen, & Hietanen, 2011; Pönkänen, Peltola, & Hietanen, 2011; Schilbach, 2015; Schilbach et al., 2013; Zuckerman, Miserandino, & Bernieri, 1983). As attention is the gateway for many other cognitive processes (e.g., memory, perception), it is likely that social attention between real individuals could affect other cognitive processes as well. The present thesis investigates this idea with regard to memory.

1.1 Chapter Overview

The aims of the present chapter are two-fold. First, it provides an overview of recent lab findings on eye gaze and attention. The review focuses on three attentional behaviours that occur in response to viewing another's eye gaze, and discusses how these behaviours manifest in response to social stimuli that range from representations of faces (i.e., simplistic cartoons of faces) presented on computer screens to actual people engaging in naturally occurring social interactions. The review highlights findings from laboratory and natural contexts to illuminate the differences in attentional behaviours that occur in response to eye gaze in these different contexts. Critically, the attentional literature sheds light on the fact that the socially communicative aspects of eye gaze are not being tested as effectively as one might wish in laboratory studies that rely on images of people.

Second, this review explores how our attention to eye gaze influences subsequent memory for information¹. As attention is critical for the processing of both external information and internal thoughts, elements of attention can have a potent impact on memorability. Thus, the same concerns regarding the ecological validity of social stimuli apply to our memory in social contexts. The review details how memory researchers have approached this issue, focusing specifically on the memory effects associated with eye gaze, as these studies more closely approximate real people in socially communicative contexts.
¹ Typically, the term memory encompasses a variety of different components (e.g., sensory memory, long-term memory) and processes (e.g., the ability to encode, consolidate, and retrieve information) that can be indexed through many different tasks (e.g., free-recall tasks, cued-recognition tasks). The present thesis uses a recognition paradigm that manipulates eye gaze during encoding and short-term consolidation and only indexes memory through this one mode of retrieval. Thus, the term memory refers to the recognition process throughout this document. This issue is returned to, and discussed in some detail, in the final chapter of the thesis.

1.2 Eye gaze influencing attention in computer-based tasks

1.2.1 Preferential attention to the eyes of others

Our preference to attend to the eyes of others has been demonstrated in numerous laboratory studies where individuals freely view isolated images of faces while their gaze is monitored (Henderson, Williams, & Falk, 2005; Laidlaw, Risko, & Kingstone, 2012; Pelphrey et al., 2002; Walker-Smith, Gale, & Findlay, 1977; Yarbus, 1967). One criticism of this work is that faces are presented centrally and in isolation, resulting in the eyes being quite prominent (Andrews, Davies-Thompson, Kingstone, & Young, 2010; Bindemann, Scheepers, & Burton, 2009). However, the tendency to look at the eyes persists even when participants freely view static scenes (Birmingham, Bischof, & Kingstone, 2008a, 2008b, 2009; Castelhano, Wieth, & Henderson, 2007; Smilek, Birmingham, Cameron, Bischof, & Kingstone, 2006) and dynamic videos that display multiple people and objects (Cheng, Tracy, Foulsham, Kingstone, & Henrich, 2013; Coutrot & Guyader, 2014; Foulsham, Cheng, Tracy, Henrich, & Kingstone, 2010; Foulsham & Sanderson, 2013; Kuhn, Tatler, & Cole, 2009). Interestingly, this gaze preference becomes stronger as the people in the scene appear to interact with each other more (a finding that is true in both static, Birmingham et al., 2008b, and dynamic displays, Foulsham & Sanderson, 2013), or when viewers focus on social aspects of a scene (e.g., what people might be thinking, Birmingham et al., 2008b). For example, when viewers can see a group of people in a video and hear them talking, they will look at the eyes of the people in the videos more than when the videos are presented without sound (Foulsham & Sanderson, 2013). This suggests that the tendency to look at the eyes may be even more potent when people try to explicitly interpret the social signals conveyed by others. Critically, low-level factors, such as sounds and stimulus saliency, do not drive this preference to look at the eyes, though they do seem to enhance it (Birmingham, Bischof, & Kingstone, 2008b, 2009b; Coutrot & Guyader, 2014; Foulsham & Sanderson, 2013).

1.2.2 Attending to where others look

Once a viewer has attended to someone's eyes, they are incredibly sensitive to where that person is looking in the environment (Driver et al., 1999; Friesen & Kingstone, 1998; Langton & Bruce, 1999; Shepherd, 2010). One feature that helps us determine the direction of another's gaze comes from the physical properties of the eye itself.
These properties (i.e., a dark pupil in contrast with a light sclera), along with the neural mechanisms dedicated to processing eye gaze, have evolved to enable us to quickly and easily detect the object of another's gaze (Anderson, Risko, & Kingstone, 2011; Ando, 2004; Emery, 2000; Kobayashi & Kohshima, 1997). This is important not just because where people look indicates what they are attending to, but also because their looking behaviour is indicative of what they might do next (Land & Hayhoe, 2001; Land & McLeod, 2000; Shepherd, 2010).

To gaze-follow is to orient attention to where or what someone else is looking at in the environment. This can be achieved by actually moving the eyes and/or head (i.e., overt orienting) or by attending without moving the eyes or head (i.e., covert orienting). Following another's eye gaze is useful because it helps us to better understand not only what someone else finds interesting in the environment, but also what might be interesting to us. To examine covert gaze following in the laboratory, researchers have modified the classic Posner (1980) spatial cueing task to investigate whether people will attend to where someone else is looking. Typically, observers are presented with an image of a schematic (cartoon) face on a computer screen that looks either to the left or right side of space. Next, an object appears either in the location the face is looking at or in the mirror-opposite location. Results from these studies demonstrate that people are faster to detect and respond to objects that are being looked at than objects that are not (i.e., the gaze cueing effect; Driver et al., 1999; Friesen & Kingstone, 1998; Frischen, Bayliss, & Tipper, 2007; Langton & Bruce, 1999), even when the direction of the eyes does not reliably indicate the location of a target object (Friesen, Ristic, & Kingstone, 2004). These findings are not limited to covert attention. In gaze cueing studies where researchers have monitored overt attention (i.e., eye movements), people are much faster to move their eyes to the location that was looked at than to the location that was not. Observers will also make spontaneous eye movements to locations that have been looked at, even when those looks are non-predictive (Deaner & Platt, 2003; Mansfield, Farroni, & Johnson, 2003) or even counter-predictive (Kuhn & Kingstone, 2009) of a target location. Taken together, behavioural studies using variations of the gaze cueing paradigm suggest that gaze following is robust and can be driven automatically in response to another's gaze.
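To make the structure of this design concrete, the sketch below shows how a nonpredictive gaze-cueing trial list and the resulting cueing effect might be computed. It is a minimal illustration only: the trial count, validity proportion, timing values, and function names are assumptions made for the example, not the settings of any study reviewed here.

```python
import random
from statistics import mean

# A minimal sketch of a nonpredictive gaze-cueing design (hypothetical
# parameter values; not the code used in any study reviewed here).

def make_trials(n_trials=80):
    """Each trial: a central schematic face gazes left or right, then a
    target appears on the gazed-at (valid) or opposite (invalid) side."""
    trials = []
    for _ in range(n_trials):
        gaze = random.choice(["left", "right"])
        valid = random.random() < 0.5  # gaze does not predict the target
        opposite = "left" if gaze == "right" else "right"
        trials.append({"gaze": gaze,
                       "target": gaze if valid else opposite,
                       "valid": valid})
    return trials

def gaze_cueing_effect(results):
    """Cueing effect = mean RT on invalid minus valid trials; a positive
    value means looked-at targets were detected faster."""
    valid_rt = mean(r["rt"] for r in results if r["valid"])
    invalid_rt = mean(r["rt"] for r in results if not r["valid"])
    return invalid_rt - valid_rt

# Example with simulated RTs (valid trials slightly faster, purely
# illustrative magnitudes):
trials = make_trials()
for t in trials:
    t["rt"] = random.gauss(350 if t["valid"] else 370, 30)
print(round(gaze_cueing_effect(trials), 1))
```

Because the cue is nonpredictive, any reliable speeding on valid trials cannot reflect a deliberate strategy, which is why such results are taken as evidence that gaze following operates automatically.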
As in studies of gaze selection, viewers will spontaneously follow the gaze of another person in perceptually complex scenes (Castelhano et al., 2007; Palanica & Itier, 2011; Zwickel & Võ, 2010) and in videos (Kuhn & Kingstone, 2009; Kuhn, Caffaratti, Teszka, & Rensink, 2014; Kuhn & Martinez, 2012; Nuku & Bekkering, 2008; Rensink & Kuhn, 2015; Teufel, Alexis, Clayton, & Davis, 2010; Wiese, Wykowska, Zwickel, & Müller, 2012; Wykowska, Wiese, Prosser, & Müller, 2014). For example, Kuhn and colleagues recorded the eye movements of people in numerous studies as they watched videos of magic tricks (for reviews see Kuhn & Martinez, 2012; Kuhn et al., 2010; Kuhn, Caffaratti, Teszka, & Rensink, 2014; Rensink & Kuhn, 2015). Their task was to catch a magician's sleight of hand, and to do this people's initial tendency was to look where the magician was looking. Interestingly, gaze following could be suppressed if viewers were told that watching the magician's hands would help them to detect the trick, resulting in them preferentially attending to the magician's hands. Thus, the tendency to follow the eye gaze of others in dynamic videos depends heavily on whether one believes that the eyes convey socially relevant information.

Furthermore, the communicative intention of people displayed in videos has a profound impact on whether a viewer will follow their gaze (Wiese, Wykowska, Zwickel, & Müller, 2012; Wykowska, Wiese, Prosser, & Müller, 2014). For example, if viewers believe that the person depicted in a video decides where to look, viewers will follow their eye gaze. However, viewers will cease to follow a person's gaze if they believe that a computer is dictating where that person looks. These results indicate that our willingness to follow eye movements is highly dependent on whether a viewer believes that the person in a video is actually conveying communicative signals with their eyes.

As with the preference to attend to the eyes, the idea that people follow gaze seems to scale up to more realistic stimuli, but as task stimuli become more complex, so do the mechanisms that underlie one's tendency to follow eye gaze. Indeed, as the stimuli and tasks become more complex, gaze-following behaviour becomes more sensitive to many top-down factors including, but not limited to, task demands and beliefs regarding the person depicted in the task. These findings indicate that gaze following is not only about the perception of a person's gaze, but also about interpreting what that individual intends to communicate with their eyes.

1.2.3 Attending to people who look at us

Another way that people process someone's eyes once they are looking at them is to determine whether that person is making eye contact with them or not. Researchers have speculated that people have a special attentional mechanism dedicated to detecting and processing when someone looks directly at them, because of direct gaze's particular importance as a social cue (e.g., eye contact can indicate that someone is intending to interact with us; Hietanen, Leppänen, Peltola, Linna-Aho, & Ruuhiala, 2008; Itier & Batty, 2009; Senju & Johnson, 2009b; von Grunau & Anston, 1995). Such a mechanism would enable faster processing of a person's intentions when they look directly at an individual than when they look elsewhere (e.g., eye contact can initiate a friendly interaction or a hostile one; Argyle & Cook, 1976; Argyle & Dean, 1965; Argyle et al., 1974; Bailenson, Blascovich, Beall, & Loomis, 2001). And in doing so, one can more quickly decide how best to respond.

This idea regarding direct gaze has been tested using a variant of the visual search task, by studying whether faces looking towards a viewer capture attention more quickly than faces that look elsewhere (Doi & Ueda, 2007; Farroni, Csibra, Simion, & Johnson, 2002; Senju & Hasegawa, 2005; Senju & Johnson, 2009b; von Grunau & Anston, 1995). In a typical situation, participants determine whether a target schematic face (e.g., a face looking either towards the viewer or elsewhere) is present or absent among a set of distractor faces.
Results show that participants are faster and more accurate at detecting the presence or absence of schematic faces that look at them than those that look elsewhere (Doi, Ueda, & Shinohara, 2009; Doi & Ueda, 2007; George et al., 2006; Senju et al., 2005; Palanica & Itier, 2011; Senju, Kikuchi, Hasegawa, Tojo, & Osanai, 2008; von Grunau & Anston, 1995).

Similarly, our attention can also be held longer by the eyes when a schematic face looks directly at us than when it looks elsewhere. Gaze cueing paradigms have reported that when a face cue exhibits direct gaze prior to the onset of a target, participants take more time to detect targets than when the face's gaze is averted (Senju & Hasegawa, 2005); and this delay from direct gaze is greater than when a face gazes in the opposite direction from the target (Vuilleumier, 2002). This suggests that direct gaze attracts and retains attention at the cued location, so that orienting attention towards the target location is delayed.

When static photographs of faces are used in face detection tasks, other factors can influence whether being looked at will capture one's attention. In these tasks, participants are presented with a face on a computer screen and are asked to make a judgement about the face via button press (e.g., whether the face is male or female, whether the eyes are looking to the left or right). Viewers are often faster to make judgements about a face that looks directly at them than a face that looks elsewhere (Macrae, Hood, Milne, Rowe, & Mason, 2002; Pageler et al., 2003; Conty et al., 2007). However, the orientation of the face (Conty, Tijus, Hugueville, Coelho, & George, 2006; George, Driver, & Dolan, 2001; Itier, Van Roon, & Alain, 2011; Itier, Villate, & Ryan, 2007), the ethnicity/culture of a viewer (Blais et al., 2008; Jack, Blais, Scheepers, Fiset, & Caldara, 2007), and the gender of the face and/or viewer (Vuilleumier, George, Lister, Armony, & Driver, 2005) can influence whether direct gaze captures attention. Some work even finds that viewers are actually slower to make judgements about faces that look at them than faces that look elsewhere (Vuilleumier, George, Lister, Armony, & Driver, 2005). The reasons for these inconsistent results are still unclear, but they could be due to the use of slightly different paradigms adapted to the different methodologies used (e.g., ERPs and eye tracking in Itier, Alain, Sedore, & McIntosh, 2007; Itier, Villate, & Ryan, 2007; fMRI in Pageler et al., 2003; strictly behavioural tests in Langton, 2000; Vuilleumier, George, Lister, Armony, & Driver, 2005), which in turn create different top-down attentional sets that modify performance.

Using dynamic video displays, several eye tracking studies have found that while watching videos, viewers spend more time looking at the faces of people who look at them than faces that look elsewhere (Mojzisch et al., 2006; Wieser et al., 2009). Viewers who watched video sequences in virtual reality also spent more time looking at characters that simulated eye contact than characters that did not (Schrammel, Pannasch, Graupner, Mojzisch, & Velichkovsky, 2009).

Generally speaking, the research supports the idea that direct gaze displayed by people in images and videos attracts attention. Consistent with the literature reviewed on the general preference to attend to the eyes, as well as the tendency to follow gaze, the effect of direct gaze on attention seems to persist when more complex stimuli are used.
That said, more research is needed to understand what factors determine whether direct gaze is attention-grabbing in naturalistic situations.

1.2.4 Section Summary

Our tendencies to attend to the eyes of others, to where other people look, and to others who look at us have been demonstrated consistently across a wide variety of laboratory-based tasks. Importantly, in the main, these tendencies continue to occur when more naturalistic methods and stimuli are applied. While these data are consistent with the idea that the eyes of others are important attentional cues, the stimuli in these tasks are seldom real people. The viewer is typically a passive observer of the people depicted in scenes and videos, rather than an active participant in an interaction. During a live interaction, a participant can communicate to an observer and an observer can communicate to the participant (Gobel, Kim, & Richardson, 2015; Hayward, Voorhies, Morris, Capozzi, & Ristic, 2017; Jarick & Kingstone, 2015; Laidlaw et al., 2015; Myllyneva & Hietanen, 2016; Nasiopoulos, Risko, & Kingstone, 2015; Risko et al., 2016a; Wu, Bischof, & Kingstone, 2014). While the reviewed computer-based studies have assumed that the eyes generate attentional effects because they are a socially communicative cue, they have done so without actually examining how people attend to the eyes of others in a truly socially interactive context.

1.3 Eye gaze influencing attention in natural settings

Some studies have used tasks that allow researchers to explore how people attend to the eye gaze of others in contexts where the viewer could potentially interact with the people around them. In contrast to the studies reviewed thus far, these studies indicate that in truly interactive settings, the willingness to look at the eyes of others is not a given. Instead, the willingness to do so depends upon social norms. For example, there are plenty of situations (e.g., an elevator, a waiting room) where people avoid looking at the eyes of others during a social encounter (Cary, 1978; Foulsham, Walker, & Kingstone, 2011; Freeth et al., 2013; Gallup, Chong, & Couzin, 2012; Gobel, Kim, & Richardson, 2015; Goffman, 1963; Gregory et al., 2015; Kuhn, Teszka, Tenaw, & Kingstone, 2016; Laidlaw, Foulsham, Kuhn, & Kingstone, 2011; Laidlaw, Rothwell, & Kingstone, 2016; Patterson, Webb, & Schwartz, 2002; Wesselmann, Cardoso, Slater, & Williams, 2012; Wu, Bischof, & Kingstone, 2013; Wu et al., 2014; Zuckerman, Miserandino, & Bernieri, 1983). Consistent with this idea, Laidlaw et al. (2011) compared an individual's tendency to look at another person in a waiting room in a live and a video condition. Participants looked longer and more often at a person on a videotape than they did when the same person was live. Likewise, Foulsham et al. (2011) found that people avoid looking at other nearby pedestrians as they walk past. However, when the same participants watched videos of their walks in the lab, they were as likely to look at the pedestrians who were nearby as those who were far away. These findings suggest that although people prefer to look at people in images, when a real person is nearby people avoid looking at them.

There is also evidence to suggest that our tendency to follow the gaze of others in natural environments depends on whether or not the viewer can be seen. For example, Gallup and colleagues (2012a, b) hid a camera inside an object which was placed in a busy hallway.
The critical measure was whether pedestrians would be more likely to look at the object if someone else was already looking at it. When pedestrians approached an individual from behind (when the individual could not see them), they were more likely to look at the object than if no one had looked at it. However, pedestrians approaching the individual from the front were actually less likely to look at the object than when no one had looked at the object at all. Thus, in the context most similar to the gaze cueing paradigm, where the eyes of the individual were fully visible, the pedestrians were actually less likely to follow gaze. This suggests that one's likelihood of following gaze in a natural context is moderated by whether or not the pedestrian believes that they can be easily seen, and whether or not the potential for interaction under these conditions is high (i.e., facing one another).

The results of Gallup and colleagues (2012a, b), Foulsham and colleagues (2011), and Laidlaw and colleagues (2011) suggest that people avoid looking at other people and their eyes in more natural settings where they could potentially interact with them. This lies in stark contrast with the idea, generated in laboratory studies where images of people are used as stimuli, that people have a strong preference to attend to people and their eyes. Perhaps these discrepancies are not that surprising, since studies from laboratories and natural contexts differ in a number of important ways that could influence social attention.

First, in the lab, most participants are presented with photos and videos of people that they cannot interact with. Even in lab settings, people respond differently when they see people in photographs and in person (Hietanen et al., 2008; Itier & Batty, 2009; Pönkänen, Alhoniemi, Leppänen, & Hietanen, 2011; Pönkänen, Peltola, & Hietanen, 2011; Risko, Laidlaw, Freeth, Foulsham, & Kingstone, 2012; Schilbach et al., 2013; Teufel, Fletcher, & Davis, 2010). For example, being looked at by someone can generate neurological activity associated with approach behaviours (presumably reflecting one's processing of a potential interaction), but only when the person is real (Hietanen et al., 2008; Pönkänen, Alhoniemi, et al., 2011; Pönkänen, Peltola, et al., 2011). Thus, eye gaze effects are at least partially dependent on whether or not you can interact with the person you are looking at. It follows that studies that use live people or imply the presence of others may better assess how a potential social interaction contributes to social behaviour (De Jaegher, Di Paolo, & Gallagher, 2010; Risko et al., 2012; Schilbach, 2010).

Second, in natural settings, social norms, rather than the investigator, dictate whether looking at someone is appropriate or not (Laidlaw et al., 2011; Risko & Kingstone, 2011; Wu et al., 2014; Wu et al., 2013; Goffman, 1963). Participants in most lab-based tasks are instructed by the investigator to look at the stimuli (i.e., people) that are presented on a computer screen. Consistent with this idea, when participants are instructed not to look at the eyes in a computer task, they do not look at them as much, even though they will still occasionally glance at the eyes (Kuhn et al., 2008; Laidlaw et al., 2012).
Even in real life, it seems that most people will occasionally look at other people (e.g., when the other person is unlikely to notice being looked at), even when it may be inappropriate to do so (Foulsham et al., 2011; Gallup, Hale, et al., 2012; Gallup, Chong, et al., 2012).

There are, of course, many natural contexts where it is socially acceptable to look at other people (e.g., sharing lunch with a friend, or simply observing people, as when watching actors in a play or delivering a lecture to students). With regard to the potential for interaction, these contexts are analogous to the laboratory studies. Research suggests that in these contexts people do look at one another (Freeth et al., 2013; Kuhn & Tatler, 2005; Tatler & Kuhn, 2007; Wu et al., 2014; Wu et al., 2013). In a live setting where people are asked to watch a magic trick, people will follow the gaze of a magician (Kuhn & Tatler, 2005; Tatler & Kuhn, 2007). In fact, participants seemed unable to avoid looking at and following the eyes of the live magician, even when they had been instructed not to. Interestingly, while watching a video of the magician's trick, participants could easily prevent themselves from following the magician's gaze when instructed not to. These studies suggest that when it is socially acceptable to look at the eyes, the attentional effects of the eyes might even be stronger when a person is real. In a more interactive context, Freeth et al. (2013) monitored the eye movements of individuals as they were interviewed by an investigator in person or over video. Participants looked at the interviewer more while the interviewer made eye contact than when the interviewer looked elsewhere. However, this was not true when the investigator simulated eye contact over video. Thus, seeing someone look at you in a live setting (i.e., establishing eye contact) can capture your attention (Freeth, Foulsham, & Kingstone, 2013a; Patterson et al., 2007; Patterson et al., 2002), which is consistent with findings from lab studies (Conty et al., 2007; Doi et al., 2009; Doi & Ueda, 2007; George & Conty, 2008; George et al., 2006; Itier, Villate, et al., 2007; Palanica & Itier, 2011; Senju et al., 2005, 2008; von Grunau & Anston, 1995).

Perhaps most convincingly, Wu et al. (2013, 2014) found that when participants ate together, people tended to look at each other while they ate (even though they would look away when the other person was about to take a bite of food). When people ate alone, they tended to look at their food. Interestingly, pairs who talked with each other more also looked at each other more. Taken together, the results suggest that when people are interacting with each other, it is the social norm to look at each other; and in these contexts, people do look at the eyes of others, which is consistent with findings in laboratory settings.

1.3.1 Section Summary

The concerns regarding the ecological validity of using images of people instead of real people while studying social behaviour have encouraged social attention researchers to study attentional behaviour in response to real people in more natural settings.
The key difference between images and live interactions with real people is that in a live interaction there is a two-way dialogue: a participant can communicate signals to an observer and the observer can signal to the participant (Conty, George, & Hietanen, 2016; Gobel, Kim, & Richardson, 2015; Jarick & Kingstone, 2015; Myllyneva & Hietanen, 2015, 2016; Nasiopoulos, Risko, & Kingstone, 2015; Risko & Kingstone, 2015; Risko et al., 2012; Risko, Richardson, & Kingstone, 2016). Moreover, the attentional behaviours and associated neural mechanisms at work while interacting with a real person, who can interact with you, appear to be fundamentally different from those exhibited while viewing an image of a person. Studies of interactions that are closer approximations to what one might encounter in real life have revealed that attentional behaviours in response to another's eye gaze are actually much more complex than was previously assumed and are influenced by a socially communicative context that differs between the lab and real life.

1.4 Eye gaze influencing memory in computer-based tasks

1.4.1 Preferential attention to the eyes in face recognition tasks

People prefer to attend to the eyes of faces that are presented in laboratory-based memory tasks (Walker-Smith, Gale, & Findlay, 1977; Henderson, Williams, & Falk, 2005), and this preference to look at the eyes may actually help a viewer remember a face (Adolphs et al., 2005; Gosselin & Schyns, 2001; Henderson, Falk, Minut, Dyer, & Mahadevan, 2001; Henderson et al., 2005; Laidlaw & Kingstone, 2017; Mckelvie, 1976; Schyns, Bonnar, & Gosselin, 2002; Vinette, Gosselin, & Schyns, 2004). For example, when participants are asked to study different faces for a later memory test, they will spend most of their time looking at the eyes of the faces and relatively little time looking at any other facial features (Henderson, Williams, & Falk, 2005). The same is true during the recognition portion of these tasks. Importantly, if participants are explicitly told that they will be asked whether they recognize these faces after they finish studying them, then they will spend even more time looking at the eyes while they study the faces than when they are not explicitly told about the upcoming recognition test (Henderson, Falk, Minut, Dyer, & Mahadevan, 2001). This latter finding lends support to the idea that people look at the eyes of others during these tasks for the purpose of encoding and remembering faces, rather than simply because attention is drawn to the eye region by its saliency (Althoff & Cohen, 1999; Luria & Strauss, 1978; Mertens, Siegmund, & Grüsser, 1993).

Since the eyes are looked at so frequently during these recognition tasks, researchers suspected that the eyes could be more important than other facial features for remembering faces. Researchers have tested the importance of the eyes relative to other facial features by hiding or removing the eye region of a face presented during either the encoding or the recognition phase of recognition tasks. When the eyes of a face are hidden, participants are less likely to remember seeing the face than when a different feature is missing (Gosselin & Schyns, 2001; Haig, 1986; Mckelvie, 1976), which supports the idea that the eyes are more important for face recognition than other facial features. These findings converge with recent evidence that, relative to other facial features, the eyes improve the identification (Caldara et al., 2005), detection (Lewis & Edmonds, 2003) and discrimination of faces (Schyns et al., 2002; Vinette et al., 2004).
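The recognition studies discussed in this section, like the studies reported in later chapters, typically summarize performance with percentage correct together with the signal-detection measures d prime (sensitivity) and decision bias (Beta) that appear throughout the List of Figures. As a rough illustration of how these measures are derived from hits and false alarms, here is a minimal sketch; the correction applied to extreme rates is one common convention and is an assumption of the example, not a procedure taken from any particular study.

```python
from math import exp
from statistics import NormalDist

# A minimal sketch of the signal-detection measures used to summarize
# old/new recognition performance (illustrative only).

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Return d' (sensitivity) and Beta (decision bias) from the
    response counts of an old/new recognition test."""
    z = NormalDist().inv_cdf
    # Nudge rates away from 0 and 1, where z would be undefined
    # (one common correction, assumed here for illustration).
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    zh, zf = z(hit_rate), z(fa_rate)
    d_prime = zh - zf                    # larger = better old/new discrimination
    beta = exp((zf ** 2 - zh ** 2) / 2)  # >1 = conservative, <1 = liberal
    return d_prime, beta

# Example: 40 old and 40 new words in a recognition phase.
print(sdt_measures(hits=32, misses=8, false_alarms=10, correct_rejections=30))
```

Separating sensitivity (d prime) from bias (Beta) matters because a manipulation such as eye contact could, in principle, change participants' willingness to respond "old" without changing how well they actually discriminate old from new items.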
These findings converge with recent evidence that, relative to other facial features, the eyes improve the identification (Caldara et al., 2005), detection (Lewis & Edmonds, 2003) and discrimination of faces (Schyns et al., 2002; Vinette et al., 2004).

1.4.2 Eye gaze direction and memory for faces and semantic information

The people who are most likely to interact with us are usually those with whom we make eye contact (Argyle & Dean, 1965; Itier & Batty, 2009; Kampe, Frith, & Frith, 2003; Myllyneva & Hietanen, 2015). Given that being looked at by someone is such a salient attentional cue and that people who look at us are preferentially attended, it is likely that recognition is also enhanced for faces that look at us relative to those that look elsewhere. Indeed, relative to averted gaze, direct gaze facilitates the identification, recognition, and perception of people's faces in simple recognition tasks (Farroni et al., 2002; Hood, Macrae, Cole-Davies, & Dias, 2003; Laidlaw & Kingstone, 2017; Mason, Hood, & Macrae, 2004; Smith, Hood, & Hector, 2006). For example, faces that display direct gaze are more memorable than faces that look elsewhere (Hood, Macrae, Cole-Davies, & Dias, 2003; Macrae et al., 2002; Mason et al., 2004; Smith et al., 2006). That said, it is unclear whether these findings persist in more complex recognition tasks (Daury, 2009, 2011). Moreover, sometimes memory is only improved for faces that are oriented a certain way (Vuilleumier et al., 2005), or for one gender and not the other (Vuilleumier et al., 2005; Goodman, Phelan & Johnson, 2012).

Interestingly, some research suggests that gaze-related face processing benefits can extend beyond facial information to semantic information. That is, people can more easily process information (i.e., letter and number strings) when it has previously been associated with a face that appears to be looking at them (Falck-Ytter, Carlström, & Johansson, 2014; Fry & Smith, 1975; Kelley & Gorham, 1988; Macrae et al., 2002). Macrae and colleagues (2002) presented participants with faces that either looked at them or away from them, followed by a string of letters. The participants' task was to indicate whether the letter string was a word or not. Critically, participants were faster to correctly categorize letter strings as words when they were primed by a face that looked towards them rather than away from them. This study represents an important step in the field because it is the first to suggest that someone looking directly at us, relative to looking elsewhere, can enhance processing of information that is not directly tied to the face or eyes.

Dodd, Weiss, McDonnell, Sarwal and Kingstone (2012) extended this idea by examining how averted gaze, rather than direct gaze, could influence memory for semantic information. Given how important gaze following is as an attentional cue, it is perhaps surprising that only one study has examined whether following another's gaze, a phenomenon well documented in the attention literature, can influence memory. Based on the findings that people pay attention to objects that someone else looks at, Dodd et al. reasoned that an object would be more memorable if someone had looked at it than if someone had not.
To explore this idea, they used a typical gaze cueing paradigm in which a schematic face looked either to the right or left side of the screen and words appeared either in the location that the face was looking at or in the opposite location. Rather than being required to detect the word once it appeared on the screen, participants were instructed to simply memorize the words for a later memory test. After completing all of the trials, participants were given 5 minutes to recall as many words as possible. In the first experiment, gaze direction had no effect on memory for words.

However, in Dodd et al.'s first experiment, the words remained on the screen for a relatively long time (i.e., 1000 ms). Gaze cues are known to affect attention very early after an item is presented (Friesen & Kingstone, 1998), and the observed attentional benefits of gaze cues are greatly reduced or eliminated when items are left on the screen for more than 100 ms following a gaze cue. Thus, presenting the word on screen for a relatively long time in Experiment 1 may have eliminated the effect that eye gaze had on attention, and any subsequent memory benefits, which would have been apparent had the words been presented on screen for less time. Indeed, in a second experiment that was identical to the first, except that words were left on the screen for only 250 or 500 ms, participants were more likely to remember words that had been looked at. These findings were replicated in a third experiment, both when participants believed they were performing a memory task and when they believed they were performing a target detection task. Finally, the observed memory benefit appears to be specific to gaze cues, as there was no memory effect when an arrow cue was used in place of a gaze cue in a fourth experiment. Taken together, this series of studies suggests that eye gaze can modify memory for semantic information. However, it should be noted that the authors primarily attributed the memory effect to a decrement in memory for words that were not looked at/attended rather than an enhancement in memory for words that were looked at/attended. Thus, when someone's eyes direct our attention away from an object, it becomes less memorable.

In contrast, other research suggests that a benefit of being looked at could be exclusive to faces, rather than a more general processing benefit for any information presented while someone appears to look at you (Conty, Gimmig, Belletier, George, & Huguet, 2010). The idea here is that eye contact automatically captures attention, and as such, cognitive resources are dedicated to processing eye gaze rather than processing other information that is presented at the same time. According to this idea, participants should have more difficulty processing information that is presented at the same time as eyes that look towards them than eyes that look elsewhere. To test this idea, Conty et al. used a variant of the Stroop paradigm in which participants are asked to identify the colour of the ink that a word is written in. Critically, the words could be presented with a set of eyes that looked towards the participant, looked away from the participant, or were closed.
Participants in these tasks typically take longer to identify the ink colour of a word that names a different colour (e.g., the word GREEN written in blue ink) than of a word that names the same colour (e.g., the word GREEN written in green ink) or a word that is neutral with respect to the ink colour (e.g., the word HOUSE written in green ink). This effect is known as Stroop interference. Consistent with the idea that being looked at by someone captures attention, participants showed more Stroop interference during the task when a set of eyes looked towards them than when the eyes were closed or looking elsewhere. This occurred despite participants being told that the eyes were not useful for completing the task and should be ignored. In contrast to the finding that being looked at helps one process information not directly related to the face (Macrae et al., 2002), this finding suggests that one's ability to process general information is hindered while being looked at. It is likely that methodological differences between these tasks could account for the conflicting results. In Macrae et al. (2002) eye gaze was presented just before the words, whereas in Conty et al. (2010) the eye gaze and words were presented simultaneously. It is possible that when direct eye gaze precedes information it acts as a signal to pay attention, thereby facilitating processing of subsequent information (Csibra & Gergely, 2009). However, when direct eye gaze is presented at the same time as other information, it may compete for attentional resources and thus hinder the selection and processing of that information (Conty, Gimmig, et al., 2010).

1.4.3 Section Summary

Almost all of the research on eye gaze direction and memory has focused on how people remember faces that display direct and averted gaze, with a few notable exceptions. For example, people remember objects better when a schematic face has previously looked at these objects (Dodd et al., 2012), and people remember words better when a face-image looks at them just before the words are presented (Macrae et al., 2002). Although these findings are consistent with numerous laboratory studies suggesting that people preferentially attend to objects that other people look at, our ability to generalize them is limited. Unlike laboratory studies of attention that make use of a wide array of stimuli that vary in complexity, nearly all of the studies reviewed have relied on static images of faces (with the exception of one study, Humphrey & Underwood, 2010), and no research to date has examined whether being looked at or not plays an important role in memory for complex static and dynamic images. Thus, while the reviewed findings are promising, there is a substantive gap in understanding whether the eyes will also play an important role in memory when the gaze is delivered by stimuli that more closely approximate a real person.

1.5 Eye gaze influencing memory in natural settings

In stark contrast to laboratory studies, the research that has been conducted in natural contexts has focused more on whether a speaker's eye gaze could influence memory for what the speaker says, rather than for what a person looks like. Importantly, this work has been conducted entirely in contexts where the social norm, either explicitly or implicitly, has encouraged observers to look at the speaker.
Generally speaking, the finding from laboratory studies that direct gaze helps one remember semantic information (Dodd et al., 2012; Macrae et al., 2002) appears to extend to contexts where information is presented verbally by live speakers who make periodic eye contact with an audience (Fullwood & Doherty-Sneddon, 2006; Helminen, Pasanen, & Hietanen, 2016; Otteson & Otteson, 1979; Sherwood, 1987). For instance, Otteson and Otteson (1979) observed that male children whose teacher looked at them while reading a story remembered the story better than children who were not looked at. Sherwood (1987) also demonstrated that learning could be enhanced in a classroom when live instructors made eye contact with members of the audience, but not when the instructor was presented over video. In an experiment by Fullwood and Doherty-Sneddon (2006), participants remembered more information when speakers gazed into the camera (to simulate eye contact) than when speakers looked away from the camera. In a similar study with a live storyteller, male participants remembered more details from a story told by a male storyteller who looked at them relative to a storyteller who looked away, but female participants did not (Helminen et al., 2016). The general interpretation of these findings is that when eye contact accompanies information it serves to signal one's intent to communicate important information that requires attention (Csibra & Gergely, 2009; Duncan & Niederehe, 1974; Duncan, 1972; Kampe, Frith, & Frith, 2003; Senju & Johnson, 2009; though see Helminen, Pasanen, et al., 2016, for an alternative account whereby physiological arousal mediates the relationship between eye gaze and memory).

1.5.1 Section Summary

At first blush, the research from natural settings seems to suggest that eye contact enhances the processing and retention of information. However, as alluded to previously, this conclusion is contradicted by other laboratory research suggesting that eye contact actually hinders performance in some contexts (Beattie, 1981; Conty, Gimmig, Belletier, George, & Huguet, 2010; Nemeth, Turcsik, Farkas, & Janacsek, 2013). For example, Conty et al. (2010) found that displaying images of faces that looked toward a participant interfered with performance on a standard cognitive interference task, whereas faces that displayed averted gaze did not. These studies support the idea that eye contact is a source of distraction that draws on attentional and other cognitive resources (Goldfarb et al., 1995; Nemeth et al., 2013), and that as a result, information associated with eye contact is processed less deeply and subsequently remembered less well.

To reconcile these contradictory findings, it is plausible that the timing of when eye contact is made is of critical importance. For example, when eye contact is made just before information is presented (Falck-Ytter, Carlström, & Johansson, 2014; Fry & Smith, 1975; Kelley & Gorham, 1988; Macrae et al., 2002), it may act as a signal to pay attention, which leads to enhanced information processing. In contrast, when eye contact is made while information is presented, it may compete for attentional resources that would otherwise be dedicated to processing the information (Conty, Gimmig, et al., 2010; Helminen et al., 2016; Nemeth et al., 2013).
While this proposal is interesting, there are more profound issues to be addressed first, particularly when considering studies involving live speakers, for these investigations have not actually measured and/or systematically manipulated when a given participant actually experiences eye contact with the investigator. For example, in past studies listeners were normally exposed either to a speaker who never made eye contact with any listener in an audience or to one who periodically made eye contact with some undefined subset of listeners (Fullwood & Doherty-Sneddon, 2006; Helminen, Pasanen, & Hietanen, 2016; Otteson & Otteson, 1979; Sherwood, 1987). As such, it is unclear how much eye contact a listener actually makes with a speaker (if they experience any eye contact at all). It is also unclear whether the specific information that listeners recalled was actually the information that was presented when the speaker looked at them.2

2 In one instance the researchers confirmed that listeners looked at a speaker who never made eye contact during a story or one who periodically made eye contact (Helminen, Pasanen, & Hietanen, 2016). However, the speaker's eye contact was not systematically controlled throughout the story nor was it monitored, and because of this it is not possible to determine whether the specific information that listeners recalled was actually the information that was presented when the speaker looked at them.

Although studies from both laboratories and natural settings converge on the general finding that eye gaze direction may modulate memory, it seems that previous work has been limited by the failure to establish a paradigm that is controlled enough to study the effect of eye gaze without compromising the signals that eye contact provides in a natural setting. Laboratory investigations have used rigorous paradigms to demonstrate that the human memory system is highly sensitive to the eye gaze of others. However, the socially communicative function of eye gaze has been constrained in these studies since they have relied entirely on images of people to test the effect of gaze on memory. Our ability to generalize these laboratory findings to settings where eye contact is made between real people is limited, since the socially communicative signals conveyed through eye contact are not present with images, and the communicative component may very well have different effects on memory. Though the effects of eye gaze on memory have been explored in natural settings, where eye contact is socially communicative, these investigations have not been rigorous enough to provide firm conclusions with respect to the effect of eye contact and gaze direction on memory. Investigations that bridge the gap between the research conducted in laboratories and in natural settings will be beneficial for determining whether, and if so how, socially communicative eye contact and gaze direction modulate memory.

1.6 Thesis Overview

The goal of this dissertation is to explore and clarify whether socially communicative gaze (e.g., eye contact) can enhance memory for spoken information, by using a rigorous paradigm that directly tests this issue. The review presented in Chapter 1 highlights the disconnection that exists between the investigations that explore how eye gaze affects memory in the laboratory and in natural settings.
Through this review, it is apparent that studies that draw on the collective strengths of laboratory studies and investigations from natural settings may clarify how eye gaze affects the human memory system in socially communicative contexts. This thesis reports a series of studies that manipulate the socially communicative eye gaze of a live speaker in the context of a traditional recognition paradigm used frequently in the laboratory. Through this investigation, I hope to gain insight into how socially communicative eye gaze influences human memory. Specifics regarding this research are briefly summarized below.

The first two empirical chapters (Chapters 2 and 3) test the assumption that socially communicative aspects of eye gaze may impact human memory, with Chapter 2 establishing that socially communicative eye contact does influence verbal memory. A female or male investigator read words aloud and varied whether eye contact was, or was not, made with a participant.3 With both female and male investigators, eye contact improved word recognition only for female participants. This suggests that females are more attentive to nonverbal behaviour than males. The reasons for this gender difference are then explored. Chapter 3 reveals that socially communicative aspects, rather than perceptual aspects, of eye gaze are critical for improving memory. This was done by replicating key aspects of the previous experiments in a non-communicative situation (i.e., when a video of a speaker is used instead of a live speaker). Under these conditions, the memory improvements that were observed previously in response to socially communicative eye gaze are eliminated. Together, these chapters suggest that it is the socially communicative aspects of eye gaze that drive the memory enhancement.

3 Gender is only one of many factors that could modify the impact that a speaker's eye gaze has on memory. Indeed, there are many cultural (e.g., Blais et al., 2008; Jack et al., 2007; Knapp, Hall, & Horgan, 2009; Patterson et al., 2007) and individual differences (e.g., Dawson et al., 1998; Freeth, Foulsham, & Kingstone, 2013) that are known to influence how individuals pay attention to the eyes of others. Given how these factors influence attention, it is probable that they might also affect memory in response to eye gaze. While the present work does not examine cultural or individual differences as they relate to gaze-related memory effects, this is most certainly a promising, and necessary, line of investigation for a future program of research.

There are many different social signals that co-occur with eye gaze. The next three chapters (Chapters 4-6) examine which communicative signals may be responsible for producing the previously observed memory effects in female participants. Chapter 4 resolves whether eye contact improves memory performance or averted gaze reduces it. This chapter also dissociates a speaker's eye contact from an anticipatory head movement. While reading the words in person (Study 5) or via Skype (Study 6) to two participants, the investigator alternated making eye contact with one participant and then the other, or looked down and away from both participants (baseline condition). Two important findings emerged. Word recognition improves when a participant makes eye contact with the investigator.
Moreover, when all individuals are in the same room (Study 5), word recognition is worse than baseline for the participant with whom eye contact is not made. It is eye contact, and not merely perceiving a head-lift, that improves memory; and interestingly, 'social exclusion' (i.e., eye contact being made with someone else) may hinder it. Further, both of these memory effects can be generated merely by the belief that eye contact is being made over Skype.

While eye contact is a special attentional cue, it is not the only socially communicative cue. Chapter 5 examines whether other socially communicative signals, both non-verbal and verbal, can also modulate memory. The findings indicate that they can. In Study 7, the investigator read words aloud to two participants while alternating between pointing at either of the participants (without eye contact) or at neither participant (baseline condition). In Study 8, the investigator called out the name of either participant, or neither participant, before reading a word from a list. Results indicate that participants performed best on the recognition task when words were previously read by an investigator pointing at them or saying their name, and worse when the investigator pointed at or called out the name of the other participant. These findings demonstrate that the memory effects reported in the previous studies are not specific to eye gaze, and suggest that the key is that the cue be perceived to signal that the spoken information is for a particular participant (i.e., that the cue is referential).

Chapter 6 tests the idea that a non-gaze referential cue could elicit memory benefits in male participants, who may have struggled to process the eye gaze rather than the referential signal. Recall that in Chapter 2, females, but not males, showed memory benefits from eye contact. In Study 9, an investigator read words aloud to male participants and either pointed at the participant or did not point (baseline) before reading each word. Male participants displayed a memory benefit for words accompanied by pointing. This finding indicates that, unlike eye gaze, a different referential cue (i.e., pointing) can elicit verbal memory benefits in males. This converges with a growing body of research suggesting that men process eye gaze differently from other referential cues. Chapter 7 presents a general discussion of the findings in this thesis, with particular focus on the issue of eye gaze and its impact on memory. Future research directions are also considered.

Chapter 2: Does eye contact facilitate memory for verbal information?

Researchers have explored how people attend to the eyes of others via laboratory studies.
Using different tasks (e.g., free viewing: Birmingham et al., 2008a, 2009a; Foulsham et al., 2010; Foulsham & Sanderson, 2013; Kuhn et al., 2009; Mojzisch et al., 2006; Schrammel et al., 2009; Walker-Smith et al., 1977; Yarbus, 1967; attentional cueing: Driver et al., 1999; Friesen & Kingstone, 1998, 2003; Kuhn et al., 2014; Langton & Bruce, 1999; Palanica & Itier, 2011; Rensink & Kuhn, 2015; Ristic, Friesen, & Kingstone, 2002; Senju & Hasegawa, 2005; Vuilleumier, 2002; Wiese et al., 2012; Wykowska et al., 2014; Zwickel & Võ, 2010; visual search: Doi, Ueda, & Shinohara, 2009; Doi & Ueda, 2007; George et al., 2006; Senju et al., 2005, 2008; Palanica & Itier, 2011; von Grunau & Anston, 1995; and face detection: Conty et al., 2007, 2006; Itier, Van Roon, & Alain, 2011; Itier, Villate, et al., 2007; Macrae et al., 2002; Pageler et al., 2003; Vuilleumier et al., 2005) and a variety of stimuli (e.g., images of faces: Laidlaw et al., 2012; complex scenes: Birmingham et al., 2008a, 2009a; Vuilleumier et al., 2005; and dynamic videos: Foulsham et al., 2010; Foulsham & Sanderson, 2013; Kuhn et al., 2014; Wykowska et al., 2014), these studies show that people prefer to look at the eyes over any other feature of the face (Birmingham et al., 2008, 2009; Laidlaw, Risko, & Kingstone, 2011; Levy, Foulsham, & Kingstone, 2012) and that individuals are extremely sensitive to the signals the eyes convey (e.g., people attend to where other people look, especially when they look at them: Conty, N'Diaye, Tijus, & George, 2007; Doi & Ueda, 2007; Freeth, Foulsham, & Kingstone, 2013; George, Hugueville, Conty, Coelho, & Tijus, 2006; Senju, Hasegawa, & Tojo, 2005; Senju & Johnson, 2009b; von Grunau & Anston, 1995; Vuilleumier, George, Lister, Armony, & Driver, 2005). More recently, researchers have asked similar questions about how we attend to the eyes in more natural settings, where the stimuli are live people instead of images (Foulsham, Walker, & Kingstone, 2011; Freeth, Foulsham, & Kingstone, 2013; Gallup, Hale, et al., 2012; Gallup, Chong, & Couzin, 2012; Gallup, Chong, Kacelnik, Krebs, & Couzin, 2014; Gobel, Kim, & Richardson, 2015; Kuhn & Tatler, 2005; Kuhn, Teszka, Tenaw, & Kingstone, 2016; Kuhn, Tatler, Findlay, & Cole, 2008; Laidlaw, Foulsham, Kuhn, & Kingstone, 2011; Laidlaw, Rothwell, & Kingstone, 2016; Patterson et al., 2007; Patterson, Webb, & Schwartz, 2002; Risko, Laidlaw, Freeth, Foulsham, & Kingstone, 2012; Tatler & Kuhn, 2007; Wesselmann, Cardoso, Slater, & Williams, 2012; Wu, Bischof, & Kingstone, 2013; Wu, Bischof, & Kingstone, 2014; Zuckerman, Miserandino, & Bernieri, 1983). These studies reveal that the way people respond, both behaviourally and neurologically, to a real person they can interact with is often entirely different from the way they attend to an image of a person. For instance, in socially communicative settings where interactions with live people can occur, people will only look at one another if it is socially acceptable to do so and will avoid looking at one another if it is not.

While these concerns regarding the ecological validity of attention to social stimuli may also apply to memory for social stimuli, images of people are used as stimuli in most of the work investigating how eye gaze affects memory.
Some of these studies report that memory for an image of a face (Hood, Macrae, Cole-Davies, & Dias, 2003; Macrae et al., 2002; Mason et al., 2004; Smith et al., 2006) and for words (Falck-Ytter, Carlström, & Johansson, 2014; Fry & Smith, 1975; Kelley & Gorham, 1988; Macrae et al., 2002) is improved when these stimuli are associated with direct gaze (though see Beattie, 1981; Conty, Gimmig, Belletier, George, & Huguet, 2010; Nemeth, Turcsik, Farkas, & Janacsek, 2013). In the rare investigations that have used live people as stimuli, a speaker's eye contact has been associated with improved memory for what the speaker has said (Fullwood & Doherty-Sneddon, 2006; Helminen, Pasanen, & Hietanen, 2016; Otteson & Otteson, 1979; Sherwood, 1987). For instance, Sherwood (1987) suggested that learning could be enhanced in a classroom when instructors made eye contact with members of the audience. The research using images and live people as stimuli seems to indicate that eye contact enhances the processing and retention of information. However, other laboratory research suggests eye contact may actually hinder performance (Beattie, 1981; Conty, Gimmig, Belletier, George, & Huguet, 2010; Nemeth, Turcsik, Farkas, & Janacsek, 2013), consistent with the notion that eye contact draws attention and other cognitive resources away from the task at hand (Goldfarb et al., 1995; Nemeth et al., 2013). Furthermore, a basic limitation of past studies using live speakers is that researchers have generally not measured, but only assumed, that a given participant actually experienced eye contact with the investigator. As such, it remains unclear whether mutual eye contact actually enhances or hinders memory for verbal information. What is needed is a paradigm that is controlled enough to study the effect of eye gaze without compromising the signal that eye contact provides in a natural setting (Conty et al., 2016; Helminen, Pasanen, & Hietanen, 2016; Jarick & Kingstone, 2015; Myllyneva & Hietanen, 2015; Nasiopoulos et al., 2015; Risko & Kingstone, 2015; Risko et al., 2016). The goal of the present work is to develop a rigorous paradigm that avoids this limitation and enables one to investigate whether eye contact enhances or hinders memory for spoken information.

As previous laboratory work has successfully measured other gaze-related memory effects using recognition tests (e.g., gaze cueing to visual word stimuli presented on a computer screen: Dodd et al., 2012; Falck-Ytter, Carlström, & Johansson, 2014; Fry & Smith, 1975; Hood, Macrae, Cole-Davies, & Dias, 2003; Kelley & Gorham, 1988; Macrae, Hood, Milne, Rowe, & Mason, 2002; Mason, Hood, & Macrae, 2004; Smith, Hood, & Hector, 2006), the studies presented in this and subsequent chapters will use a variant of these classic recognition tasks. The basic methodology is as follows. In an initial study phase, a participant will be seated across from an investigator who reads words out loud. Critically, before the investigator reads each word s/he will either look up to make eye contact with the participant or keep gaze down at the computer screen to avoid eye contact. Afterwards, the participant will perform a recognition test containing the words studied with eye contact, the words studied without eye contact, and new words. The key dependent measure will be recognition accuracy.
During the study, and in order to systematically control what information is presented with eye contact, a laptop computer screen, visible only to the investigator, will indicate the word to be read aloud and instructions on whether or not to make eye contact with the participant on a given trial. Participants will also be instructed to make eye contact with the investigator during the experiment and to look at the investigator's eyes if making eye contact is not possible (i.e., when the investigator is looking down at the screen rather than at the participant). The investigator will monitor whether the participant makes eye contact, and participants who fail to make eye contact throughout the experiment will be excluded. In previous work, direct eye gaze has both enhanced and hindered memory performance, so the effect that gaze could have in the present studies was very much an open question.

As noted previously, gender has been suggested as a modulating factor in the effect of eye gaze. Often eye contact helps all participants recognize a face, regardless of their gender (Hood et al., 2003; Macrae et al., 2002; Mason et al., 2004; Smith et al., 2006). However, sometimes memory is only improved for faces that are the same gender as the participant (Vuilleumier et al., 2005). Further, in some contexts one gender will benefit from eye contact, but the other will not (e.g., Goodman, Phelan, & Johnson, 2012; Otteson & Otteson, 1979). Finally, researchers have observed gender differences in how attentive participants are to the eyes (Connellan et al., 2000; Lutchmaya et al., 2002a) and the nonverbal signals of others (Hall, 1978; McClure, 2000; Rosenthal et al., 1979), as well as how responsive participants are to these signals (e.g., females maintain more distance between themselves and a virtual agent that makes eye contact than males; Bailenson et al., 2001; Bayliss et al., 2007). Despite this, gender has seldom been systematically controlled as a factor that could influence how eye gaze affects performance. The experiments reported in Chapter 2 use either a female (Studies 1 and 3) or a male (Study 2) investigator who looks at male and female participants. This experimental setup has the added benefit of permitting an examination of whether gender influences any observed eye-gaze-induced memory effects.

2.1 Study 1: Gender specific memory effects arise from eye contact

It is currently unclear whether socially communicative eye contact helps or hinders memory. To determine this, the studies presented in this chapter manipulated whether an investigator reading words aloud made eye contact with a participant or not, and determined how this manipulation affected word recognition. Past work on the effect of direct eye gaze on performance has yielded mixed results, and so in the initial study it was unclear what direction the results would take. If the investigator's eye contact is helpful when a participant encodes information, then recognition performance should be best for words spoken while the investigator made eye contact. Alternatively, if the investigator's eye contact interferes with encoding, recognition performance should be worse for words spoken while the investigator made eye contact with the participant. Since the gender of both the participant and the gaze cue (e.g., the investigator) has been reported to modulate the effect of gaze on memory, this factor was systematically manipulated across the studies reported in this chapter. In Study 1, the investigator was female.
2.1.1 Method

Participants. Eighty-four undergraduate students from the University of British Columbia (42 males, 42 females) received course credit for participating. All reported speaking English as their first language. All had normal or corrected-to-normal vision and were naive to the purpose of the experiment. Participation was not restricted to a particular cultural or ethnic group, nor limited based on residence in a Western culture. As a result, the participants represented a diverse range of cultural and ethnic backgrounds. Because information about participants' specific ethnicities was not available, analyses relating to ethnicity or cultural differences were not possible in this study or any subsequent studies.

Design. A 2 (Investigator gaze: eye contact and no eye contact) by 2 (gender: male and female) mixed design was used, where investigator gaze was manipulated within participants and gender was a between-participants variable.

Apparatus. E-Prime 2.0 (www.pstnet.com) controlled the timing and presentation of stimuli read aloud by the investigator to the participant and logged response accuracy and RTs. The stimuli were presented on a 17-in. monitor with a 1920 x 1080 pixel resolution.

Stimuli. The stimulus pool consisted of the 120 words from MacDonald and MacLeod (1998). The words were nouns from 5 to 10 letters long, with frequencies greater than 30 per million (Thorndike & Lorge, 1944). The list of words is provided in Appendix A. From the 120 words, three lists containing 40 words each were randomly generated. For a given participant, two lists were selected for study; one list was presented with eye contact and the other list without. The third list was reserved for the recognition test. List selection was counterbalanced across participants such that each word was presented in each of the different conditions (i.e., with eye contact, without eye contact, new words for recognition) an even number of times across participants.

Procedure. Participants learned words for a later memory test. During the initial encoding phase, participants were seated ~40 in. across from a female investigator who read words aloud one at a time. Critically, while the investigator read the words she either looked up to make brief eye contact (less than a second) with the participant or kept her gaze down at the computer screen to avoid eye contact. Eighty words in total were read aloud in random order to the participants, half of which were presented with eye contact and the other half without. For a depiction of the encoding phase experimental setup, please refer to Figure 2.1a.

A laptop screen that was only visible to the investigator indicated when a word was to be read aloud and provided instructions on whether or not to make eye contact with the participant on a given trial. To begin each trial, a blank screen appeared for 1500 ms. Next, the instruction to look up at the participant or look down at the laptop was presented to the investigator. After 1000 ms a word also appeared and remained on screen for 3000 ms. As soon as the word appeared on screen, the investigator would look as instructed, either toward the participant or down at the computer screen, as she read the word aloud. Next, a blank white screen would appear for 500 ms to alert the investigator to the end of the trial. The words and eye contact instructions were randomly intermixed.
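As a concrete illustration of this design, the sketch below shows one way the list counterbalancing and the randomly intermixed encoding trials could be generated. This is a minimal Python sketch only: the actual experiment was implemented in E-Prime 2.0, and the function names, the stand-in word pool, and the rotation scheme are hypothetical.

```python
# Minimal sketch, not the E-Prime implementation used in the studies.
# make_lists, assign_conditions, and build_encoding_trials are hypothetical names,
# and the rotation scheme is one plausible way to realize the counterbalancing.
import random

WORDS = [f"word{i:03d}" for i in range(120)]  # stand-in for the 120-word pool

def make_lists(words, seed=0):
    """Randomly split the word pool into three 40-word lists."""
    rng = random.Random(seed)
    shuffled = words[:]
    rng.shuffle(shuffled)
    return shuffled[:40], shuffled[40:80], shuffled[80:]

def assign_conditions(lists, participant_id):
    """Rotate the three lists through the three conditions across participants,
    so each word serves in each condition equally often."""
    conditions = ("eye_contact", "no_eye_contact", "new")
    rotation = participant_id % 3
    return {conditions[(i + rotation) % 3]: lst for i, lst in enumerate(lists)}

def build_encoding_trials(assignment, seed=0):
    """Randomly intermix the 80 studied words; each trial carries the gaze
    instruction shown to the investigator and the timing from the Method."""
    rng = random.Random(seed)
    trials = [(w, "look up") for w in assignment["eye_contact"]]
    trials += [(w, "look down") for w in assignment["no_eye_contact"]]
    rng.shuffle(trials)
    return [{"word": w, "instruction": instr,
             "blank_ms": 1500, "instruction_lead_ms": 1000,
             "word_on_screen_ms": 3000, "end_blank_ms": 500}
            for w, instr in trials]

assignment = assign_conditions(make_lists(WORDS), participant_id=7)
print(build_encoding_trials(assignment)[0])
```

Rotating the list-to-condition assignment by participant number in this way ensures that, across every three participants, each list serves once in each condition.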
Participants were also instructed to make eye contact with the investigator during the experiment and, if making eye contact was not possible (i.e., the investigator was looking down at the screen), to look at the investigator's eyes. This way, eye contact could be made more easily when the investigator looked up at the participant. If the investigator was unable to make eye contact with a participant consistently on every trial, the participant was excluded (2 participants replaced). The instructional sequence visible to the investigator during the encoding phase is presented in Figure 2.1b.

Once the encoding phase was complete, the investigator would open a recognition test on the laptop and turn the laptop to face the participant. After reading the recognition test instructions to the participant, the investigator would monitor the participant's performance as they completed 4 practice trials (which were excluded from the analysis). Next, the investigator left the participant alone in the room to complete the recognition test. The recognition test contained the words studied with eye contact, the words studied without eye contact, and 40 new words. The test words appeared on a computer screen in white font against a black background and were presented in random order. A fixation cross was presented for 500 ms before each word. When a word appeared, participants were instructed to make a "new" or "old" response for each test word by pressing buttons labeled "New" and "Old" on the keyboard. There was a 500-ms blank interval before each word appeared on screen, and the word offset with the participant's key response. Response accuracy and response times were recorded. All words were rotated through all of the conditions (with eye contact, without eye contact, and new) across participants. The trial sequence used during the recognition phase is presented in Figure 2.1c. Once the recognition task was complete, the participant remained seated until the investigator came back to the room.

Figure 2.1 Depiction of the experimental setup and procedure used in Experiment 1. (a) The arrangement of the investigator, participant and laptop during the encoding phase. (b) The instructional sequence that was visible to the investigator for different trials during the encoding phase. When instructed, the investigator would lift her eyes to make eye contact with the participant as the word to read aloud appeared on screen. The investigator is depicted from the participant's perspective on each trial type. (c) The trial sequence that was presented to the participants during the recognition phase of the experiment.

2.1.2 Results

A two-way mixed ANOVA was conducted on response time (RT), response accuracy (percentage correct), response sensitivity (d prime) and response bias (beta), with investigator gaze (2 levels: with eye contact and without eye contact) as the within-participant factor and participant gender (2 levels: male and female) as the between-participant factor.

RT. Mean RTs are presented in Figure 2.2. There were no main effects of investigator gaze (F(1,82)=.04, MSE=20386.56, p=0.84) or participant gender (F(1,82)=1.08, MSE=229174.88, p=0.30). Nor was there an interaction between investigator gaze and participant gender (F(1,82)=.36, MSE=20386.56, p=.55).

Figure 2.2 RT as a function of Participant Gender (Female versus Male) and Investigator gaze (With eye contact versus Without eye contact).
Note that new words have been plotted in this figure as a reference point but were not included in the analysis. Error bars represent the 95% confidence interval as defined by Masson and Loftus (2003).

Percentage Correct. Analysis of the accuracy data (see Figure 2.3) revealed no main effect of investigator gaze (F(1,82)=0.37, MSE=36.69, p=.55) or participant gender (F(1,82)=0.88, MSE=321.28, p=.35). Critically, there was an interaction between investigator gaze and participant gender (F(1,82)=15.84, MSE=36.69, p<.001), such that female participants recognized more words that were spoken while the investigator made eye contact (79%) than when she did not (75%; t(41)=3.27, SEM=1.31, p<0.005). However, male participants recognized fewer words read while the investigator made eye contact (73%) than when she did not (76%; t(41)=2.37, SEM=1.33, p<0.05).

Figure 2.3 Percentage correct as a function of Participant Gender (Female versus Male) and Investigator gaze (With eye contact versus Without eye contact). Note that new words have been plotted in this figure as a reference point but were not included in the analysis. Error bars represent the 95% confidence interval as defined by Masson and Loftus (2003).

D'. The results mirror the accuracy data, revealing no main effect of investigator gaze (F(1,82)=0.12, MSE=0.05, p=.72) or participant gender (F(1,82)=0.08, MSE=1.11, p=.78), and an interaction between investigator gaze and participant gender (F(1,82)=13.57, MSE=0.05, p<0.001). The one exception is that, in contrast to the accuracy data, male participants' sensitivity did not differ significantly between words presented while the investigator made eye contact (2.12) and words presented while she did not (2.20; t(41)=1.07, SEM=0.08, p=0.29). Critically, female participants were more sensitive to words presented while the investigator made eye contact (2.22) than when she did not (2.08; t(41)=2.99, SEM=0.04, p<0.005).

Figure 2.4 D prime as a function of Participant Gender (Female versus Male) and Investigator gaze (With eye contact versus Without eye contact). Error bars represent the 95% confidence interval as defined by Masson and Loftus (2003).

Beta. Analysis of the beta values (see Figure 2.5) revealed no main effect of investigator gaze (F(1,82)=0.43, MSE=0.31, p=0.51) or participant gender (F(1,82)=0.87, MSE=14.79, p=0.35). There was, however, an interaction between investigator gaze and participant gender (F(1,82)=6.36, MSE=0.31, p<0.05), such that female participants were no more biased on words that were spoken while the investigator made eye contact (2.21) than when she did not (2.37; t(41)=1.10, SEM=0.14, p=0.28). However, male participants responded more conservatively (i.e., with a tendency to indicate a word is "new" rather than "old") on words read while the investigator made eye contact (2.98) than when she did not (2.71; t(41)=-3.02, SEM=0.09, p<0.005).

Figure 2.5 Decision bias (Beta) as a function of Participant gender (Female versus Male) and Investigator gaze (With eye contact versus Without eye contact).

2.1.3 Discussion

The results from this initial experiment demonstrate that females benefit from eye contact on recognition tests; however, this benefit was not observed in males. These findings suggest that eye contact has differential effects on memory for verbal information in males and females.
Given that females are more attentive and responsive to nonverbal behaviour than males (Bailenson et al., 2001; Bayliss, di Pellegrino, & Tipper, 2005; Connellan, Baron-Cohen, Wheelwright, Batki, & Ahluwalia, 2000a; Hall, 1978; Lutchmaya, Baron-Cohen, & Raggatt, 2002; Marschner, Pannasch, Schulz, & Graupner, 2015; Yee, Bailenson, Urbanek, Chang, & Merget, 2007), it is possible that, in the context of the present study, females dedicated more attention to information delivered during eye contact than males, which may have resulted in deeper processing and better retention of words presented with than without eye contact (Craik & Tulving, 1975). Another possibility is that making eye contact with the opposite sex produces higher levels of arousal compared to making eye contact with the same sex (Argyle & Dean, 1965; Donovan & Leavitt, 1980). Research has demonstrated that eye contact elevates physiological arousal (Gale, Kingsley, Brookes, & Smith, 1978; Helminen, Kaasinen, & Hietanen, 2011; Kleinke & Pohlen, 1971; Nichols & Champness, 1971; Wieser et al., 2009), and that high levels of arousal can interfere with performance on similar tasks (Beattie, 1981; Jelicic, Geraerts, Merckhelbach & Guerrieri, 2004; Smeets, Jelicic, Geraerts & Merckhelbach, 2007). While it is possible that eye contact holds one's attention by increasing arousal (i.e., affective arousal theory; Kelley & Gorham, 1988; Mather & Sutherland, 2011; Senju & Johnson, 2009b), eye contact between genders could produce excess arousal and anxiety and thus interfere with memory. Accordingly, male participants may have experienced more arousal than the female participants while making eye contact with the female investigator and, as a result, paid less attention to the words spoken with eye contact. Another way to think about this is that eye contact from a member of the opposite sex could be a source of distraction from the task at hand (Goldfarb et al., 1995; Nemeth et al., 2013). Since the task and processing the eye contact compete for cognitive resources, performance on the cognitive task suffers. These possibilities are examined in Experiment 2.

2.2 Study 2: Gender specific memory effects that arise from eye contact are not driven by investigator gender

In the previous experiment female participants benefited from the investigator's gaze whereas male participants did not. This finding could be attributed to male participants experiencing arousal that interfered with processing when the female investigator made eye contact (Beattie, 1981; Jelicic, Geraerts, Merckhelbach & Guerrieri, 2004; Smeets, Jelicic, Geraerts & Merckhelbach, 2007). It could also be the case that female participants are simply more attentive to the investigator's eye contact than males, irrespective of the investigator's gender (Bailenson et al., 2001; Bayliss et al., 2005; Connellan et al., 2000; Hall, 1978; Lutchmaya et al., 2002; Marschner et al., 2015; Yee et al., 2007). Experiment 2 seeks to distinguish between these possibilities by using a male investigator. If males now benefit from eye contact (and females are possibly hindered by it), this would support the idea that the investigator's gender contributes to the memory effect (vis-à-vis its relation to the participant). However, if the results of Experiment 1 are replicated, then a participant's gender is a critical contributing factor to this effect, a finding that would be consistent with the notion that females interpret non-verbal social cues differently from males.
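Since the sensitivity (d') and bias (beta) measures reported in Study 1 are reused in Studies 2 and 3, a brief sketch of how these signal detection measures are derived from old/new recognition counts may be helpful. This is an illustrative computation only, not the authors' analysis code; the log-linear correction for extreme rates is a common convention and an assumption here.

```python
# Illustrative only: d' and beta from hit and false-alarm counts.
from scipy.stats import norm

def signal_detection(hits, misses, false_alarms, correct_rejections):
    """Return (d', beta). A log-linear correction keeps rates away from 0 and 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa                   # sensitivity: separation of old vs. new
    beta = norm.pdf(z_hit) / norm.pdf(z_fa)  # likelihood-ratio bias; > 1 is conservative
    return d_prime, beta

# Hypothetical participant: 32 of 40 eye-contact words called "old",
# 6 of 40 new words incorrectly called "old".
print(signal_detection(hits=32, misses=8, false_alarms=6, correct_rejections=34))
```

On this convention, a beta above 1 reflects a conservative tendency to call items "new," which is the direction of the bias the male participants showed on eye contact trials in Study 1.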
2.2.1 Method

Participants. Eighty-four undergraduate students from the University of British Columbia (42 males, 42 females) who had not previously participated in Experiment 1 received course credit for participating. All had normal or corrected-to-normal vision and were naive to the purpose of the experiment.

Design, Apparatus, Stimuli, and Procedure. The design, apparatus, stimuli, and procedure were identical to those used in the previous study, with the exception that a male investigator read the words aloud to the participants instead of a female investigator.

2.2.2 Results

Data analysis followed the same procedure that was used in Experiment 1.

RT. Mean RTs are presented in Figure 2.6. There was a marginally significant main effect of participant gender (F(1,82)=3.35, MSE=140584.08, p=.07), such that females (967 ms) were faster to respond than males (1055 ms). No other main effects or interactions were significant (all other F's<1).

Figure 2.6 RT as a function of Participant Gender (Female versus Male) and Investigator gaze (With eye contact versus Without eye contact). Note that new words have been plotted in this figure as a reference point, but were not included in the analysis. Error bars represent the 95% confidence interval as defined by Masson and Loftus (2003).

Percentage Correct. Analysis of the accuracy data (see Figure 2.7) revealed no main effect of investigator gaze (F(1,82)=0.15, MSE=41.95, p=.70) or participant gender (F(1,82)=0.03, MSE=463.44, p=.87). Critically, there was an interaction between investigator gaze and participant gender (F(1,82)=15.22, MSE=41.95, p<.001), such that female participants recognized more words that were spoken while the investigator made eye contact (77%) than when he did not (72%; t(41)=3.68, SEM=1.17, p<0.001). However, male participants recognized fewer words read while the investigator made eye contact (73%) than when he did not (77%; t(41)=2.16, SEM=1.62, p<0.05).

Figure 2.7 Percentage correct as a function of Participant Gender (Female versus Male) and Investigator gaze (With eye contact versus Without eye contact). Note that new words have been plotted in this figure as a reference point, but were not included in the analysis. Error bars represent the 95% confidence interval as defined by Masson and Loftus (2003).

D'. The d prime results (plotted in Figure 2.8) mirror the accuracy data, revealing no main effect of investigator gaze (F(1,82)=0.16, MSE=0.06, p=.69) or participant gender (F(1,82)=0.15, MSE=1.40, p=.70). Critically, there was an interaction between investigator gaze and participant gender (F(1,82)=13.49, MSE=0.06, p<.001). The one exception is that, in contrast to the accuracy data, the finding that male participants were less sensitive to words presented while the investigator made eye contact (2.22) than when he did not (2.34) was only marginally significant (t(41)=1.98, SEM=0.06, p=0.06). Female participants were more sensitive to words presented while the investigator made eye contact (2.25) than when he did not (2.10; t(41)=3.63, SEM=0.04, p<.001).

Figure 2.8 D prime as a function of Participant gender (Female versus Male) and Investigator gaze (With eye contact versus Without eye contact).

Beta. Analysis of the beta values (see Figure 2.9) revealed no main effect of investigator gaze (F(1,82)=0.41, MSE=0.50, p=0.52) or participant gender (F(1,82)=0.66, MSE=9.87, p=0.42).
Nor was there an interaction between investigator gaze and participant gender (F(1,82)=2.01, MSE=0.50, p=0.16).

Figure 2.9 Decision bias (Beta) as a function of Participant gender (Female versus Male) and Investigator gaze (With eye contact versus Without eye contact).

2.2.3 Discussion

The results from Experiment 2 replicate the findings reported in Experiment 1: females benefit from eye contact on recognition tests whereas males do not. Furthermore, the present experiment provides evidence that the memory performance benefits in females and deficits in males observed in Experiment 1 were not driven by the gender of the investigator and its relation to the gender of the participant. These results are also consistent with the notion that females are generally more attentive to gaze cues than males. It is possible that female participants decode the various signals that could be embedded within the investigator's eye contact (e.g., a signal to pay attention, a signal that the investigator is watching you, information about the investigator's mental state, etc.) more readily than males. As a result, females quickly interpret this eye contact as a signal to pay attention, and dedicate more cognitive resources to words presented while the investigator looks at them (e.g., Otteson & Otteson, 1980; Sherwood, 1987). On the other hand, males may find interpreting the investigator's eye contact distracting since it is uninformative to the task, or perhaps they require more cognitive resources than females to process the different social signals. As a result, their performance on the task at hand is impaired.

Another intriguing possibility is that eye contact increases physiological arousal, and that this increase in arousal influences how one attends to the task at hand (Falck-Ytter et al., 2014; Helminen, Tytti, Pasanen, & Hietanen, 2016; Kelley & Gorham, 1988; Mather & Sutherland, 2011; Pönkänen, Alhoniemi, Leppänen, & Hietanen, 2011; Pönkänen, Peltola, & Hietanen, 2011; Senju & Johnson, 2009b). While experiencing a small amount of physiological arousal can be beneficial to task performance, too much can have a detrimental effect (Conty, Russo, et al., 2010; Eysenck, 1982a, 1982b; Guez, Saar-Ashkenazy, Mualem, Efrati, & Keha, 2015; Jarick, Laidlaw, Nasiopoulos, & Kingstone, 2016; Mather & Sutherland, 2011). In the present study, the eye contact experienced may have had differential effects on the physiological arousal of male and female participants. As a result, female participants may have benefited from a little physiological arousal, whereas male participants may have experienced a great deal of physiological arousal, which hindered their task performance. However, in past research, only prolonged eye contact between live people has been sufficient to generate an arousal response (Argyle & Dean, 1965; Helminen, Kaasinen, & Hietanen, 2011; Hietanen et al., 2008; Jarick et al., 2016). If the physiological arousal generated by perceived gaze serves to enhance attention and prioritize subsequent information processing (Kelley & Gorham, 1988; Mather & Sutherland, 2011; Senju & Johnson, 2009), then it is unlikely that the brief amount of eye contact in Studies 1 and 2 would have generated an arousal response sufficient to enhance processing of the items presented with eye contact.
As such, the memory benefits observed in Studies 1 and 2 were most likely a result of eye contact serving as a social cue to pay attention to the upcoming information, rather than of arousal enhancing memory. This is consistent with the idea that eye contact indicates an individual's intent to communicate information or highlights the importance of upcoming information.

2.3 Study 3: Gender specific memory effects that arise from eye contact are not driven by the length of perceived eye contact

In the two previous experiments, the eye contact initiated by the investigator was quite brief (i.e., a quick glance, less than 1 second, up at the participant as the investigator said the word aloud). Although this brief glance may have provided enough time for females to decode the eye contact, it is possible that male participants needed longer periods of eye contact in order to decode this social cue. Indeed, in more natural settings, people tend to engage in eye contact with others for between 1.7 and 3.6 seconds (Argyle & Dean, 1965; Helminen et al., 2011). To determine whether more eye contact can help males decode social cues more effectively, in the present study the investigator made eye contact for a longer period of time (approximately 3 seconds). In order to maximize any effect of prolonged eye contact on the male participants' arousal, whether beneficial or distracting, we used a female investigator. If prolonged durations enable men to use eye contact as a social cue to enhance verbal information processing, then the present study should reduce or eliminate the effects of participant gender on word recognition. A secondary advantage of using prolonged eye contact in the present study is to further investigate the role that arousal may play in enhancing or worsening word recognition through its association with eye contact. If prolonged eye contact generates an optimal arousal response that facilitates information processing, then we might expect eye contact to have a greater beneficial impact on memory than it did in Studies 1 and 2. However, if eye contact generates arousal that distracts or interferes with information processing, then prolonging eye contact may accentuate memory decrements.

2.3.1 Method

Participants. Eighty-four undergraduate students from the University of British Columbia (42 males, 42 females) who had not previously participated in Experiment 1 or 2 received course credit for participating. All had normal or corrected-to-normal vision and were naive to the purpose of the experiment.

Design, Apparatus, Stimuli, and Procedure. The design, apparatus, stimuli, and procedure were identical to those used in Experiments 1 and 2, with the exception that the female investigator made prolonged, rather than brief, eye contact when instructed to look at the participant, to further accentuate any effect of eye gaze. On trials where the investigator made eye contact with the participant, she said the words and made continuous eye contact with the participant for 3000 ms.

2.3.2 Results

Data analysis followed the same procedure as that used in Experiments 1 and 2.

RT. There was a main effect of participant gender (F(1,82)=15.71, MSE=200664.95, p<.001), such that females (1019 ms) were faster to respond than males (1341 ms). No other main effects or interactions were significant (all other F's<1).
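As an aside on the figures: all of the plots in this chapter report within-subject 95% confidence intervals following Masson and Loftus (2003), whose half-width is the critical t for the error degrees of freedom multiplied by the square root of MS_error divided by n. The snippet below is a rough sketch of that computation, not the authors' code, using the MSE and df from the Study 3 accuracy ANOVA as example inputs.

```python
# Hedged sketch of the Masson & Loftus (2003) within-subject confidence interval.
from math import sqrt
from scipy.stats import t

def loftus_masson_ci(ms_error, df_error, n, alpha=0.05):
    """Half-width of the CI around condition means, computed from the
    subject-by-condition error term of the repeated-measures ANOVA."""
    t_crit = t.ppf(1 - alpha / 2, df_error)
    return t_crit * sqrt(ms_error / n)

# Example values taken from the Study 3 accuracy ANOVA (MSE=47.31, df=82, n=84)
print(loftus_masson_ci(ms_error=47.31, df_error=82, n=84))  # ~1.49 percentage points
```

Because the error term comes from the gaze-by-subject interaction, these intervals are appropriate for comparing the within-participant gaze conditions rather than the between-participant gender groups.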
Figure 2.10 RT as a function of Participant Gender (Female versus Male) and Investigator gaze (With eye contact versus Without eye contact). Note that new words have been plotted in this figure as a reference point, but were not included in the analysis. Error bars represent the 95% confidence interval as defined by Masson and Loftus (2003).

Percentage Correct: Analysis of the accuracy data (see Figure 2.11) revealed a main effect of investigator gaze (F(1,82)=5.03, MSE=47.31, p<0.05), such that words presented with investigator eye contact (75%) were recognized more accurately than those presented without (73%). There was no main effect of participant gender (F(1,82)=1.77, MSE=389.94, p=.19). Critically, there was an interaction between investigator gaze and participant gender (F(1,82)=6.37, MSE=47.31, p<.02), such that female participants recognized more words that were spoken while the investigator made eye contact (79%) than when they did not (74%; t(41)=3.45, SEM=1.47, p<0.001). However, male participants were no more likely to recognize words read with (72%) or without investigator eye contact (72%; t(41)=0.19, SEM=1.53, p=0.54).

Figure 2.11 Percentage correct as a function of Participant Gender (Female versus Male) and Investigator gaze (With eye contact versus Without eye contact). Note that new words have been plotted in this figure as a reference point, but were not included in the analysis. Error bars represent the 95% confidence interval as defined by Masson and Loftus (2003).

D': The results (plotted in Figure 2.12) mirror the accuracy data, revealing a main effect of investigator gaze (F(1,82)=7.90, MSE=0.06, p<0.01), such that participants were more sensitive to words presented with investigator eye contact than to those presented without. There was no main effect of participant gender (F(1,82)=3.03, MSE=1.17, p=.09). The one exception is that the interaction between investigator gaze and participant gender was only marginal (F(1,82)=3.10, MSE=0.06, p=0.08).

Figure 2.12 D prime as a function of Participant Gender (Female versus Male) and Investigator gaze (With eye contact versus Without eye contact). Error bars represent the 95% confidence interval as defined by Masson and Loftus (2003).

Beta: Analysis of the beta values (see Figure 2.13) revealed no main effect of investigator gaze (F(1,82)=3.1, MSE=0.3, p=0.1) or participant gender (F(1,82)=0.84, MSE=12.49, p=0.36). Nor was there an interaction between investigator gaze and participant gender (F(1,82)=0.15, MSE=0.3, p=0.7).

Figure 2.13 Decision bias (Beta) as a function of Gender (Female versus Male) and Investigator gaze (With eye contact versus Without eye contact). Error bars represent the 95% confidence interval as defined by Masson and Loftus (2003).

Comparison between Experiments 1-3. An additional follow-up analysis comparing all three experiments was run to reveal any differences (or similarities) in the effects eye contact had on memory in each experiment. A three-way mixed ANOVA was conducted on mean response accuracy with investigator gaze (2 levels: with eye contact and without eye contact) as the within-participant factor and experiment (3 levels: female investigator with brief glance, male investigator with brief glance, and female investigator with prolonged gaze) and participant gender (2 levels: male and female) as between-participant factors (the complete analysis of mean response time, response accuracy, response sensitivity (D prime), and response bias (Beta) is reported in Appendix B).
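As a concrete illustration of the analysis structure used throughout these experiments, the sketch below specifies the basic 2 (investigator gaze, within) x 2 (participant gender, between) mixed ANOVA with the open-source pingouin package. It is a hypothetical reconstruction, not the authors' code: the file and column names are assumptions, and the three-way comparison described above adds experiment as a second between-participant factor, which would require a more general linear-model routine than the single between-factor call shown here.

```python
# A hypothetical sketch of the per-experiment 2 x 2 mixed ANOVA (not the authors'
# code). Assumed long-format columns: subject, gaze ('eye_contact'/'no_eye_contact'),
# gender ('female'/'male'), accuracy (percent correct per subject and condition).
import pandas as pd
import pingouin as pg

df = pd.read_csv("recognition_accuracy.csv")  # hypothetical file name

aov = pg.mixed_anova(data=df, dv="accuracy",
                     within="gaze", between="gender", subject="subject")
print(aov)  # main effects of gaze and gender, plus the gaze x gender interaction

# Follow-up paired t-tests within each gender, mirroring the reported comparisons.
for gender, sub in df.groupby("gender"):
    wide = sub.pivot(index="subject", columns="gaze", values="accuracy")
    print(gender, pg.ttest(wide["eye_contact"], wide["no_eye_contact"], paired=True))
```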
The analysis of the response accuracy data revealed a marginal main effect of investigator gaze (F(1,246)=3.67, MSE=42.00, p=.06), such that participants recognized more words that the investigator said while making eye contact (75%) than when they did not (74%). There was also an interaction between investigator gaze and participant gender (F(1,246)=35.16, MSE=42.00, p<.001), such that female participants recognized more words that were spoken while the investigator made eye contact (78%) than when they did not (74%; t(125)=6.00, SEM=0.76, p<0.001). However, male participants recognized fewer words read while the investigator made eye contact (73%) than when they did not (75%; t(125)=2.67, SEM=0.87, p<0.01). No other main effects or interactions were significant (all F's<1.27).

2.3.3 Discussion

The results of Experiment 3 replicated the finding of Experiments 1 and 2 that females recognized more words, and were more sensitive to words, presented with eye contact than without eye contact. Unlike in the previous studies, male participants' recognition performance was not significantly different in the eye contact and no eye contact conditions. Further, the follow-up analysis comparing all three experiments revealed that the length of eye contact did not modify the effect eye contact had on memory in female participants.

The failure to observe any interaction between experiment and investigator gaze, or between experiment, investigator gaze, and participant gender, suggests that exaggerating eye contact neither helps males encode the non-verbal eye contact cues nor enhances the eye contact benefit observed in female participants (as compared to the female participants in Experiments 1 and 2). These results also suggest that the memory benefits that arise from eye contact in Experiments 1-3 are not mediated by an arousal response, since prolonging the eye contact in the present experiment would have, if anything, increased arousal, which could modify how eye contact affects performance. Instead, these results are consistent with the idea that eye contact provides a signal to pay attention, and that interpreting this social signal is responsible for enhancing information processing in the previous studies. If anything, it seems that lengthening the investigator's eye contact reduced the interference that eye contact caused the males in Studies 1 and 2. By providing the signal for longer, males may have had enough resources to process the eye contact and perform the memory task without these two tasks competing for cognitive resources. Since they did not perform better when the investigator made eye contact than when she did not, it seems that this cue may not have had social relevance for the males.

2.4 General discussion

In the present studies, female participants benefitted from the investigator's gaze more than male participants did on a subsequent memory test. Specifically, female participants recognized more words in a subsequent memory test when the words had been associated with eye contact from an investigator than when they had not. This was true regardless of the investigator's gender (female in Experiments 1 and 3 and male in Experiment 2), and whether the investigator's gaze was a quick glance (as in Experiments 1 and 2) or a prolonged stare (in Experiment 3).
In contrast, male participants showed no benefit of the investigator's gaze on subsequent memory tests when the investigator's gaze was held longer (Experiment 3), and actually recognized fewer words on a subsequent memory test when they were associated with brief eye contact from the investigator (Experiments 1 and 2) relative to when they were not. While these findings suggest that eye gaze can provide females with a useful social cue to pay attention, this is not the case for males.

Since there was minor variation in how the investigator's eye contact influenced males' memory performance, an additional analysis was conducted to compare how the investigator's gaze affected word recognition in the three studies presented in this chapter (reported in section 2.3.2). The failure to find either an interaction between experiment and participant gender or an interaction between experiment, participant gender, and investigator gaze on response accuracy in that analysis indicates that any variation in how the investigator's eye contact influenced male participants' memory performance in Experiments 1-3 is not critical. The key finding is that the male participants' performance, though variable as a function of eye contact across experiments, never improves with eye contact.

These findings converge with a growing body of literature suggesting that males and females attend to social information differently. In comparison to males, females dedicate more attention to social stimuli, such as faces and eyes (Connellan et al., 2000; Lutchmaya et al., 2002a), and more easily decode the nonverbal signals exhibited by others (Hall, 1978; McClure, 2000; Rosenthal et al., 1979). More importantly, previous studies have demonstrated that females are more responsive to eye gaze (Bailenson et al., 2001; Bayliss et al., 2007). These data suggest that females are more sensitive to social signals in general and eye gaze in particular, and thus the investigator's eye contact engaged their attention in the first three experiments (despite being irrelevant to the task).

The general interpretation of the memory effect observed in females is that when eye contact accompanies information, it is interpreted as signaling the intent to communicate information that warrants attention (Csibra & Gergely, 2009; Duncan, 1972; Kampe et al., 2003; Niederehe & Duncan, 1974; Senju & Johnson, 2009a). This is particularly noteworthy since the investigator's eye contact is actually irrelevant to the task at hand. When people communicate with each other and their message is preceded or accompanied by eye contact, their eye contact serves to highlight the most important parts of their message. It would follow that in the present studies, when the female participant sat with the speaker, information spoken with eye contact would be attended to more, and remembered better, than information presented without eye contact.

While female participants noticed and decoded the social signals associated with the investigator's gaze with apparent ease, male participants did not. It is possible that male participants were simply insensitive to the investigator's eye contact and/or did not think the eye contact was an important cue, and dedicated the same amount of attention to words regardless of whether the investigator looked at them or not. However, this explanation seems unlikely given that the investigator's eye contact, if anything, had a negative effect on male participants' performance in Studies 1 and 2.
Moreover, participants were instructed to look at the investigator's eyes throughout the study, so it seems implausible that eye contact with the investigator went unnoticed. Another intriguing possibility is that the males were sensitive to the investigator's eye gaze, but were unable to decode which signal the investigator intended to convey, since eye contact provides a variety of different social signals. While eye contact may signal to pay attention, as previously discussed, eye contact also signals that one is being monitored (Baltazar et al., 2014; Freeth et al., 2013; Guerin, 1986; Hazem, George, Baltazar, & Conty, 2017; Marschner et al., 2015; Nasiopoulos et al., 2015; Pönkänen, Peltola, et al., 2011; Risko & Kingstone, 2011), and facilitates decoding of the emotional and intentional messages of others. Discerning which message is most important and/or appropriate (and requires the most attention) in a given situation may be more challenging for males than for females. In the present context, males may have struggled to dissociate which signal embedded in the investigator's eye contact was most relevant (or, if they did make this distinction, they appear not to have acted on it). As such, processing or actively ignoring the investigator's eye contact may have interfered with the males' ability to pay attention to the information being spoken. This idea converges with the previous finding that the presence of eyes interfered with performance on a Stroop task (Beattie, 1986; Conty, Gimmig, Belletier, George, & Huguet, 2010; Nemeth, Turcsik, Farkas, & Janacsek, 2013), presumably because processing the eyes required the same cognitive resources (e.g., selective attention) used to perform the task. According to this idea, the brief eye contact provided in Studies 1 and 2 was too difficult for the males to decode while simultaneously completing another task. As a result, they were unable to dedicate enough attention to what the investigator said when they were looked at, and their performance suffered. However, the less subtle signal provided in Study 3 did not alter the previous data pattern, undermining the interpretation that males just need a more salient gaze signal for it to yield a performance benefit.

The interpretations discussed here all assume that the investigator's eye gaze is being interpreted (at least by female participants) as a socially communicative cue. While the live interaction between the investigator and the participant ensures that social communication can occur, it is also possible that a non-social cue associated with the investigator's eye gaze could be driving the reported effects. Before concluding that socially communicative aspects of gaze produce these effects, it is important to exclude a non-social interpretation that could account for the facilitatory effect of gaze. This will be addressed in Chapter 3, where the socially communicative aspects of eye gaze will be dissociated from purely perceptual cues by using a video of the investigator instead of a live investigator.

Chapter 3: Do perceived and actual eye contact have different effects on attention and memory? (Study 4)

3.1 Introduction

The previous studies demonstrated that females benefited from an investigator's gaze on a subsequent memory test, whereas males did not.
These gender specific memory effects could be driven by a socially communicative cue embedded in the investigator's eye contact (i.e., when someone looks at you it is a signal to pay attention). According to this idea, females were sensitive to the social cue embedded in the investigator's eye contact and used it to facilitate their performance on the recognition test. However, male participants failed to interpret and apply the investigator's eye contact as a signal to pay attention, and as a result their performance at test was hindered by the investigator's eye gaze.

A different possibility altogether is that nothing socially communicative about the investigator's gaze drove the memory effects observed in the previous studies. For example, these effects could have arisen from observing the investigator shift their gaze up from the computer monitor. In the previous studies, the investigator either kept their eyes on the computer screen while they read a word, or lifted them to make eye contact just before saying a word. Observing just the movement of the eyes up from the computer screen could indicate that a word is about to be spoken, much in the same way the onset of a flashing light at a crosswalk indicates that one should pay attention for pedestrians. There is nothing inherently "social" about either of these cues, but both serve the purpose of a warning cue that informs a participant to increase attention to an upcoming stimulus (i.e., a word or a pedestrian in the latter case). In fact, a variety of perceptual cues (e.g., arrows, flashes in the periphery) are known to generate changes in attention (e.g., Bayliss, di Pellegrino, & Tipper, 2005; Friesen, Ristic, & Kingstone, 2004; Hayward & Ristic, 2015; Hietanen, Nummenmaa, Nyman, Parkkola, & Hämäläinen, 2006; Mulckhuyse & Theeuwes, 2010; Posner, 1980; Ristic, Wright, & Kingstone, 2007; Shin, Marrett, & Lambert, 2011). Given that both perceptual and socially communicative cues were embedded in the live investigator's eye contact in Studies 1, 2, and 3 presented in Chapter 2, it is unclear which cue was actually driving the memory effects observed in the previous experiments.

The aim of Study 4 is to clarify whether socially communicative cues are responsible for the eye gaze related effects observed in the previous experiments. One way to isolate the social aspects of eye gaze from the perceptual ones is to have observers watch a video of the investigator instead of interacting with a live investigator. Numerous studies have demonstrated that people respond differently, both behaviorally and neurologically, when looking at the eye gaze of people presented in images versus actual, physically present people (Hietanen et al., 2008; Itier & Batty, 2009; Laidlaw et al., 2011; Pönkänen, Alhoniemi, et al., 2011; Pönkänen, Peltola, et al., 2011; Risko et al., 2012, 2016b; Schilbach, 2015; Schilbach et al., 2013; Teufel, Fletcher, & Davis, 2010). Furthermore, the eye gaze and gestures of people depicted in images and videos have less influence on the communication (Gullberg & Holmqvist, 2006; Heath & Luff, 1993; Rutter, 1987) and attention (Varao-Sousa & Kingstone, 2015; Wammes & Smilek, 2017) of an observer than they typically would during an encounter with a live person.
Presumably, this is because the people depicted in images and videos cannot see the observer, and therefore their gaze behaviour is not actively communicating with the observer, and vice versa (Argyle, 1981; De Jaegher, Di Paolo, & Gallagher, 2010; Risko, Laidlaw, Freeth, Foulsham & Kingstone, 2012; Schilbach, 2010).

By using a video recording of the investigator in the present study, the socially interactive context that was produced by using a live investigator in Chapter 2 is removed. If the previous findings are replicated, it would suggest that perceptual cues derived from the eye gaze of someone in a video are enough to generate the memory benefits and deficits associated with eye contact, and that a socially communicative context is not required to generate these memory effects. However, eliminating eye gaze related memory effects would be evidence for the idea that perceptual cues are not driving these previously observed effects. Instead, it would suggest that socially communicative eye gaze from an individual that an observer could potentially interact with is required to produce these memory effects.

3.2 Method

Participants. To examine investigator gender as a factor in one experiment rather than in two separate experiments, as was the case in Chapter 2, the sample size was doubled: 168 undergraduate students from the University of British Columbia (84 males, 84 females) who had not participated in any of the previous experiments received course credit for participating. All had normal or corrected to normal vision and were naive about the purpose of the experiment.

Design. A 2 (Investigator gaze: eye contact and no eye contact) by 2 (Participant gender: male and female) by 2 (Investigator gender: male and female) mixed design was used, where investigator gaze was manipulated within participant, and participant gender and investigator gender were between-participant variables.

Apparatus. E-Prime 2.0 (www.pstnet.com) controlled the timing and presentation of stimuli read aloud by the investigator to the participant and logged response accuracy and RTs. The stimuli were presented on a 17-in. monitor with a 1920 x 1080 pixel resolution.

Stimuli. The word stimuli were identical to those reported in the previous experiments; however, the participants now watched a video of the investigator from either Study 1 (female investigator) or Study 2 (male investigator). The videos shown to each participant were recorded by a camera that was placed in front of the investigator, on a tripod adjusted so that the camera was positioned at the investigator's eye level. This position was chosen to simulate the distance, height, and eye level of a participant who would have sat across from the live investigator in the previous experiments. During the recordings, the investigator read the words aloud as in the previous experiments, i.e., when prompted by the laptop, the investigator either looked towards the computer screen or, to simulate eye contact for the viewer, briefly towards the camera lens. A total of 6 different videos were made to ensure that, across participants, each word would appear in each condition evenly. Videos were presented full screen at the recorded resolution (1920 x 1080 pixels) on a 17-in. monitor. Participants were seated approximately 60 cm from the screen. Sound from the videos was played through speakers built into the computer.
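The rotation of words through gaze conditions across the six video versions can be thought of as a simple Latin-square-style assignment. The sketch below is an illustrative reconstruction only; the actual word lists, condition labels, and version structure are not specified by this description.

```python
# A hypothetical sketch of rotating fixed word groups through gaze conditions so
# that, across counterbalancing versions, each word appears in each condition
# equally often. The items, labels, and number of versions are placeholders.
def make_versions(words: list[str], conditions: list[str], n_versions: int):
    group_size = len(words) // len(conditions)
    groups = [words[i * group_size:(i + 1) * group_size]
              for i in range(len(conditions))]
    versions = []
    for v in range(n_versions):
        shift = v % len(conditions)
        rotated = conditions[shift:] + conditions[:shift]  # Latin-square rotation
        versions.append({cond: group for cond, group in zip(rotated, groups)})
    return versions

words = [f"word{i:02d}" for i in range(60)]  # placeholder item pool
for i, version in enumerate(make_versions(words, ["eye_contact", "no_eye_contact"], 6)):
    print(i, {cond: ws[:2] for cond, ws in version.items()})  # first items per condition
```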
Procedure. The procedure was identical to that used in Experiments 1 and 2, with the exception that a participant was first assigned to watch a video of either a male or a female investigator saying the words out loud. Participants were also instructed to look at the investigator's eyes throughout the experiment.

3.3 Results

A three-way mixed ANOVA was conducted on response time (RT), response accuracy, response sensitivity (d prime), and response bias (beta) with investigator gaze (2 levels: eye contact and no eye contact) as the within-participant factor and participant gender (2 levels: male and female) and investigator gender (2 levels: male and female) as the between-participant factors.

RT. There was a main effect of investigator gender (F(1,166)=13.47, MSE=214475.60, p<.001), such that participants were faster to respond with the male investigator (1018 ms) than with the female investigator (1203 ms). No other main effects or interactions were significant (all other F's<1).

Figure 3.1 RT as a function of Investigator Gender (Female versus Male), Participant Gender (Female versus Male) and Investigator gaze (With eye contact versus Without eye contact). Note that new words have been plotted in this figure as a reference point, but were not included in the analysis. Error bars represent the 95% confidence interval as defined by Masson and Loftus (2003).

Percentage Correct: Analysis of the accuracy data (see Figure 3.2) revealed a main effect of participant gender (F(1,166)=4.43, MSE=542.79, p<0.05), such that female participants were more accurate (72%) than male participants (66%). No other main effects or interactions were significant (all other F's<1.4).

Figure 3.2 Percentage correct as a function of Investigator Gender (Female versus Male), Participant Gender (Female versus Male) and Investigator gaze (With eye contact versus Without eye contact). Note that new words have been plotted in this figure as a reference point, but were not included in the analysis. Error bars represent the 95% confidence interval as defined by Masson and Loftus (2003).

D': No main effects or interactions were significant (all F's<1.8).

Figure 3.3 D prime as a function of Investigator Gender (Female versus Male), Participant Gender (Female versus Male) and Investigator gaze (With eye contact versus Without eye contact). Error bars represent the 95% confidence interval as defined by Masson and Loftus (2003).

Beta: Analysis of the beta values revealed an interaction between investigator gender and participant gender (F(1,166)=4.89, MSE=10.47, p<0.05). No other main effects or interactions were significant (all other F's<1.8). Two independent-samples t-tests using participant gender as a factor were conducted separately for the male and female investigators. When the male investigator spoke words aloud, female participants (1.57) were less conservative than male participants (2.83; t(82)=2.62, SEM=0.48, p<.05). However, there was no significant difference in how conservatively female (2.69) and male participants (2.39) responded when the female investigator spoke words aloud (t(82)=0.59, SEM=0.52, p=.56).

Figure 3.4 Decision bias (Beta) as a function of Investigator Gender (Female versus Male), Participant Gender (Female versus Male) and Investigator gaze (With eye contact versus Without eye contact). Error bars represent the 95% confidence interval as defined by Masson and Loftus (2003).

Comparison between live and videotaped investigators.
To reveal any difference in the memory effects generated by the eye gaze of a live investigator (in Studies 1 and 2 in Chapter 2) and a videotaped investigator, a four-way mixed ANOVA was conducted on response accuracy with investigator gaze (2 levels: with eye contact and without eye contact) as the within-participant factor and investigator presence (2 levels: live and videotaped), investigator gender (2 levels: male and female), and participant gender (2 levels: male and female) as between-participant factors (for a report of the full analysis, please refer to Appendix C). The analysis of the accuracy data revealed a main effect of investigator presence (F(1,328)=13.43, MSE=467.57, p<0.001), such that participants recognized more words said by an in-person investigator (75%) than by a videotaped investigator (69%). Critically, there was a three-way interaction between investigator gaze, investigator presence, and participant gender (F(1,328)=15.88, MSE=44.75, p<.001). When the investigator was in-person, there was an interaction between investigator gaze and participant gender (F(1,166)=31.38, MSE=38.85, p<0.001), such that female participants recognized more words that were spoken while the investigator made eye contact (78%) than when they did not (74%; t(83)=4.91, SEM=0.87, p<0.001). However, male participants recognized fewer words read while the investigator made eye contact (73%) than when they did not (76%; t(83)=3.19, SEM=1.04, p<0.005). In contrast, the same analysis for the videotaped investigator, presented in section 3.3, yielded no effect of investigator gaze, nor an interaction between investigator gaze and participant gender. These findings indicate that the failure to observe any memory effects in Study 4 arose because eye gaze from a live investigator is fundamentally different from the eye gaze of a videotaped investigator.

3.4 Discussion

In Study 4, there was no evidence that eye gaze displayed over video influenced memory. This was true regardless of the investigator's or participant's gender. This stands in direct contrast with the studies in Chapter 2, which demonstrated that eye contact from a live investigator improved memory in female participants and reduced memory in male participants. Taken together, the results demonstrate that people attend to the eye gaze of those they interact with in real life differently than to the eye gaze of people depicted in images.
In the live setting, both the investigator and the participant can convey information and observe information through their eyes (Baltazar et al., 2014; Conty, George, & Hietanen, 2016; Gobel, Kim, & Richardson, 2015; Hazem et al., 2017; Jarick & Kingstone, 2015; Myllyneva & Hietanen, 2015, 2016; Nasiopoulos, et al., 2015; Risko & Kingstone, 2015; Risko et al., 2012; Risko, Richardson, & Kingstone, 2016). Because of this, the live investigator's eye contact conveys socially communicative information (e.g., that the participant can be seen) that is not present when the investigator is presented over video. This notion is also supported by recent research showing that social centers in the brain are more activated when observing live people than when viewing images of people (Hietanen et al., 2006; Pönkänen, Alhoniemi, et al., 2011; Pönkänen, Peltola, et al., 2011; Schilbach, 2015; Schilbach et al., 2010). Presumably, regions that are devoted to processing social information are recruited for processing any additional social information conveyed 71  through a live interaction. Likewise, these regions are not required as heavily when a person is presented over video since this social information is either absent or irrelevant (i.e., we choose to ignore these signals when someone is presented over video because we infer that the depicted individual has no intention to engage in a reciprocal encounter with us; Pönkänen, Peltola, & Hietanen, 2011). The notion above presents a challenge to researchers who have generally assumed that using images enables them to study social aspects of eye gaze present in real life with real people. However, if this assumption is misplaced, and indeed more and more research is suggesting this assumption may be, then there could be broad reaching implications as researchers have been using images to study the social effects of eye gaze for decades.  Moving forward, it is still unclear which social signals communicated by the live investigator are being interpreted by participants. Our findings are consistent with the idea that eye contact provides a social signal to pay attention, which results in memory benefits for information communicated with eye contact. However, there are many non-verbal social cues conveyed during a live encounter that can influence the way an observer pays attention. In fact, some research might suggest that head movements, rather than eye movements, are more important in eliciting attentional shifts in more natural contexts (Emery, 2000; Tomasello, Call, & Hare, 1998; Tomasello et al., 2005). Even though we have attributed the effects in Studies 1, 2, and 3 (Chapter 2) to a social signal conveyed by eye gaze, it is possible that a different social signal that is also associated with the investigators’ gaze could be driving these effects. In Studies 1, 2 and 3, the investigator would keep their gaze down at the computer screen until they read the instruction to make eye contact with the participant. After reading the instruction, they would lift their head to make eye contact with the participant. Thus, an investigator’s eye contact was preceded by a head lift which could indicate a) that the eye contact is coming which gives the participant a (literal) 72  "heads up" or b) simply be a social cue that is informative enough to suggest the participant pay attention. 
Studies 5 and 6 in Chapter 4 seek to clarify whether it was the investigator's eye contact, or a general social cue associated with the eye contact, that generated the memory benefits observed in the female participants in Chapter 2.

Chapter 4: Does eye contact or a general signal associated with eye contact signal when to pay attention?

4.1 Study 5: Eye contact, but not head lifts, facilitates memory task performance

In Chapter 2, females benefited from the eye gaze of a live investigator on a subsequent memory test. The working hypothesis is that these memory benefits were driven by social signals associated with the live investigator's eye gaze direction. In Chapter 3, the possibility that a non-socially communicative cue may have driven the memory benefit observed in Chapter 2 was tested by using a video of an investigator rather than a live investigator. Any gaze related memory benefits disappeared when the investigator was presented over video, which lends further support to the idea that a socially communicative cue (rather than something non-communicative) was driving these effects. The collective interpretation of the findings from Chapters 2 and 3 is that a live investigator's eye contact is interpreted as a signal that the spoken information is important (Csibra & Gergely, 2009; Duncan, 1972; Kampe et al., 2003; Niederehe & Duncan, 1974; Senju & Johnson, 2009a). According to this view, female participants interpret this signal and dedicate more attention to the information communicated while eye contact is made. As a result, their performance on subsequent memory tests improves for information presented with eye contact.

However, it is still unclear whether the memory effects that have previously been reported (both in Chapter 2 and in the previous literature) were due to the investigator's eye contact enhancing memory, or to the investigator's gaze avoidance reducing memory. Recall that in previous research, listeners watched a speaker who never made eye contact with any listener in an audience, or one who periodically made eye contact with some undefined listeners (Fullwood & Doherty-Sneddon, 2006; Otteson & Otteson, 1979; Sherwood, 1987). Even in instances where a single listener was present (in the studies in Chapter 2 and in Helminen, Pasanen, & Hietanen, 2016), it is possible that the listener's memory was improved for information spoken while making eye contact with the speaker, and it is also possible that the listener had worse memory for information presented while the speaker avoided eye contact. This reduction in memory could be due to the observer feeling excluded by the speaker (a possibility considered but not addressed by Fullwood & Doherty-Sneddon, 2006) or because the speaker's gaze directs the observer's attention elsewhere. In fact, the studies presented in Chapter 2 could not distinguish these two possibilities because there was no adequate baseline (i.e., a condition where the speaker could make eye contact with someone other than the participant).

The present study sought to distinguish whether the memory performance of female participants is enhanced by the investigator's eye contact and/or reduced when the investigator looks elsewhere (i.e., does not make eye contact).
Accordingly, in the present study, and in all subsequent studies (with the sole exception of Study 9), only female participants were tested, as the research goal is to examine the variables responsible for the memory benefit associated with the act of making eye contact, and this effect has been limited to female participants. A memory enhancement due to eye contact was dissociated from a memory decrement due to gaze avoidance by having an investigator read words aloud while alternating between making eye contact with one of two participants and making eye contact with neither participant. If performance improves for information spoken while the investigator looks at a participant relative to when the investigator looks at no one, then this would support the idea that eye contact improves memory. However, if a participant's performance worsens for words spoken while the investigator looks at their partner relative to when no one is looked at, then this would suggest that a speaker's gaze avoidance impairs memory. Note that these two possibilities are not mutually exclusive. It is possible that eye contact and gaze avoidance operate simultaneously to affect memory, in which case the memory effects reported in the previous literature could reflect a combination of both social signals.

An additional aim of this study is to determine which social signal drove the memory effects in Chapter 2. There are many social signals communicated during a live social encounter that might affect how someone pays attention. For example, in Chapter 2 there were two non-verbal socially communicative cues, non-directional head lifts and eye contact, which may have signaled when an individual should pay attention to a particular word. During the experiments, the investigator either kept their eyes on the computer screen while they read a word, or lifted their head to make eye contact just before saying a word. Observing the investigator's head lift up from the computer screen could signal to the participants that a word is about to be spoken, leading the participant to pay attention. This is a particular concern given that other non-verbal cues (e.g., eyebrow movements) can serve to highlight the importance and memorability of information during a conversation (Ekman, 1979; Whittaker & O'Conaill, 1997). As such, it is unclear whether experiencing mutual eye contact caused the memory benefit or whether simply observing a head lift at the beginning of eye contact trials signaled the importance of subsequent information, thereby enhancing memory.

The present study therefore also sought to distinguish whether experiencing mutual eye contact or observing a head lift led to the previously observed memory benefits in females. A head lift was dissociated from eye contact by having the investigator lift her head on every eye contact trial, but direct her eyes to only one of the two participants. If a head movement drove the previously observed memory benefits, then memory for a word would improve anytime the investigator lifted their head while saying the word, regardless of whether the investigator made eye contact with the participant or their partner. However, if eye contact drove the memory benefits in the previous studies, then memory performance would only improve for words spoken while the investigator made eye contact with a particular participant, but not when the investigator looked at their partner.

4.1.1 Method
Participants. Forty-eight female undergraduate students from the University of British Columbia received course credit for participating. All had normal or corrected to normal vision and were naive about the purpose of the experiment.

Stimuli and Apparatus. The stimuli and apparatus were the same as in the previous experiments, except that the stimulus pool now consisted of only 108 words from the Appendix of MacDonald and MacLeod (1998). To accommodate reading words to both participants, the same words were used, but they were now divided to be presented in three gaze conditions (i.e., participant, other participant, or screen) instead of two (i.e., participant or screen). To create an equal number of words for each condition, 12 words were randomly removed from the original list. Two identical laptops were also used to present words during the recognition test separately to each participant.

Procedure. Two participants learned words for a later memory test. In the initial study phase, both participants were seated beside each other, ~10 in. apart, across from a female investigator who read words aloud individually. While the investigator read each word, she either looked up and made eye contact briefly with the participant on her left, did the same with the participant on her right, or kept her gaze down at the computer screen to avoid eye contact. Note that the investigator lifted her head and then moved only her eyes to make eye contact, which allowed us to dissociate the movement of the head from eye contact. Fifty-four words in total were read aloud in random order to the participants. One third of the words were presented while making eye contact with the participant, another third while making eye contact with their partner, and the last third while looking down at the computer screen. For a depiction of the encoding phase experimental setup, please refer to Figure 4.1a. All items were rotated through all of the gaze conditions across participants.

A laptop that was visible only to the investigator indicated when a word should be read aloud and provided instructions on whether to make eye contact with the participant on the left or right, or to look down at the computer screen on a given trial. First, an instruction to look towards the participant on the left, the participant on the right, or the computer screen was presented. After 1000 ms, a word also appeared and remained on screen for 2000 ms. While the word was on screen, the investigator read the word aloud while either making eye contact with one of the participants or looking down at the computer screen. Finally, a blank screen was presented for 500 ms before the next instruction was presented. The words and eye contact instructions were randomly intermixed. Participants were instructed to make eye contact with the investigator during the experiment and, if making eye contact was not possible (i.e., when the investigator was looking down at the screen or at the other participant), to look at the investigator's eyes. Participants who did not consistently direct their gaze toward the investigator while the words were being spoken were excluded. The instructional sequence visible to the investigator during the encoding phase is presented in Figure 4.1b.

Once the encoding phase was complete, the participants both completed a recognition test in the same room, at the same time, on separate laptops. One laptop was located on a table behind both participants.
This laptop was set up to run the recognition test before the experiment began and remained closed throughout the encoding phase of the experiment. The investigator would open the laptop located behind the participants to display the recognition test and ask one participant to take a seat at this laptop. While that participant took their new seat, the investigator would open the recognition test on the laptop used during the encoding phase, and then turn this laptop to face the participant who stayed in their original seat. Note that across participants, the investigator alternated whether the participant on the right or left stayed in their original seat. Once both participants were seated and both laptops displayed the recognition test screen, the investigator read the instructions to the participants. The investigator then monitored the participants' performance as they completed 4 practice trials (which were excluded from the analysis). After the practice trials, the investigator left the participants alone in the room to complete the recognition test. The recognition test was the same as in previous experiments, except that it now contained the 18 words studied with eye contact, the 18 words studied while the investigator made eye contact with the other participant, the 18 words studied while the investigator looked at the screen, and 54 new words. The trial sequence used during the recognition phase is presented in Figure 4.1c. First, a fixation cross was presented for 500 ms. Next, the word appeared on screen until the participant indicated by button press whether the word had been previously presented or not. Afterwards, a blank screen appeared for 500 ms before the next trial began. After finishing the recognition task, participants remained seated until the investigator returned to their room.

Figure 4.1 The depiction of the experimental setup and procedure used in Study 5. (a) The arrangement of the investigator, participants and laptop during the encoding phase. In this example, the investigator is depicted looking at participant A, participant A's partner (i.e., Participant B), or the laptop screen. Note that during the actual experiment, participant A experiences eye contact with the investigator when the investigator looks at them on a given trial, while participant B simultaneously sees the investigator make eye contact with their partner. The reverse is also true: by looking at participant B, the investigator makes eye contact with participant B, and gives participant A the impression that the investigator is making eye contact with their partner. (b) The instructional sequence that was visible to the investigator during the encoding phase. When prompted to make eye contact, the investigator made eye contact as soon as the word appeared on their screen. (c) The trial sequence that was presented to the participants during the recognition phase of the experiment.

4.1.2 Results

A one-way within-subjects ANOVA was conducted on response time (RT), response accuracy, response sensitivity (d prime), and response bias (beta) with investigator gaze (3 levels: participant, partner, and screen) as a variable.

RT. Mean RTs are presented in Figure 4.2. There was no main effect of investigator gaze (F(2,94)=.56, MSE=69422.73, p=0.57).

Figure 4.2 RT as a function of Investigator gaze (Participant, Partner, Screen). Note that new words have been plotted in this figure as a reference point but were not included in the analysis.
Error bars represent the 95% confidence interval as defined by Masson and Loftus (2003).

Percentage Correct: Analysis of the accuracy data (see Figure 4.3) revealed a main effect of investigator gaze (F(2,94)=12.57, MSE=86.01, p<0.001). Three planned two-tailed repeated-measures t-tests revealed that participants recognized more words that were previously spoken while the investigator looked at them (79%) than when the investigator looked at the other participant (70%; t(47)=4.64, SEM=2.05, p<0.001) or at the computer screen (74%; t(47)=2.76, SEM=1.72, p<0.01). Participants also recognized fewer words that were previously spoken while the investigator looked at their partner than when the investigator looked down at the computer screen (t(47)=-2.5, SEM=1.9, p<0.05).

Figure 4.3 Percentage correct as a function of Investigator gaze (Participant, Partner, Screen). Note that new words have been plotted in this figure as a reference point but were not included in the analysis. Error bars represent the 95% confidence interval as defined by Masson and Loftus (2003).

D': The results mirror the percentage correct data. Analysis of the sensitivity data (see Figure 4.4) revealed a main effect of investigator gaze (F(2,94)=12.57, MSE=86.01, p<0.001). Three planned two-tailed repeated-measures t-tests revealed that participants were more sensitive to words that were previously spoken while the investigator looked at them (1.91) than when she looked at the other participant (1.62; t(47)=3.99, SEM=0.73, p<0.001) or at the computer screen (1.76; t(47)=2.25, SEM=0.07, p<0.05). Participants were also less sensitive to words that were previously spoken while the investigator looked at their partner than when she looked down at the computer screen (t(47)=2.19, SEM=0.07, p<0.05).

Figure 4.4 D prime as a function of Investigator gaze (Participant, Partner, Screen). Error bars represent the 95% confidence interval as defined by Masson and Loftus (2003).

Beta: Analysis of the beta values (see Figure 4.5) revealed a marginal main effect of investigator gaze (F(2,94)=2.67, MSE=0.15, p=0.08).

Figure 4.5 Decision bias (Beta) as a function of Investigator gaze (Participant, Partner, Screen). Error bars represent the 95% confidence interval as defined by Masson and Loftus (2003).

4.1.3 Discussion

The present study demonstrates that memory performance improved for information presented when the investigator lifted her head and made eye contact with a participant, relative to when the investigator looked at the screen or made eye contact with the participant's partner. This finding supports the previous conclusion that the memory effects are driven by a social signal associated with the live investigator's eye gaze direction. It is also consistent with the idea that mutual eye contact, but not observing a head lift, signals when a particular participant should pay attention to the words being spoken (Csibra & Gergely, 2009; Kampe et al., 2003; Senju & Johnson, 2009). As such, participants pay more attention to words when eye contact is made with them than when eye gaze is directed at something or someone else.

Importantly, there was also a memory cost when the participant saw the investigator look at someone else relative to when neither participant was looked at (i.e., when the investigator looked at the screen). This is consistent with the idea that a speaker's gaze aversion can also reduce memory for what a speaker says.
It also lends further support to the conclusion that the signal conveyed through eye gaze that produces these memory effects is socially communicative. The joint interpretation of these two findings is that eye contact communicates that a message is important, but only for a specific person (Csibra & Gergely, 2009). A speaker's eye contact serves to clarify whom the speaker is trying to communicate with, i.e., the message is intended only for the person who is being looked at. So, a listener is very attentive to a speaker who looks at them. However, a listener is less inclined to pay attention to a speaker who looks at someone else, since they infer that the speaker's eye gaze communicates that the message is not intended for the non-gazed-at listener and is meant for the person being looked at instead. In an ambiguous situation, when a speaker is not looking at anyone, it is not clear who the message is for. Under such circumstances, a listener appears to be less attentive than when the speaker makes eye contact with them, but more attentive than when the speaker's gaze is directed at someone else. In essence, when two people are present, an investigator's eye contact appears to include one in the interaction and exclude the other.

This idea gives rise to the question of whether a live investigator's eye contact can help to include someone who is not physically present with the investigator. When in-person interactions are not possible, video-conferencing has become a popular way of interacting with one another. Video-conferencing is considered a somewhat richer form of communication than alternatives such as emails or phone calls, since it enables people to see each other as they communicate. During a live video-conference, non-verbal communicative signals, such as simulated eye contact, could help facilitate communication, perhaps by alleviating the exclusion one might feel if one were unable to see these communicative signals. In Study 4 (Chapter 3), the memory benefits associated with eye contact disappeared when an investigator appeared over video, presumably because removing the interactive context stripped away the socially communicative signals. However, both the socially communicative setting and the physical presence of the investigator were removed simultaneously in that study. If an interactive context were maintained, for example through a video conference call, an investigator's eye gaze could be socially communicative even though the investigator is not physically present.

An interesting question that remains to be tested is whether participants need to be physically present during a live encounter for an investigator's eye gaze to influence their memory. Just by being in the room with the investigator, the participants are included in an interaction where they can easily observe where the investigator looks, assess who the investigator is speaking to (i.e., to them, their partner, or both of them), and use this information to attend accordingly. However, when a participant is not in the same room as the investigator (i.e., communicating via video conference over skype), they are, in a sense, physically excluded from the interaction. It could be that when a live investigator appears over skype, participants will continue to interpret and use the investigator's eye gaze as an attentional cue. Alternatively, a live investigator presented over skype may be perceived as if they were a pre-recorded video.
If that were the case, the investigator's eye gaze would not affect how the participants pay attention to what the investigator says. In sum, presenting an investigator over skype can clarify whether participants must be physically present during a live interaction in order to experience the memory benefits and deficits associated with the investigator's eye gaze.

4.2 Study 6: Eye contact over skype improves memory and social exclusion hinders it

The experiment presented in Chapter 3 demonstrated that a live setting is critical for producing eye gaze related benefits, since the memory benefits observed with an in-person interaction (Chapter 2) disappeared when participants viewed a video of an investigator. However, by using a video of the investigator, both the investigator's physical presence and the live interaction were removed simultaneously. As such, it is unclear whether eye gaze related memory benefits (and deficits) would persist during a live interaction if the investigator were not physically present in the same room as the participant. To investigate this idea, the current study presents a live investigator over skype to two participants, who are each seated in a different room. Critically, both participants believe that their partner is in the room with the investigator and that only they are isolated in a different room. If the investigator's physical presence is critical for producing gaze related memory effects, then neither a memory benefit nor a deficit should be observed in the present study. However, if only the live setting, and not the investigator's physical presence, is critical for generating gaze related memory effects, then the memory effects observed in the previous study should also be expressed in the present investigation.

4.2.1 Method

Participants. Thirty-six female undergraduate students from the University of British Columbia received course credit for participating. The data from twelve participants were omitted prior to the analysis due to technological difficulties (8) or a participant's belief about the location of the second participant (4). All had normal or corrected to normal vision and were naive about the purpose of the experiment.

Stimuli, Apparatus and Procedure. The stimuli, apparatus and procedure were the same as those used in the previous study, with the exception that both participants now viewed the investigator over skype instead of in person. Each participant sat in a separate room, ~80 cm in front of a 24-inch monitor set at a resolution of 1920 by 1200. The screen displayed the investigator reading the words aloud in real time over skype. Before the participants were escorted to their rooms, the investigator ensured that skype and the recognition task were both open on each of their computers. The recognition task was then minimized to the task bar so that the participant would only see the skype screen when they sat down. As each participant was seated, they were told that the investigator would be sitting in a nearby room with another participant who was already seated, and that they would see the investigator over skype. Participants were instructed to look at the investigator's eyes throughout the experiment as though they were trying to make eye contact with the investigator, even when the investigator was not looking at the participant.
Then, the investigator sat in a room separate from both participants, across from two laptops that were placed where the participants had been seated in the previous experiment (see Figure 4.6 for the experimental setup during the encoding phase). Each laptop provided a view of the investigator to one participant (i.e., either participant A or B). To give the impression that the investigator was looking at a particular participant, the investigator looked into the camera of a laptop to simulate eye contact. The investigator had a third laptop which provided instructions, identical to the previous experiment, indicating whether to look at the camera of the laptop on their left (simulating eye contact for participant A in Figure 4.6), at the camera of the laptop on their right (simulating eye contact for participant B in Figure 4.6), or down at the laptop screen directly in front of them. Thus, each participant could see the investigator look at them, at their partner, or at the laptop screen in front of the investigator. Before beginning the encoding phase, the investigator asked whether the participants had any questions, intentionally looking into the camera of each participant's laptop to simulate eye contact after asking. If a participant did have a question, the investigator would simulate eye contact with that participant while listening to the question and while responding to it. Once both participants had verbally confirmed that they had no more questions, the encoding phase would begin. This is important to note, since this verbal confirmation provided evidence to both participants that there was a second person participating in the study.

Figure 4.6 The depiction of the experimental setup and procedure used in Study 6. (a) The arrangement of the investigator, participants and laptops during the encoding phase. In this example, the investigator is depicted looking at participant A, participant A's partner (i.e., Participant B), or the laptop screen.

Once the encoding phase was over, the investigator would explain the recognition task to the participants over skype. After confirming that both participants understood the instructions, the investigator would say, "If you are joining us on skype, please close skype and open up the program that is minimized on the task bar. I will come to your room shortly to make sure you are set up properly," while looking away from both participants (i.e., down at their laptop screen). Next, the investigator would close the skype video conversation and go to check that each participant had successfully opened the recognition task. Note that across participants, the investigator alternated whether they checked on participant A or participant B first. The recognition task itself was identical to the previous experiment. When the participants had finished the recognition task, the investigator debriefed each participant separately. After explaining that both participants had participated in the experiment over skype, the investigator would ask whether the participant had believed that their partner was in the room with the experimenter. Two participants said "no", and both their data (2) and their partner's data (2) were discarded. Participants were also asked whether they noticed any glitches or technical difficulties (i.e., the screen froze, the video connection failed, etc.) while the experimenter was reading words to them. Four participants said "yes", and both their data (4) and their partner's data (4) were excluded from the analysis.
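As in the earlier experiments, recognition performance below is summarized both as percentage correct and as the signal-detection measures d prime (sensitivity) and beta (decision bias). For reference, the sketch below shows one standard way these quantities are computed from raw counts; it is a minimal illustration under stated assumptions (including the correction for extreme rates and the example counts), not the analysis code used in this thesis.

```python
# A minimal sketch (not the authors' code) of standard signal-detection measures:
# d' = z(hit rate) - z(false-alarm rate); beta is the likelihood ratio at the
# criterion, with values above 1 indicating a conservative bias. A small
# correction keeps rates off 0 and 1, where the z-transform is undefined.
from scipy.stats import norm

def dprime_beta(hits: int, misses: int, false_alarms: int, correct_rejections: int):
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z_h, z_f = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_h - z_f
    beta = norm.pdf(z_h) / norm.pdf(z_f)
    return d_prime, beta

# Hypothetical counts: 14 of 18 old words in one gaze condition endorsed "old",
# and 10 false alarms on the 54 new words.
print(dprime_beta(hits=14, misses=4, false_alarms=10, correct_rejections=44))
```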
4.2.2 Results

A one-way within-subjects ANOVA was conducted on response time (RT), response accuracy, response sensitivity (d prime), and response bias (beta) with investigator gaze (3 levels: participant, partner, and screen) as an independent variable.

RT. Mean RTs are presented in Figure 4.7. There was no main effect of investigator gaze (F(2,70)=0.47, MSE=27427.35, p=0.63).

Figure 4.7 RT as a function of Investigator gaze (Participant, Partner, Screen). Note that new words have been plotted in this figure as a reference point but were not included in the analysis. Error bars represent the 95% confidence interval as defined by Masson and Loftus (2003).

Percentage Correct: Analysis of the accuracy data (see Figure 4.8) revealed a main effect of investigator gaze (F(2,70)=5.27, MSE=97.09, p<0.01). Three planned two-tailed repeated measures t-tests revealed that participants recognized more words that were previously spoken while the investigator looked at them (72%) than when they looked at the other participant (65%; t(35)=3.7, SEM=1.97, p<0.001) or at the computer screen (67%; t(35)=2.26, SEM=2.39, p<0.05). However, unlike the previous study, there was no significant difference in the recognition of words that were previously spoken while the investigator looked at the screen or at the participant’s partner (t(35)=0.72, SEM=2.57, p=0.48).

Figure 4.8 Percentage correct as a function of Investigator gaze (Participant, Partner, Screen). Note that new words have been plotted in this figure as a reference point but were not included in the analysis. Error bars represent the 95% confidence interval as defined by Masson and Loftus (2003).

D’: The results mirror the accuracy data. Analysis of the sensitivity data (see Figure 4.9) revealed a main effect of investigator gaze (F(2,70)=4.86, MSE=0.11, p<0.05). Three planned two-tailed repeated measures t-tests revealed that participants were more sensitive to words that were previously spoken while the investigator looked at them (1.75) than when they looked at their partner (1.52; t(35)=3.4, SEM=0.07, p<0.005) or at the screen (1.57; t(35)=2.11, SEM=0.08, p<0.05). However, unlike the previous study, there was no significant difference in how sensitive participants were to words that were previously spoken while the investigator looked at the computer screen or at their partner (t(35)=0.66, SEM=0.08, p=0.51).

Figure 4.9 D prime as a function of Investigator gaze (Participant, Partner, Screen). Error bars represent the 95% confidence interval as defined by Masson and Loftus (2003).

Beta: Analysis of the beta values (see Figure 4.10) revealed no main effect of investigator gaze (F(2,70)=0.35, MSE=0.25, p=0.71).

Figure 4.10 Decision bias (Beta) as a function of Investigator gaze (Participant, Partner, Screen). Error bars represent the 95% confidence interval as defined by Masson and Loftus (2003).

4.2.3 Discussion

In the present experiment, a participant’s memory was best for words communicated when the investigator simulated eye contact over skype. This suggests that an investigator’s eye contact is beneficial in live settings, regardless of whether the participant is in the room with the investigator or not (i.e., over skype). This finding also stands in contrast to the finding in Chapter 3 that participants’ memory for words did not improve when a pre-recorded investigator simulated eye contact over video as they spoke. Presumably, when presented over skype, an investigator’s eye contact serves to include a participant in the social exchange.
In person, participants interpreted an investigator’s eye contact as specifying who the investigator was intending to communicate with when more than one person was present (Csibra & Gergely, 2009; Senju & Csibra, 2008; Csibra, 2010; Szufnarowska et al., 2015).

While there was a memory benefit when the participant was looked at, there was a cost for words spoken anytime the investigator looked away from the participant over skype. This performance cost was similar regardless of whether the investigator was looking towards another person or at their computer screen as she spoke. This finding stands in contrast to the observation in the previous experiment that memory is hindered more during an in-person interaction when the investigator looked at the participant's partner than when they looked at neither participant. Thus, a participant’s physical presence does influence whether an investigator's eye contact with another person will hinder memory performance. It is possible that social exclusion is experienced over skype anytime the investigator looks away from the camera, regardless of who or what the investigator is looking at. In contrast, when an interaction occurs in person, seeing the investigator look at someone else will momentarily exclude a participant from the interaction, which interferes with memory performance. However, being physically present in the room may lessen the experience of social exclusion when the investigator looks at the computer screen (and one can see that the investigator is not communicating with any particular person).

The data here converge on the idea that the memory effects are due to eye contact and how it is interpreted by participants, i.e., that spoken words are intended for the person who is making eye contact. While an implicit assumption has been that these effects are unique to eye gaze, it is reasonable to ask whether other cues that suggest who information is for, such as pointing (Yoon, Johnson, & Csibra, 2008) or calling out someone’s name (Kampe et al., 2003), might produce similar memory effects. On the other hand, making eye contact with the investigator also conveys a strong signal that the investigator can see you (or is watching you). The feeling of being watched motivates people to behave in more socially desirable ways (Bond & Titus, 1983; Levine, Resnick, & Higgins, 1993; Nasiopoulos, Risko, Foulsham, & Kingstone, 2014; Nasiopoulos et al., 2015; Risko & Kingstone, 2011; Zajonc, 1965). In the context of the task, behaving in a socially desirable way would be to follow task instructions to pay attention to what the investigator says. However, when eye contact is made with the participant’s partner, the participant believes that someone else is being monitored and s/he is thus excluded (i.e., the signal is not for me but for someone else). As a result, less attention is committed to the investigator's behaviour (i.e., a spoken word). The signal that one is “being monitored” is special to eye contact, and is not associated with other non-verbal communicative gestures, such as pointing. If these memory benefits arise when participants feel motivated to perform well because they are being monitored, and deficits occur when a participant believes that someone else is being monitored, then other gestures should not produce these memory effects.

Chapter 5: Do other social cues produce and drive memory effects?
Studies 5 and 6 in Chapter 4 demonstrate that eye contact can improve memory for what a speaker says, whereas not being selected for eye contact can degrade it. These changes in memory can be attributed to an observer interpreting a speaker’s eye contact as signaling who should pay attention during a social encounter, and who should not. However, this particular signal can be sent through many other socially communicative gestures (e.g., pointing; Yoon et al., 2008), so the memory effects this signal produces need not be specific to eye gaze. For example, eye gaze and many other non-verbal gestures (e.g., pointing, Yoon et al., 2008; head and hand movements and changes in facial expression, Kampe, Frith & Frith, 2003; Morey, 1959; being spoken to, Kuhn et al., 2016) are used to bring something to the attention of others. These gestures can also be used to single out a particular person or object from a group, in the way that eye contact can specify who should pay attention. In a sense, a speaker’s eye contact or hand gesture could be used to “point” out a specific person in an audience. If this signal is crucial for driving the memory effects observed previously, then similar performance benefits and deficits should be observed regardless of whether the investigator points or looks at the two participants.

That said, specifying who should pay attention is not the only social signal that is conveyed through a speaker’s eye contact. Eye contact also sends a powerful signal that one is being observed (Conty et al., 2016; Gobel et al., 2015; Hazem et al., 2017; Myllyneva & Hietanen, 2016; Risko et al., 2016). This signal is a unique property of eye contact and may distinguish it from other communicative gestures or cues (such as pointing) that express who a speaker intends to communicate with, and researchers have speculated that it is the combination of signaling to others and observing others that contributes to eye gaze being a particularly sensitive stimulus. If the signal that produces these memory effects is specific to eye contact, then only a speaker’s eye contact could produce the observed memory effects.

Studies 7 and 8 investigate whether the performance benefits and deficits produced in response to an investigator's eye gaze are specific to the eyes, or whether they can be generated through the use of other communicative gestures that suggest who a message is for without also signaling to that person that s/he is being observed.

5.1 Study 7: Hand gestures produce memory task benefits and deficits

It is currently unclear which signal conveyed through eye contact is helping and hindering memory performance. To distinguish which particular signal produces these performance effects, the present study will explore whether these memory effects persist when an investigator points (instead of looks) at a participant, their partner, or neither participant, while speaking a series of words. Afterwards, the words will be presented in a recognition test. If a signal that is unique to eye contact, such as the feeling of being watched, is responsible for the previously observed memory effects, then no memory effects should be observed in response to the investigator’s pointing.
On the other hand, if these performance effects occur when a different non-verbal signal suggests who should pay attention, then test performance may improve for words spoken while the investigator points at a participant and worsen when the investigator points at their partner. In the event that both memory benefits and deficits are produced by a gesture other than the eyes, this would be a powerful demonstration that the eyes and other gestures produce these memory effects through the same mechanism: a communicative signal of “who the word is for”.

That said, these two communicative signals could operate simultaneously, and their impact on memory could be different. The memory benefits and memory deficits could be produced through different mechanisms, and one effect, but not the other, could be unique to the eyes. Recall from the previous chapter (Studies 5 and 6) that the investigator’s physical presence did not change whether the investigator’s eye contact improved memory for what was said. However, when the investigator was not physically present, simply looking away was enough to generate a memory deficit for what was said, and knowing whether the investigator looked at someone or something did not differentially affect performance. Thus, it could be that only eyes can improve memory when a participant is selected, because the feeling of being monitored is unique to the eyes. If that were the case, then this particular memory improvement could not be generated with a different non-verbal social gesture, since only eye contact can lead one to feel monitored. However, if signaling ‘who a message is for’ generated the memory deficits, then memory should worsen anytime someone other than the participant is referenced, and this effect should generalize from the eyes to other social cues that specify “who a message is for”. The present study will clarify which mechanism underlies the previously reported memory benefits and deficits that have been observed in response to eye gaze.

5.1.1 Method

Participants. Forty-eight female undergraduate students from the University of British Columbia received course credit for participating. All had normal or corrected-to-normal vision and were naive about the purpose of the experiment.

Stimuli, Apparatus and Procedure. The stimuli, apparatus and procedure were the same as those used in Study 5 in Chapter 4, with the exception that now the investigator pointed at the participant, their partner, or made no gesture before saying the word on a given trial.

The instructional sequence that was visible to the investigator during the encoding phase was the same as in the studies reported in Chapter 4, except that it prompted the investigator to “point” instead of “look” at participants (for the exact timing, please refer to Figure 4.1). The instructions would prompt the investigator to point at a given participant, and then 1 second later a word would appear. The investigator would point and read the word simultaneously as soon as the word appeared. While reading the words to the participants, the investigator kept her eyes down on the screen. She kept her hand hidden behind the laptop screen, out of the participants’ view. To point at the participants, the investigator would lift her right hand above the laptop screen and extend her index finger to briefly point at the participant (slightly less than a second), and then return her hand behind the laptop.
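The Results sections of this and the neighbouring studies all follow the same analysis pipeline: a one-way within-subjects ANOVA on each dependent measure, followed by planned two-tailed repeated measures t-tests between condition pairs. The following is only a minimal sketch of that pipeline, not the original analysis script; the file name, column names, and condition labels are illustrative assumptions.

```python
import pandas as pd
from scipy.stats import ttest_rel
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data: one row per participant per condition,
# with columns: subject, condition ('participant' | 'partner' | 'no one'),
# and accuracy (percentage of old words correctly recognized).
df = pd.read_csv("study7_accuracy_long.csv")  # assumed file name

# One-way within-subjects ANOVA with investigator pointing (3 levels)
# as the independent variable.
aov = AnovaRM(data=df, depvar="accuracy", subject="subject",
              within=["condition"]).fit()
print(aov)

# Planned two-tailed repeated measures t-tests between condition pairs.
wide = df.pivot(index="subject", columns="condition", values="accuracy")
for a, b in [("participant", "partner"),
             ("participant", "no one"),
             ("partner", "no one")]:
    t, p = ttest_rel(wide[a], wide[b])
    print(f"{a} vs {b}: t({len(wide) - 1}) = {t:.2f}, p = {p:.3f}")
```

The same skeleton applies to RT, d prime, and beta by substituting the dependent variable.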
5.1.2 Results

A one-way within-subjects ANOVA was conducted on response time (RT), response accuracy, response sensitivity (d prime), and response bias (beta) with investigator pointing (3 levels: participant, partner, and no one) as an independent variable.

RT. Mean RTs are presented in Figure 5.1. There was a main effect of investigator pointing (F(2,94)=3.79, MSE=30717.81, p<0.05). Three planned two-tailed repeated measures t-tests revealed that participants were faster to recognize words that were previously spoken while the investigator pointed at them (911 ms) than at the other participant (1007 ms; t(47)=2.27, SEM=42.22, p<0.05), or when the investigator did not point (979 ms; t(47)=2.42, SEM=27.89, p<0.05). However, there was no difference in how quickly participants recognized words that were said while the investigator pointed at the other participant or at no one (t(47)=0.8, SEM=35.77, p=0.43).

Figure 5.1 RT as a function of Investigator pointing (Participant, No one, Partner). Note that new words have been plotted in this figure as a reference point but were not included in the analysis. Error bars represent the 95% confidence interval as defined by Masson and Loftus (2003).

Percentage Correct: Analysis of the accuracy data (see Figure 5.2) revealed a main effect of investigator pointing (F(2,94)=18.88, MSE=129.14, p<0.001). Three planned two-tailed repeated measures t-tests revealed that participants recognized more words that were spoken while the investigator pointed at them (77%) than at the other participant (63%; t(47)=6.21, SEM=2.3, p<0.001), or at no one (71%; t(47)=2.68, SEM=2.42, p<0.05). Participants also recognized fewer words that were previously spoken when the investigator pointed at the other participant than when they did not point (t(47)=3.45, SEM=2.25, p<0.005).

Figure 5.2 Percentage correct as a function of Investigator pointing (Participant, No one, Partner). Error bars represent the 95% confidence interval as defined by Masson and Loftus (2003).

D’: The results mirror the accuracy data. Analysis of the sensitivity data (see Figure 5.3) revealed a main effect of investigator pointing (F(2,94)=19.65, MSE=0.14, p<0.001). Three planned two-tailed repeated measures t-tests revealed that participants were more sensitive to words that were previously spoken while the investigator pointed at them (1.86) than at the other participant (1.37; t(47)=6.32, SEM=0.08, p<0.001) or at no one (1.64; t(47)=2.61, SEM=0.08, p<0.05). Participants were also less sensitive to words that were previously spoken while the investigator pointed at the other participant than when they did not point (t(47)=3.67, SEM=0.07, p<0.005).

Figure 5.3 D prime as a function of Investigator pointing (Participant, No one, Partner). Error bars represent the 95% confidence interval as defined by Masson and Loftus (2003).

Beta: Analysis of the beta values (see Figure 5.4) revealed a main effect of investigator pointing (F(2,94)=7.86, MSE=0.32, p<0.005). Three two-tailed repeated measures t-tests revealed that participants responded more liberally to words that were previously spoken while the investigator pointed at them (1.60) than at the other participant (2.06; t(47)=3.79, SEM=0.12, p<0.001), and marginally more liberally than when the investigator did not point (1.83; t(47)=1.93, SEM=0.12, p=0.06). Participants also responded more conservatively to words that were previously spoken while the investigator pointed at the other participant than when they did not point (t(47)=2.17, SEM=0.1, p<0.05).
Figure 5.4 Decision bias (Beta) as a function of Investigator pointing (Participant, No one, Partner). Error bars represent the 95% confidence interval as defined by Masson and Loftus (2003).

5.1.3 Discussion

In this study, participants who were tested in pairs remembered words best when they were spoken by an investigator who pointed at them. Word recognition was also hindered when the investigator pointed at the other participant relative to when the investigator did not point. These findings replicate the memory benefits and deficits that were produced in response to an investigator’s eye gaze in Studies 5 and 6 (Chapter 4). Thus, the present data suggest that the performance benefits and deficits observed in response to eye gaze and pointing arise from a shared mechanism. It could be that pointing, eye gaze, and potentially other non-verbal gestures can be used to signal when a particular person should pay attention. If the nature of the memory effect (when it occurs) is not unique to eyes, then a key distinguishing dimension of eyes (that they observe as well as signal) is not a critical feature of memory modulation. Since both eye gaze and pointing are visual, non-verbal gestures that can be used by a speaker, it is sensible that these cues could elicit attentional shifts that influence encoding in the same way. Plenty of laboratory research supports the idea that these cues can operate via similar mechanisms to elicit changes in attention when they are observed (though this is still a point of contention in the scientific community; see Ristic & Kingstone, 2012; Olk, Tsankova, Petca & Wilhelm, 2014).

An intriguing question is whether similar encoding benefits and deficits could be produced by referential signals that are verbal in nature. For example, an instructor could single out a student in their class by calling out their name (Kampe, Frith, & Frith, 2003), which is an auditory signal, or by gesturing towards the student (i.e., looking at them, or pointing at them). While both pointing and eye contact can easily be observed, a verbal signal can only be heard. A speaker’s visual and verbal referential signals could operate via different mechanisms, in which case they might have differential effects on how participants encode information during a live encounter. With a verbal signal referring to participants, the cue is no longer visual or non-verbal in nature, nor is it directional (a property that could otherwise produce an attentional shift, associated with what the speaker is saying, towards or away from the participant; Ristic & Kingstone, 2012; Olk et al., 2014). If none of these visual aspects of the eyes and pointing contribute to the memory effects, and only the signal that a message is for a particular person drives these effects, then the memory effects observed in response to the eyes and pointing should generalize to a verbal cue that also conveys this signal. This idea is tested in the next experiment.

5.2 Study 8: Verbal signals produce memory task benefits and deficits

In the previous studies, memory benefits and deficits were generated through the use of non-verbal gestures that could easily be observed. However, it is unclear whether referential cues of a different modality (i.e., verbal instead of visual) will also produce similar memory benefits and deficits.
This idea will be tested by using an investigator who reads words aloud to a pair of participants just after calling out the name of a participant, their partner, or neither participant. If the same mechanism underlies these verbal and non-verbal referential signals, then relative to when no one’s name is called, performance should improve for words spoken just after the investigator calls out a participant’s name and should worsen when their partner’s name is called. However, if the mechanisms subserving verbal and non-verbal cues are in fact different, then any memory effects generated by the investigator calling out the name of the participant, their partner, or no one would differ from those generated by pointing in the previous experiment.

5.2.1 Method

Participants. Forty-eight female undergraduate students from the University of British Columbia received course credit for participating. All had normal or corrected-to-normal vision and were naive about the purpose of the experiment.

Stimuli, Apparatus and Procedure. The stimuli, apparatus and procedure were the same as those used in the previous experiment, with the exception that now the investigator called out the name of the participant, their partner, or said neither participant’s name before saying the word on a given trial. The investigator asked the participants their names at the outset of the study. The instructional sequence that was visible to the investigator during the encoding phase was the same as in the studies reported in Chapter 4 and the previous study, except that it prompted the investigator to “name” participants instead of “point” or “look” at participants (for the exact timing, please refer to Figure 4.1). The instructions would first prompt the investigator to name a given participant, and then 1 second later a word would appear on screen. The investigator would say the participant’s name as soon as the word appeared and then read the word immediately after (approximately half a second after naming the participant). While reading the words to the participants, the investigator kept her eyes down on the screen.

5.2.2 Results

A one-way within-subjects ANOVA was conducted on response time (RT), response accuracy, response sensitivity (d prime), and response bias (beta) with investigator naming (3 levels: participant, partner, and no one) as an independent variable.

RT. Mean RTs are presented in Figure 5.5. There was no main effect of investigator naming (F(2,94)=0.02, MSE=30206.40, p=0.97).

Figure 5.5 RT as a function of investigator naming (Participant, No one, Partner). Note that new words have been plotted in this figure as a reference point, but were not included in the analysis. Error bars represent the 95% confidence interval as defined by Masson and Loftus (2003).

Percentage Correct: Analysis of the accuracy data (see Figure 5.6) revealed a main effect of investigator naming (F(2,94)=14.98, MSE=170.31, p<0.001). Three two-tailed repeated measures t-tests revealed that participants recognized more words that were spoken after the investigator called out their name (75%) than when the other participant’s name was called (60%; t(47)=5.202, SEM=2.78, p<0.001), or no one’s name was called (66%; t(47)=3.5, SEM=2.51, p<0.005). Participants also recognized more words that were spoken after the investigator called no one’s name than when the other participant’s name was called (t(47)=2.11, SEM=2.69, p<0.05).
Figure 5.6 Percentage correct as a function of Investigator naming (Participant, No one, Partner). Note that new words have been plotted in this figure as a reference point, but were not included in the analysis. Error bars represent the 95% confidence interval as defined by Masson and Loftus (2003).

D’: The results mirror the accuracy data. Analysis of the sensitivity data (see Figure 5.7) revealed a main effect of investigator naming (F(2,94)=14.02, MSE=0.18, p<0.001). Three two-tailed repeated measures t-tests revealed that participants were more sensitive to words that were said after the investigator called their name (1.76) than when the other participant was named (1.31; t(47)=5.07, SEM=0.09, p<0.001) or no one was named (1.5; t(47)=3.14, SEM=0.08, p<0.005). Participants were also more sensitive to words said when the investigator named no one than when they named the other participant (t(47)=2.24, SEM=0.08, p<0.05).

Figure 5.7 D prime as a function of Investigator naming (Participant, No one, Partner). Error bars represent the 95% confidence interval as defined by Masson and Loftus (2003).

Beta: Analysis of the beta values (see Figure 5.8) revealed a marginal main effect of investigator naming (F(2,94)=2.46, MSE=0.29, p=0.09). Three two-tailed repeated measures t-tests revealed that participants responded more liberally to words that were previously spoken after the investigator called their name (1.56) than when they called the other participant (1.80; t(47)=2.57, SEM=0.09, p<0.05), but not when they named no one (1.72; t(47)=1.64, SEM=0.1, p=0.11). There was no significant difference in how conservatively participants responded to words that were previously said after the investigator named the other participant or no one (t(47)=0.57, SEM=0.13, p=0.57).

Figure 5.8 Decision bias (Beta) as a function of Investigator naming (Participant, No one, Partner). Error bars represent the 95% confidence interval as defined by Masson and Loftus (2003).

Comparison between Studies 5, 7, and 8. An additional follow-up analysis comparing Studies 5 (eye contact), 7 (pointing), and 8 (name calling) was run to reveal any differences (or similarities) in the effects these different cues had on memory. It is possible that the personalized verbal cue (i.e., calling out the participant’s name) provided an additional encoding boost relative to the non-verbal cues (i.e., pointing or eye contact). Hearing one’s own name could have a more powerful effect on attention than being pointed at or looked at (Arons, 1992; Conway, Cowan, & Bunting, 2001; Hawley, Litovsky, & Culling, 2004; Pollack & Pickett, 1957; Wood & Cowan, 1995). Indeed, one can easily focus one’s auditory attention on one’s name while filtering out a range of other sounds and conversations coming from a noisy room (i.e., the cocktail party effect; Arons, 1992; Conway, Cowan, & Bunting, 2001; Hawley, Litovsky, & Culling, 2004; Pollack & Pickett, 1957; Wood & Cowan, 1995). It could be that the associations formed with our own name are so strong that we can more easily process the message it conveys (i.e., you need to pay attention). To test this possibility, the performance benefits and deficits observed in this study were compared to those observed in the previous study, which used pointing, and Study 5 (Chapter 4), which used eye contact, using a two-way mixed ANOVA, as sketched below.
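This is only an illustrative reconstruction of the shape of that analysis, not the original script; the pingouin library, file name, and column names are assumptions.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format data pooled across Studies 5, 7, and 8, with
# columns: subject, cue ('eye gaze' | 'pointing' | 'naming'; between-subjects),
# person ('participant' | 'partner' | 'no one'; within-subjects), accuracy.
df = pd.read_csv("studies_5_7_8_accuracy.csv")  # assumed file name

aov = pg.mixed_anova(data=df, dv="accuracy", within="person",
                     subject="subject", between="cue")
print(aov.round(3))

# The critical term is the person-by-cue interaction: its absence, as
# reported below, indicates that the benefit/deficit pattern did not
# differ across the three cue types.
```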
A two-way mixed ANOVA was conducted on response accuracy with person indicated (3 levels: participant, partner, and no one) as the within-participant variable and investigator cue (3 levels: eye gaze, pointing, naming) as the between-participant variable. There was a main effect of person indicated (F(2,282)=45.45, MSE=128.49, p<.001), such that relative to when no one was indicated (70%), participants recognized fewer words when their partner was indicated (64%; t(143)=4.57, SEM=1.32, p<.001) and more words when they were indicated (77%; t(143)=5.16, SEM=1.29, p<.001). Participants also recognized more words when they were indicated than when their partner was indicated (t(143)=9.19, SEM=1.38, p<.001). There was also a main effect of investigator cue (F(2,141)=3.40, MSE=581.79, p<.05), such that participants recognized fewer words when the investigator used naming (67%; t(94)=2.64, SEM=1.9, p<.05), but not when they used pointing (70%; t(94)=1.34, SEM=2.06, p=.19), relative to when the investigator used eye gaze (74%). Relative to when the investigator pointed, there was no significant difference in the number of words participants recognized when the investigator used naming (t(94)=1.3, SEM=1.9, p=.21). Critically, there was no interaction between person indicated and investigator cue (F(4,282)=1.29, MSE=107.58, p=.28).

The failure to detect an interaction between person indicated and investigator cue suggests that the performance benefits and deficits in this study were similar to those in both the eye contact and pointing studies. Thus, it is unlikely that calling out one’s name provides a more salient cue than being looked at or pointed at. This finding also lends further support to the idea that the same mechanism subserves all referential cues, regardless of whether they are verbal or non-verbal.

5.2.3 Discussion

In this study, calling out a participant’s name helped them remember what the investigator said. However, when the investigator called out their partner’s name, participants were less likely to remember what the investigator said relative to when no name was called. These findings mirror the memory benefits and deficits that were found in the previous study when the investigator pointed at participants, and in Studies 5 and 6 (Chapter 4) when the investigator made eye contact with participants. Indeed, a comparison of the memory effects observed in response to verbal cues (this study), pointing (Study 7), and eye contact (Study 5 in Chapter 4) revealed similar performance benefits and deficits across all three cues. These data suggest that the emergence of these effects does not depend on the modality of a social signal, and that instead these memory effects can be generated by both verbal and non-verbal social signals. Furthermore, the data lend support to the idea that individuals infer a similar communicative signal (i.e., whose turn it is to pay attention) from eye contact, gestures, and verbal cues, and that this inference drives all of the memory benefits and deficits that have been observed in this chapter (Studies 7 and 8) and the previous chapter (Studies 5 and 6). The findings also stand in contrast to the idea that the investigator’s eye contact in Studies 5 and 6 (Chapter 4) made participants feel compelled to behave in socially desirable ways.
While the studies presented in this chapter certainly suggest that eye contact and other referential cues all provide a similar signal, it is still the case that eye contact simultaneously provides additional signals (Conty, George, & Hietanen, 2016; Gobel, Kim, & Richardson, 2015; Jarick & Kingstone, 2015; Myllyneva & Hietanen, 2015, 2016; Nasiopoulos, Risko, & Kingstone, 2015; Risko & Kingstone, 2015; Risko et al., 2012; Risko, Richardson, & Kingstone, 2016). These other signals did not appear to interfere with or alter the processing that occurred during the experimental task. However, only female participants were tested. Recall from Studies 1-3 (Chapter 2) that male participants did not decode or use the signal to pay attention that was embedded in the speaker’s eye contact. Based on these findings, male participants were excluded from Studies 5-8 (Chapters 4 and 5) in order to more effectively explore the memory benefits that were being generated by eye contact in Studies 1-3 (Chapter 2). Even though the experiments presented in this chapter and the last demonstrate that female participants can interpret and use a variety of different socially communicative cues, it is not clear whether male participants can. We speculated that females were simply more effective than males at decoding social cues. However, it could be that males can effectively decode a social signal to pay attention, but not when the signal is conveyed through eye contact. Perhaps they would be able to decode the signal if it were conveyed through pointing or verbal cues. If the eyes truly do not convey unique signals that contribute to the emergence of these memory benefits and deficits, then we would expect that memory for what a speaker says would be uninfluenced when any social cue is used to indicate a message is for a male participant. This idea will be explored further in the next chapter.

Chapter 6: Can other social cues produce memory benefits in males? (Study 9)

In Studies 1-3 (Chapter 2), recognition performance improved when an investigator looked at female participants, but not when male participants were looked at. To better explore the performance benefits associated with eye contact, males were excluded from Studies 5-8 (Chapters 4 and 5), where females not only benefited from being looked at, but also from being pointed at or called on by the investigator. The findings from Studies 5-8 (Chapters 4 and 5) suggest that the memory benefits observed in Studies 1-3 (Chapter 2) generalize to other cues that are referential in nature (i.e., a signal suggesting who a message is intended for). The studies thus far indicate that female participants interpret and use a variety of socially communicative cues, including eye contact, to improve their memory performance. This raises the question of whether males’ memory performance can be improved when the signal “that a message is for them” is not conveyed through eye gaze but through some other cue instead. In Studies 1-3 (Chapter 2), male participants’ memory performance did not improve when an investigator looked at them, suggesting that they were unable to interpret the signal that a message was for them.
While there is a body of research suggesting that females are generally better than males at decoding any social cue (Bailenson et al., 2001; Bayliss et al., 2005; Connellan et al., 2000; Hall, 1978; Lutchmaya et al., 2002; Marschner et al., 2015; Yee et al., 2007), eye gaze is a particularly rich stimulus that is distinct from other non-verbal cues because it can convey many signals simultaneously (e.g., being watched, pay attention). Further, laboratory evidence suggests that eye gaze is represented and processed in the brain differently than other social cues (Carlin & Calder, 2013; Emery, 2000; Grossmann, 2017; Hooker et al., 2003; Itier & Batty, 2009; Tipper, Handy, Giesbrecht, & Kingstone, 2008). While females may have an advantage for interpreting the referential signal and the investigator’s eye contact simultaneously, males may struggle to process or ignore the eye contact, which may have prevented them from interpreting the more general signal that a message was for them. In light of the findings from Studies 7 and 8 (Chapter 5) that pointing and name calling have similar effects on memory as eye gaze, it is of both empirical and theoretical importance to determine whether males can interpret the signal that they should pay attention when it is conveyed through a stimulus other than eye contact, such as pointing or name calling. It could be that males struggle only with processing eye gaze, and not with the referential signal that they should pay attention. This would mean that eye contact is unique in that it conveys additional information that can interfere with encoding and remembering what a speaker says. On the other hand, it is also possible that male participants will be unable to process the signal that a message is for them regardless of whether the signal is conveyed through eye contact or not. The present study will tease these two possibilities apart.

To clarify whether or not male participants can benefit from socially communicative cues that do not rely on eye contact, an investigator read words aloud while either pointing at a participant or not. Pointing was examined in this study because it shares more similarity with eye gaze than name calling does: like eye gaze, pointing is delivered by a visible part of the body, is directional, and conveys its signal non-verbally. These words were then presented in a recognition test. If pointing does improve test performance, this would suggest that males can decode non-verbal gestures, but that they struggle to process eye contact. This finding would also support the idea that eye contact and other socially communicative gestures are functionally different. However, if pointing fails to improve test performance, it would suggest that males struggle to decode the non-verbal communicative cue. This finding would instead support the idea that eye contact and other socially communicative gestures affect memory through the same mechanism, at least with regard to the present paradigm.

6.1 Methods

Participants. Twenty-eight male undergraduate students from the University of British Columbia received course credit for participating. All had normal or corrected-to-normal vision and were naive about the purpose of the experiment.

Design. Investigator pointing (2 levels: pointing at the participant and no pointing) was manipulated within participant.

Stimuli, Apparatus and Procedure.
The stimuli, apparatus and procedure were the same as those used in Study 1 in Chapter 2, with the exception that now the female investigator pointed at the participant or made no gesture before saying the word on a given trial. Instead of prompting the investigator to “look”, the instructional sequence during the encoding phase now prompted the investigator to “point” at the participant (for the exact timing, please refer to Figure 2.1). One second later, a word would appear on screen. As soon as the word appeared, the investigator would point and read the word simultaneously. While reading the words to the participants, the investigator kept her eyes and head directed towards the computer screen, and kept her hand hidden behind the laptop screen, out of the participant’s view. To point at the participant, the investigator would lift her right hand above the laptop screen and extend her index finger to briefly point at the participant (slightly less than a second), and then return her hand behind the laptop.

6.2 Results

Four two-tailed t-tests were conducted on response time (RT), response accuracy (percentage correct), response sensitivity (d prime) and response bias (beta) with investigator pointing (2 levels: participant and no pointing) as a within-participant factor.

RT. Mean RTs are presented in Figure 6.1. There was no effect of investigator pointing (t(27)=0.41, SEM=43.49, p=0.68).

Figure 6.1 RT as a function of whether the investigator pointed (without pointing versus with pointing). Note that new words have been plotted in this figure as a reference point but were not included in the analysis. Error bars represent the 95% confidence interval as defined by Masson and Loftus (2003).

Percentage correct. Analysis of the accuracy data (see Figure 6.2) revealed a significant effect of investigator pointing (t(27)=2.85, SEM=1.60, p<0.01), such that participants recognized more words that were spoken while the investigator pointed at them (71%) than when no gesture was made (66%).

Figure 6.2 Percentage correct as a function of whether the investigator pointed (without pointing versus with pointing). Note that new words have been plotted in this figure as a reference point but were not included in the analysis. Error bars represent the 95% confidence interval as defined by Masson and Loftus (2003).

D’. The results mirror the accuracy data. Analysis of the sensitivity data (see Figure 6.3) revealed a significant effect of investigator pointing (t(27)=2.79, SEM=0.05, p<0.05), such that participants were more sensitive to words that were spoken while the investigator pointed at them (1.47) than when no gesture was made (1.33).

Figure 6.3 D prime as a function of whether the investigator pointed (without pointing versus with pointing). Error bars represent the 95% confidence interval as defined by Masson and Loftus (2003).

Beta. Analysis of the beta values (see Figure 6.4) revealed that there was no effect of investigator pointing (t(27)=1.81, SEM=0.13, p=0.09).

Figure 6.4 Decision bias (Beta) as a function of whether the investigator pointed (without pointing versus with pointing). Error bars represent the 95% confidence interval as defined by Masson and Loftus (2003).

Comparison between eye gaze (Study 1 in Chapter 2) and pointing (Study 9) in male participants. An additional analysis was conducted to directly compare the memory effects observed in response to eye gaze in Study 1 in Chapter 2 and pointing in the present study.
Only the male participants from Study 1 were used for comparison purposes. A two-way mixed ANOVA was conducted on response time (RT), response accuracy (percentage correct), response sensitivity (d prime) and response bias (beta) with investigator cue (2 levels: participant and no one) as the within-participant factor and cue type (2 levels: eye gaze and pointing) as the between-participant factor. Critically, analysis of the accuracy data revealed an interaction between investigator cue and cue type (F(1,68)=13.64, MSE=36.59, p<.001), such that when the investigator used eye gaze in Study 1, participants recognized fewer words read while the investigator made eye contact with them (73%) than when they did not (76%; t(41)=2.37, SEM=1.33, p<0.05). However, when the investigator pointed, the pattern reversed: participants recognized more words that were spoken while the investigator pointed at them (71%) than when no gesture was made (66%; t(27)=2.85, SEM=1.60, p<0.05).6

6 The full analysis is reported in Appendix D.

6.3 Discussion

The results from this study demonstrate that pointing can improve recognition performance in male participants. This stands in contrast to Studies 1-3 presented in Chapter 2, where eye contact failed to generate memory benefits in male participants. Furthermore, the comparison between the memory effects observed in male participants in response to eye gaze in Study 1 in Chapter 2 and pointing in the present study indicates that while males may struggle to use and/or choose to ignore eye gaze as a non-verbal referential social signal, they can interpret a different non-verbal referential social signal, i.e., pointing. Males may use pointing, but not eye gaze, because pointing is a more overt signal. Eye contact, on the other hand, conveys a variety of different signals, which could make it more challenging for males to process. The finding that males do not use the investigator’s eye contact, whereas females do, is consistent with the idea that females are more sensitive to the subtle messages (or are better able to parse the various signals) embedded in one’s eye gaze (Bailenson et al., 2001; Bayliss et al., 2007; Connellan et al., 2000; Hall, 1978; Lutchmaya et al., 2002a; McClure, 2000; Rosenthal et al., 1979).

Furthermore, while Studies 5-8 (Chapters 4 and 5) demonstrate that eye contact and other social gestures communicate similar signals, the data from the current study highlight the differences that exist in what these social signals convey. There is an ongoing debate regarding whether different cues are subserved by different mechanisms (Brignani, Guzzon, Marzi, & Miniussi, 2009; Marotta, Román-Caballero, & Lupiáñez, 2018; Olk, Tsankova, Petca, & Wilhelm, 2014; Ristic & Kingstone, 2012). This particular study suggests that these cues affect memory through different mechanisms; however, Studies 5-8 (Chapters 4 and 5) highlight how eye contact and other cues also share a common mechanism that affects memory performance. Thus, any memory effects that are observed in response to eye gaze cannot be assumed to reflect a signal that is unique to eye gaze or a signal that is general to all other referential cues. Instead, it is sensible to assume that eye gaze related memory effects reflect a mechanism common to a variety of referential cues, and that something unique to gaze contributes to the absence of memory benefits in males.
Furthermore, this assumption may apply to studies in both laboratory and natural settings that examine the effects of eye gaze, pointing, name calling, and other social cues, across a wide array of stimuli that vary in terms of potential for social interaction (ranging from still photos to naturally occurring interactions between real people). The implications and interpretations of this entire body of work will be discussed further in Chapter 7.

Chapter 7: General Discussion

7.1 Chapter overview

The goal of this dissertation was to explore and clarify whether socially communicative eye contact can enhance memory for spoken information, by using a novel paradigm that directly tests this issue. Past laboratory studies, though rigorous, compromised the socially communicative aspects of gaze by relying solely on images of people to manipulate eye gaze. On the other hand, studies from natural settings that retained the socially communicative aspects of eye gaze by using live people compromised rigor by failing to systematically manipulate or measure eye gaze. Thus, from previous work alone, it was unclear whether, and if so how, socially communicative eye gaze affects memory. The studies presented in this document relied on the combined strengths of both laboratory and natural settings to address this matter. The present work systematically manipulated the eye gaze of a real and present speaker in the context of a traditional recognition paradigm used frequently in the laboratory.

Each chapter addressed some aspect of the issues raised in Chapter 1, specifically the need to use a controlled paradigm to study the effect of eye gaze without compromising the signals that socially communicative eye contact provides in a natural setting. Chapters 2 and 3 directly tested the assumption suggested in the literature that socially communicative aspects of eye gaze can improve memory for spoken information. These initial four experiments revealed that socially communicative aspects of eye gaze have a significant effect on memory for spoken words, such that memory improved in females when eye contact was made, but not in males. The next three chapters aimed to determine whether the communicative signal that improved memory in the previous chapters was specific to the eyes. These studies revealed that when an individual is part of a dyad, memory can improve when eye contact is made, and can also decline when eye contact is made with one's partner. While eye gaze can influence memory in a manner that is unique, the communicative signal that drives memory effects in the present paradigm can be conveyed through other social cues as well.

In this final chapter, a general discussion of the studies presented in Chapters 2-6 is presented. Implications and future directions based on the presented research are also discussed.

7.2 Summary of thesis

7.2.1 Chapter 1

An overwhelming amount of our knowledge regarding how people attend to the eyes of others has come from laboratory research.
These studies reveal that people are highly attentive to the eye gaze of others across a wide variety of tasks (e.g., free viewing: Birmingham et al., 2008a, 2009a; Foulsham et al., 2010; Foulsham & Sanderson, 2013; Kuhn et al., 2009; Mojzisch et al., 2006; Schrammel et al., 2009; Walker-Smith et al., 1977; Yarbus, 1967; attentional cueing: Driver et al., 1999; Friesen & Kingstone, 1998, 2003; Kuhn et al., 2014; Langton & Bruce, 1999; Palanica & Itier, 2011; Rensink & Kuhn, 2015; Ristic, Friesen, & Kingstone, 2002; Senju & Hasegawa, 2005; Vuilleumier, 2002; Wiese et al., 2012; Wykowska et al., 2014; Zwickel & Võ, 2010; visual search: Doi, Ueda, & Shinohara, 2009; Doi & Ueda, 2007; George et al., 2006; Senju et al., 2005, 2008; Palanica & Itier, 2011; von Grunau & Anston, 1995; and face detection: Conty et al., 2007, 2006; Itier, Van Roon, & Alain, 2011; Itier, Villate, et al., 2007; Macrae et al., 2002; Pageler et al., 2003; Vuilleumier et al., 2005) and a variety of stimuli (e.g., images of faces: Laidlaw et al., 2012; complex scenes: Birmingham et al., 2008a, 2009a; Vuilleumier et al., 2005; and dynamic videos: Foulsham et al., 2010; Foulsham & Sanderson, 2013; Kuhn et al., 2014; Wykowska et al., 2014). Recent research has revealed that the attentional behaviors and underlying neural mechanisms engaged while interacting with a real person, who can interact with you, are fundamentally different from those exhibited while viewing an image of a person (e.g., Hietanen et al., 2008; Pönkänen, Alhoniemi, Leppänen, & Hietanen, 2011; Pönkänen, Peltola, & Hietanen, 2011; Risko, Laidlaw, Freeth, Foulsham, & Kingstone, 2012; Schilbach et al., 2013; Teufel, Fletcher, & Davis, 2010). Indeed, while people look at others who may interact with them when it is socially acceptable (e.g., while sharing a meal; Wu et al., 2014; Wu et al., 2013), they avoid looking at people when it is not (e.g., Gallup, Hale, et al., 2012; Gallup, Chong, et al., 2012; Laidlaw et al., 2011). To better understand how socially communicative eye gaze influences attention, researchers are conducting investigations that make use of a wide array of stimuli, ranging from images to real people, in settings ranging from the laboratory to everyday environments (Jarick & Kingstone, 2015; Jarick et al., 2016; Laidlaw et al., 2016; Laidlaw et al., 2011; Nasiopoulos et al., 2015; Pfeiffer et al., 2012, 2013; Przyrembel, Smallwood, Pauen, & Singer, 2012; Risko & Kingstone, 2011, 2015; Risko et al., 2012, 2016; Schilbach, 2010, 2015; Schilbach et al., 2006, 2013; Wilms et al., 2010).

Even though many of the same concerns regarding the ecological validity of attention to social stimuli also apply to memory for social stimuli, most of the work investigating how eye gaze affects memory has relied on using images of people as stimuli. Some of these studies report that direct gaze can make an image of a face more memorable (Hood, Macrae, Cole-Davies, & Dias, 2003; Macrae et al., 2002; Mason et al., 2004; Smith et al., 2006), and words associated with direct gaze are also often more memorable (Falck-Ytter, Carlström, & Johansson, 2014; Fry & Smith, 1975; Kelley & Gorham, 1988; Macrae et al., 2002; though for examples of instances where the direct gaze of a face has hindered memory, see Beattie, 1981; Conty, Gimmig, Belletier, George, & Huguet, 2010; Nemeth, Turcsik, Farkas, & Janacsek, 2013).
While it is promising that investigations using live people also suggest that a speaker’s eye contact may improve memory, as described below, the methodologies used by memory researchers in these socially communicative settings prevent one from actually concluding that eye contact improves memory. Studies using live people to generate interactive settings have also asked a different question about eye gaze and memory than studies that use images of people. Investigations using live speakers have focused on whether eye gaze influences memory for what a speaker says (Fullwood & Doherty-Sneddon, 2006; Helminen et al., 2016; Otteson & Otteson, 1980; Phelps, Doherty-Sneddon, & Warnock, 2006; Sherwood, 1987), whereas studies using images have primarily sought to understand how eye gaze influences memory for what someone looks like (Farroni et al., 2002; Hood, Macrae, Cole-Davies, & Dias, 2003; Laidlaw & Kingstone, 2017; Mason, Hood, & Macrae, 2004; Smith, Hood, & Hector, 2006). Moreover, investigations with live speakers have not actually measured and/or systematically manipulated when a given participant experiences eye contact with an investigator. Speakers in these studies either make eye contact occasionally as they speak to an audience or they do not make any eye contact at all. The participants in the audience are then asked to recall what the speaker said. The way eye contact is manipulated and measured in these studies creates two central issues that demand investigation. The first is that these studies have not measured when, and how much, eye contact a particular participant makes with a speaker. Thus, it is entirely possible that participants do not make eye contact with the speaker at all throughout the study. The uncertainty regarding the extent to which people experienced eye contact in these studies prevents one from determining whether eye contact improves memory. The second limitation is that the specific information that was spoken while the speaker putatively made eye contact is not distinguished from information said without eye contact. As a result, these studies cannot determine whether memory effects related to gaze reflect, for example, enhancement from direct gaze (Falck-Ytter, Carlström, & Johansson, 2014; Fry & Smith, 1975; Kelley & Gorham, 1988; Macrae et al., 2002) and/or a decline resulting from gaze aversion (Beattie, 1981; Conty, Gimmig, Belletier, George, & Huguet, 2010; Nemeth, Turcsik, Farkas, & Janacsek, 2013). At best, the mixed results in the literature suggest that both factors may be in play.

7.2.2 Chapter 2

In Chapter 2, I sought to clarify whether socially communicative eye contact helps or hinders memory for what a speaker says. In Study 1, a female investigator read words aloud to a participant while making eye contact, or not, as a word was spoken. Next, these words, and words that participants had not heard previously, were presented in a recognition test. An investigator’s eye contact improved word recognition, but only for female participants. In male participants, word recognition was actually hindered by the investigator’s eye contact. These findings suggest that socially communicative eye contact improves memory for spoken words for females, but not for males.
One explanation for the gender differences in Study 1 is that female participants benefited from the investigator’s eye contact more than males because they shared a common identity (i.e., being female; Buttelmann et al., 2013; Cassidy et al., 2011; Lewin & Herlitz, 2002). Similarly, making eye contact with an investigator of the opposite gender may have interfered with the male participants’ ability to pay attention to the words spoken when the investigator made eye contact. However, it is also possible that female participants were simply more attentive to the investigator’s eye contact than male participants, irrespective of the investigator’s gender (Bailenson et al., 2001; Bayliss, Pellegrino, & Tipper, 2005; Connellan et al., 2000a; Hall, 1978; Lutchmaya et al., 2002; Marschner et al., 2015; McClure, 2000; Yee et al., 2007). To distinguish between these possibilities, Study 2 used a male investigator instead of a female investigator. The results replicated the finding from Study 1 that a speaker’s eye contact improved word recognition only in female participants. Taken together, the results of Studies 1 and 2 demonstrate that the investigator’s gender is not driving the gender-specific memory effects. This finding is consistent with the notion that females are interpreting non-verbal social cues, such as eye contact, and using them differently than males.

In Studies 1 and 2, the eye contact initiated by the investigator was quite brief (i.e., a quick glance for less than a second as the word was spoken). While females may have noticed these brief glances, the investigator’s eye contact may have been too brief for male participants to both notice the eye contact and then dedicate more attention to words accordingly. In Study 3, investigators prolonged their eye contact to provide a longer (approximately three seconds) opportunity to observe and process the investigator’s eye gaze. Despite this, the results replicated the findings from Studies 1 and 2, such that females recognized more words spoken while the investigator made eye contact, but male participants did not. Collectively, the data provide clear evidence that socially communicative eye contact facilitates encoding and later memory for female participants, but not for male participants. These findings highlight the importance of systematically examining how gender influences gaze related memory effects.

Another possible explanation for the various contradictory findings that exist in the literature is that some work has been done in socially communicative contexts using live people as stimuli (Fullwood & Doherty-Sneddon, 2006; Helminen et al., 2016; Otteson & Otteson, 1980; Phelps et al., 2006; Sherwood, 1987), whereas other work has made use of images, which do not generate socially communicative contexts (Beattie, 1981; Conty, Gimmig, Belletier, George, & Huguet, 2010; Falck-Ytter, Carlström, & Johansson, 2014; Fry & Smith, 1975; Kelley & Gorham, 1988; Macrae et al., 2002; Nemeth, Turcsik, Farkas, & Janacsek, 2013). A real person's eye contact provides many different signals. For example, when eye contact is made one can glean information about someone’s emotional, attentional, and mental state very quickly, and it is a powerful signal that you are paying attention to each other. Any or all of these signals could influence memory. In Chapter 2, the possibility of interaction between the investigator and participant ensured that the investigator's eye gaze would convey socially communicative signals.
However, non-social cues associated with the investigator's eye gaze could still be influencing the participant's behaviour. Before concluding that socially communicative aspects of gaze produce these memory effects, it is important to eliminate any non-social interpretation that could account for the significant effect of gaze on memory.

7.2.3 Chapter 3

In Chapter 3, I tested whether the memory effects observed in Chapter 2 were contingent on socially communicative eye gaze. By presenting a video of an investigator instead of a live investigator, the socially communicative function of eye gaze was removed. A live investigator generates an interactive context in which both the investigator and the participants can convey and observe signals with their eyes. This is not the case when the investigator is presented over video, since the investigator cannot observe any signals that the participants convey through their eye gaze. Viewing a pre-recorded investigator therefore enabled a strong test of whether non-social signals embedded in eye gaze could generate the memory effects that were previously observed in response to a live investigator's eye gaze in Chapter 2. Participants watched a video of a female or male investigator who either simulated eye contact with the participant or kept her/his eyes down at the computer screen as words were read aloud. Afterwards, participants completed the same recognition test that was used in the studies presented in Chapter 2. The results were unequivocal. The memory effects previously observed in response to a live investigator's eye gaze disappeared. Without a socially communicative context, eye gaze had no effect on memory. This finding provides strong support for the idea that socially communicative signals conveyed through eye gaze influence memory, and it rules out non-communicative explanations of how eye contact could affect memory.

7.2.4 Chapter 4

The interpretation of the memory effects reported in Chapters 2 and 3, and of those previously reported in the literature, is that a live investigator's eye contact enhances memory (Helminen, Pasanen, et al., 2016; Otteson & Otteson, 1980; Sherwood, 1987). However, it is equally plausible that the investigator's gaze avoidance reduced memory (Fullwood & Doherty-Sneddon, 2006). These two possibilities were not distinguished in Chapter 2 (or in the previous literature) because there was no adequate baseline (i.e., a condition in which the investigator makes eye contact with someone other than the participant). The studies in Chapter 4 included such a baseline to determine whether, in female participants, memory performance is enhanced by the investigator's eye contact and/or reduced when the investigator looks elsewhere (i.e., does not make eye contact with anyone). The secondary goal of Chapter 4 was to clarify whether observing a head lift or experiencing mutual eye contact is responsible for the previously observed memory benefits. During a live social encounter, many social signals are communicated that might affect how someone pays attention. For example, in the previous studies two non-verbal socially communicative cues, non-directional head lifts and directional eye contact, could both signal when an individual should pay attention to a particular word. Since eye contact did not improve memory in male participants in any of the previous studies, only female participants were tested in Chapter 4. In Study 5, an investigator read words aloud to two participants.
While reading the words, the investigator alternated between making eye contact with one participant or the other, or looked away from both participants by looking at a computer screen. Two important findings emerged. First, word recognition improved only when a participant made eye contact with the investigator. Second, word recognition was worse when the investigator looked at a participant's partner relative to when neither participant was looked at. This suggests that observing eye contact between other people excludes a participant and can lead to memory decrements for that excluded participant. Furthermore, these findings indicate that experiencing eye contact, but not a head lift, led to these memory effects. The joint interpretation of the findings in Study 5 is that eye contact can signal who a message is intended for (Csibra & Gergely, 2009; Senju & Csibra, 2008). When two people are present, a listener infers that the investigator's eye contact includes them in the interaction, and the listener is more attentive as a result. Likewise, by observing a speaker make eye contact with someone else, a listener infers that they are excluded, and so they dedicate less attention to what the speaker says.

This raises the question of whether participants need to be physically present during a live encounter with an investigator for these different memory effects to emerge. In Chapter 3, the eye gaze of a videotaped investigator had no effect on memory. While presenting the investigator over video was meant to remove only the socially communicative aspects of eye gaze, the physical presence of the investigator was also removed. Study 6 tested whether the memory benefits and deficits observed in Study 5 would persist during a live interaction if the investigator was not physically present in the same room as the participant. A live investigator was presented over Skype to two participants who were seated in separate rooms. Critically, both participants believed that their partner was in the room with the investigator and that only they had been isolated. As in Study 5, in which the investigator was present in person, a participant's memory was best for words communicated while the investigator simulated eye contact over Skype. While there was a memory cost for words spoken anytime the investigator looked away from the participant over Skype, this cost was similar regardless of whether the investigator appeared to be looking at the partner or at their computer screen. This finding suggests that unless a participant is physically present to see the investigator looking at something else (rather than someone else), they will assume that the spoken information is intended for someone in the room with the investigator. Thus, social exclusion is experienced over Skype anytime the investigator looks away from a participant, regardless of whom or what the investigator is looking at.

In sum, the studies in Chapter 4 provide further evidence that socially communicative eye contact modulates memory. These studies demonstrate that experiencing eye contact improves a participant's memory for what an investigator said, and that observing an investigator make eye contact with someone else reduces memory for what was said. These changes in memory can be attributed to a participant interpreting an investigator's eye contact as signaling who a message is for.

7.2.5 Chapter 5

The social signal of who a message is for is not necessarily unique to eye contact.
Directional non-verbal gestures, such as pointing, can convey this message as well. While there is plenty of evidence to suggest that eye gaze is a unique social stimulus (Conty, Russo, et al., 2010; Csibra & Gergely, 2009; Gobel et al., 2015; Hayward et al., 2017; Jarick & Kingstone, 2015; Kleinke, 1986; Myllyneva & Hietanen, 2016; Nasiopoulos et al., 2015; Nemeth et al., 2013; Pönkänen, Alhoniemi, et al., 2011; Pönkänen, Peltola, et al., 2011; Risko et al., 2016b; Schilbach, 2015; Senju & Csibra, 2008), there are nevertheless situations in which its effects are similar to those of other stimuli (Böckler et al., 2015; Kingstone, Smilek, Ristic, Friesen, & Eastwood, 2003; Kuhn & Kingstone, 2009; Ristic, Friesen, & Kingstone, 2002; Ristic et al., 2007). For example, certain spatial orienting effects elicited by images of eyes in the lab are practically indistinguishable from attentional effects elicited by non-social cues (e.g., arrows; Galfano et al., 2012; Kuhn & Kingstone, 2009; Ristic, Friesen, & Kingstone, 2002; Ristic, Wright, & Kingstone, 2007; Ristic & Kingstone, 2012). The studies presented in Chapter 5 sought to discern whether the memory effects observed in this dissertation are unique to the eyes or whether they generalize to other social cues that can also convey who a message is for. In Study 7, the investigator read words aloud to two participants while alternating between pointing at a participant, their partner, or neither participant. Relative to when no one was pointed at, participants' recognition performance was best for words that were spoken while the investigator pointed at them, and worse than baseline for words spoken while the investigator pointed at their partner. This demonstrates that the previous memory effects are not gaze specific, and that they generalize to another visual, directional, non-verbal social signal, namely pointing.

That said, it is possible that both pointing and gaze shift attention in a similar way because both cues are visuo-directional stimuli. To test whether these effects generalize to a non-visual and non-directional social signal, in Study 8 the investigator called out the name of a participant, their partner, or neither participant before reading a word. The findings mirrored the memory benefits and costs that were found in Study 7 when the investigator pointed at participants, and those related to gaze (e.g., Studies 5 and 6). Together these data indicate that a key variable underlying the memory effects observed throughout the present studies is the communicative signal of "who the word is intended for"7. Furthermore, this information can be conveyed by a visual directional change in eye gaze, or similarly by a pointing gesture, and even by a non-visual, non-directional communicative signal such as calling out a person's name.

7 Note that in all of the studies there was never any actual instruction to the participants that some words were more (or less) important to attend to than others.

7.2.6 Chapter 6

This chapter investigated whether the memory improvements observed in response to cues other than eye contact in Chapter 5 would generalize to male participants. Recall that male participants were excluded from Chapters 4 and 5 because their recognition performance failed to show any reliable benefit from an investigator's eye contact, and in two of the first three studies, the results suggested a memory cost for words that were spoken during eye contact.
One possibility is that males do not have the same tendency as females to use the investigator's eye gaze to guide their attention in this paradigm, but that males could interpret the same general signal (who a message is for) if it were conveyed through a cue other than eye gaze. To address this issue, in Study 9 an investigator read words aloud to male participants while either pointing at the participant before reading each word or not. Male participants recognized words better when they were spoken while the investigator pointed at them. This finding suggests that, unlike eye gaze, other socially communicative cues (e.g., pointing) can elicit verbal memory benefits in males. The data from Chapters 5 and 6 thus indicate that multiple social stimuli can signal "who a spoken word is for"; however, while female participants appear to be equally sensitive to all of these cues, for male participants eye gaze appears to be different from other social cues, such as pointing.

7.2.7 Summary

In sum, the findings from this body of work shed light on how socially communicative signals (eye gaze, pointing, and naming) affect memory. The data from Chapter 2 established that socially communicative eye contact can improve memory for verbal information. Chapter 3 demonstrated that a socially communicative context is critical for generating gaze-related memory benefits, and that in the absence of a socially communicative context, these benefits disappear. In Chapter 4, eye gaze generated both memory benefits and deficits, which suggested that eye gaze was communicating who should pay attention and who should not. Even though a live context was required to produce these effects, being physically present was not. Chapter 5 demonstrated that a variety of social cues (including eye contact) can induce memory benefits and deficits because all of these cues can communicate who should pay attention. This provided clear evidence that the observed memory effects are not unique to eye contact. However, the data from Chapter 6, considered jointly with the data from Chapter 2, reveal that eye gaze is a rich social cue that is still processed differently from other social cues. Therefore, memory effects that are generated with eye gaze (and specifically those associated with eye contact) cannot be assumed to be unique to eye gaze or common to all social cues.

7.3 Implications, limitations and future directions

7.3.1 The hierarchy of social cues

The goal at the outset of this thesis was to test the assumption that socially communicative aspects of eye gaze drive the memory effects that have been reported in the literature. The studies presented in Chapters 2 and 4 support this assumption by providing clear evidence that socially communicative eye gaze affects memory. However, Chapter 5 made it apparent that the memory benefits and deficits observed in response to eye gaze in Chapter 4 generalize to other social cues that convey referential information (i.e., who a message is for). The data from these chapters clearly demonstrate that a variety of social cues (including eye contact) can induce memory benefits and deficits, and suggest that all of these social cues communicate who should pay attention.
Although all of these cues can produce memory benefits and deficits, they are not necessarily of equal importance or utility. Indeed, the findings from Chapters 2 and 6 suggest that there are important differences between these social cues, and that these differences could affect one's ability to use and/or rely on them. Some cues, and eye contact seems to be one of them, may be relatively more important to pay attention to than others. For example, eye gaze may be more important than hand gestures because the eyes can convey attentional information as well as other information about a person (i.e., mental, emotional, and intentional states). Indeed, many studies suggest that we prefer to attend to the eyes over all other information, although it should be noted that these studies carry the general caveat that they were conducted with images of people as stimuli (Birmingham et al., 2008a, 2008b, 2009a, 2009b; Castelhano et al., 2007; Cheng et al., 2013; Coutrot & Guyader, 2014; Foulsham et al., 2010; Foulsham & Sanderson, 2013; Henderson et al., 2005; Kuhn et al., 2009; Laidlaw et al., 2012; Pelphrey et al., 2002; Smilek et al., 2006; Walker-Smith et al., 1977; Yarbus, 1967). However, the present studies cannot shed light on the relative importance of these different social cues, since their impact on memory was not assessed while manipulating the cues in conjunction with one another.

Future studies could explore the relative importance of these cues by using variants of the paradigm used in this thesis. For example, researchers could manipulate any number of relevant social cues (e.g., pointing, naming, head lifts, fluctuations in speech, eye contact) together in the same study to learn whether these different cues have similar effects on memory. Further, by offering two different cues at the same time (e.g., pointing at someone while making eye contact), researchers could determine how these cues operate in conjunction with one another to affect memory. The fact that we do prefer to look at the eyes, when it is socially acceptable to do so, suggests that individuals (or more specifically, females) might rely on the eyes more than other cues when they are visible (Birmingham et al., 2008a, 2008b, 2009a, 2009b; Castelhano et al., 2007; Cheng et al., 2013; Coutrot & Guyader, 2014; Foulsham et al., 2010; Foulsham & Sanderson, 2013; Kuhn et al., 2009; Smilek et al., 2006). However, the tendency to look at the eyes is curbed dramatically (to the point that it nearly disappears) in contexts where it is inappropriate to look at the eyes (Cary, 1978; Foulsham, Walker, & Kingstone, 2011; Freeth et al., 2013; Gallup et al., 2012; Gobel et al., 2015; Goffman, 1963; Gregory et al., 2015; Kuhn et al., 2016; Laidlaw et al., 2011; Laidlaw et al., 2016; Patterson et al., 2002; Wesselmann et al., 2012; Wu et al., 2013, 2014; Zuckerman et al., 1983). In these situations, it is likely that other social cues are relied on instead of the eyes. For example, primates will rely on different social signals (e.g., head position) to glean information about other primates when their eyes are not visible (Deaner & Platt, 2003; Scerif, Gomez, & Byrne, 2004; Tomasello et al., 1998). Given the challenge of monitoring the eyes of others in certain situations, an interesting question to explore is which social cues people prefer to rely on, and how this might change with context.
Using the study described below, a hierarchy of social attentional cues could be established by determining which cues people rely on when multiple cues are available or when the available social cues provide conflicting information. For example, to establish the relative importance of pointing and eye contact, the paradigm used in Chapters 4 and 5 could be adapted so that the investigator alternates providing cues to two participants while reading words aloud. On a given trial, the investigator could deliver a cue by itself (i.e., either pointing or making eye contact) or with another cue (i.e., pointing and making eye contact simultaneously). When the investigator provides two cues at the same time, these cues could either provide the same signal (i.e., pointing at a participant while making eye contact with them) or conflicting signals (i.e., pointing at a participant while making eye contact with their partner). Participants may rely on a certain cue more than the others because in everyday life it provides a stronger, more reliable signal to pay attention. If eye contact were the preferred signal to pay attention, then memory performance would improve when the investigator makes eye contact with a participant, regardless of whether the participant or their partner is pointed at. Likewise, memory performance may worsen when eye contact is made with the participant's partner, regardless of whether the participant or their partner is pointed at. On the other hand, if pointing is the preferred attentional cue, then memory performance should improve when a participant is pointed at, regardless of whether the investigator makes eye contact with the participant or their partner. Similarly, memory performance may worsen when the investigator points at the participant's partner, regardless of whether the investigator makes eye contact with the participant or their partner.

Another possibility is that participants will rely on both signals to direct their attention. If this were the case, then memory performance may improve the most when the investigator points and looks at a participant, and decline the most when the investigator points and looks at their partner, relative to when only one signal is available. However, when the signals conflict with one another, memory performance may vary based on whether both signals are relied on or whether one signal can override the other. For example, if both cues are relied on equally, then attempting to resolve the conflicting eye gaze and pointing signals might interfere with memory performance for both participants. However, if one cue overrides the other, performance might improve for both participants when conflicting pointing and eye contact cues are presented, because each participant relies on whichever cue indicates a word is for them and disregards the other cue signaling that the same word is for their partner. Of course, if one signal is always preferred when cues are in conflict (let us suppose that eye gaze is the preferred cue), then a participant's word memory would benefit from being looked at even though their partner is pointed at. In contrast, their partner's memory for this same word would decline, since they were not looked at by the investigator. In sum, this example study illustrates one of many possible experiments that could explore the relative importance of different social cues. Any number of social cues (e.g., head lifts, naming, accentuating speech) could replace and/or be added to those used in the example above; a minimal sketch of the design and its competing predictions follows.
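To make these competing predictions concrete, the toy Python sketch below enumerates the cells of a hypothetical 3 (gaze target) x 3 (pointing target) design and derives the pattern of benefits and costs expected if one cue simply overrides the other. The condition labels, function, and override rule are illustrative assumptions introduced for exposition, not procedures or results from the studies described above.

from itertools import product

TARGETS = ("participant", "partner", "nobody")

def predicted_memory(gaze, point, dominant="gaze"):
    """Toy prediction for one participant, relative to baseline.

    Returns +1 (benefit), -1 (cost), or 0 (baseline), assuming the
    dominant cue overrides the other and the non-dominant cue is
    consulted only when the dominant cue targets nobody.
    """
    primary, secondary = (gaze, point) if dominant == "gaze" else (point, gaze)
    if primary == "participant":
        return +1   # included: dedicate more attention to the word
    if primary == "partner":
        return -1   # excluded: dedicate less attention to the word
    # Dominant cue directed at nobody: fall back on the secondary cue.
    return {"participant": +1, "partner": -1, "nobody": 0}[secondary]

# Print the full prediction table under a gaze-dominant hierarchy.
for gaze, point in product(TARGETS, TARGETS):
    print(f"gaze: {gaze:11}  point: {point:11}  predicted: {predicted_memory(gaze, point):+d}")

Running the same table with dominant="point" flips the predictions for the conflicting cells, which is precisely the contrast the proposed experiment would evaluate; equal-reliance accounts would instead predict interference-like outcomes in those cells.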
Furthermore, it would be sensible to establish which cues people rely upon in contexts where it is appropriate to look at people and in contexts where it is not. Suppose that people generally preferred to monitor the eyes over other non-verbal and vocal cues, a tendency that has been reported in numerous studies (Birmingham et al., 2008a, 2008b, 2009a, 2009b; Castelhano et al., 2007; Cheng et al., 2013; Coutrot & Guyader, 2014; Foulsham et al., 2010; Foulsham & Sanderson, 2013; Kuhn et al., 2009; Smilek et al., 2006). In a context where the social norm is generally not to look at other people (e.g., in a waiting room), this tendency to use the eyes drops considerably (Cary, 1978; Foulsham, Walker, & Kingstone, 2011; Freeth et al., 2013; Gallup et al., 2012; Gobel et al., 2015; Goffman, 1963; Gregory et al., 2015; Kuhn et al., 2016; Laidlaw et al., 2011; Laidlaw et al., 2016; Patterson et al., 2002; Wesselmann et al., 2012; Wu et al., 2013, 2014; Zuckerman et al., 1983). Instead, individuals may rely on other non-verbal signals, such as pointing, or on vocal cues, to avoid being 'caught' looking at someone's eyes. This line of inquiry could be used to determine the circumstances under which a normally less preferred attentional cue is more likely to be used.

7.3.2 Exploring how cultural and individual differences affect gaze-related memory effects

In both the present studies and in previous work (Goodman, Phelan, & Johnson, 2012; Helminen, Pasanen, & Hietanen, 2016; Hood et al., 2003; Macrae et al., 2002; Mason et al., 2004; Otteson & Otteson, 1980; Smith et al., 2006; Vuilleumier et al., 2005), a participant's gender modified whether direct gaze had a positive or negative impact on memory. In the present work, eye contact improved verbal memory in female participants and hindered memory in males. However, it is not entirely clear why eye gaze had differential effects on memory in males and females. In fact, it has been a challenge for researchers to simply explain why direct gaze can have both positive and negative influences on cognitive processes more generally, let alone how one's gender modifies these gaze-related effects. Conty, George, and Hietanen (2016) proposed that direct gaze first captures one's attention and then triggers self-referential processing, i.e., a heightened processing of contextual information in relation to the self (Northoff et al., 2006). According to this account, direct gaze can have both positive and negative effects on performance, since the tendency to pay attention to the direct gaze of others either facilitates or interferes with performance on a task (e.g., direct gaze may facilitate processing a face, but hinder processing information that is not related to the face). However, once direct gaze has triggered self-referential processing, any information associated with it would be prioritized. Indeed, a large body of research suggests that memory is improved for information processed in relation to oneself (i.e., the self-referential memory effect; Kim, 2011; Macrae, Moran, Heatherton, Banfield, & Kelley, 2004; Northoff et al., 2006). In the studies presented in Chapter 2, it is possible that the extent to which the speaker's eye contact triggered self-referential processing differed between males and females. Females may have processed both the speaker's eye contact and the self-referential cue it provides simultaneously, or simply processed these two signals more efficiently and sequentially.
Thus, any interference8 in hearing what the speaker said, caused by simply processing the speaker's eye contact, was overridden by the self-referential processing benefit triggered by the speaker's eye contact. Men, on the other hand, may notice and process the speaker's eye contact, but not the self-referential cue it provides. As a result, the speaker's eye contact only interferes with processing what the speaker says. This could be due to interference caused by processing any self-referential cue in the context of the task (i.e., any self-referential cue could be distracting since it is irrelevant to the task of listening to everything the speaker says). However, this seems unlikely since, in contrast to when males are looked at (Chapter 2), males' memory performance improves for information spoken when they are pointed at (Chapter 6). Instead, the joint interpretation of these findings suggests that men can process self-referential cues, which can then facilitate processing of information associated with them, but not when those self-referential cues are conveyed through eye contact. This supports the idea that the memory deficit for words spoken with eye contact observed in male participants (Chapter 2) is unique to processing eye contact, rather than to processing self-referential cues more generally.

8 Note that it is entirely possible that women do not experience interference at all in response to eye contact, and may even experience facilitation due to eye contact at this stage.

Even though the present studies revealed that the benefit of eye gaze on memory is contingent on a participant's gender, this is merely one personal characteristic of many that have yet to be explored. There are a number of cultural and individual differences that influence how individuals pay attention to the eyes of others (Blais et al., 2008; Connellan et al., 2000; Dawson et al., 1998; Lutchmaya et al., 2002a, 2011; Senju & Johnson, 2009a; Wieser, Pauli, Alpers, & Mühlberger, 2009), and it is reasonable to think that these factors could influence social memory as well. For example, individuals from different cultures look at the eyes of others differently (Blais et al., 2008; Jack et al., 2007), and the tendency to look at other people also differs between Western and East Asian locations (Patterson et al., 2007). Perhaps most importantly, the tendency to make eye contact with other people is determined by cultural norms (Knapp, Hall, & Horgan, 2009). While it is unclear why the preference to look at people and their eyes differs across cultures (Caldara et al., 2010; Park & Huang, 2010), it is clear that the tendency to look at, and potentially learn from, the eyes of others is contingent on one's culture. Thus, one exciting line for future investigation would be to systematically examine if, and when, gaze-related memory effects generalize to participants of different ethnicities and cultures.

Likewise, there are a number of individual differences that could modify one's tendency to look at and, subsequently, learn from the eyes of others. Some individuals experience anxiety or feel uncomfortable while making eye contact with others. For example, individuals with autism spectrum disorder often have difficulty making or maintaining eye contact (e.g., Dawson et al., 1998), an effect that is even stronger and more consistent in interactive contexts, and which interferes with social interaction (Gregory & Antolin, 2018).
Experiencing discomfort or anxiety while making eye contact may modify whether an individual will experience memory effects in response to the eye gaze of a speaker. Supposing that gaze anxiety did modify gaze-related memory effects, it would be interesting to explore whether memory effects generated by other social cues are also modified by reported gaze anxiety. Differences in social status could also influence memory effects related to the eyes and other social cues. People preferentially attend to and follow the gaze of individuals who appear more dominant (Jones et al., 2010) or of higher social status (Dalmaso, Pavan, Castelli, & Galfano, 2012). Thus, an individual's willingness to look at and learn from someone may change depending on whether they perceive that person to be of higher or lower social status. It would be useful to determine whether a participant's social status influences the magnitude of gaze-related memory effects. Future studies could also manipulate a speaker's social status, in the same way a speaker's gender was manipulated in the present studies, to learn how this factor contributes to memory effects generated by a speaker's social cues.

Even though examining other individual and cultural differences was beyond the scope of the present investigation, some of these differences could contribute to variation in the gaze-related memory effects observed in the present work. As noted above, examining whether and, if so, how cultural or individual differences affect the influence of eye gaze on memory in interactive settings promises to be a fruitful line for future investigation. This work should yield a more complete understanding of whether gaze-related memory effects are sensitive to both individual and cultural differences, which, ultimately, will enhance the generalizability of the social memory effects observed in the present studies.

7.3.3 Exploring how eye contact affects different components of memory

While the present work has investigated how word recognition is affected by socially communicative signals (especially gaze direction), it has not explored or manipulated memory for other types of materials (e.g., faces) or different memory processes (e.g., retrieval). Declarative memory involves memories that we can consciously process and consider, and comprises a number of components, including working memory (Baddeley, 2003; Baddeley & Hitch, 1974), episodic memory (Tulving, 1972; Tulving, 2002; Tulving & Murray, 1985), and semantic memory (Collins & Quillian, 1969; Tulving, 1972). Three processes appear integral to the acquisition and recollection of such memories (Baddeley, 1992; Baddeley, Eysenck, & Anderson, 2009; Brébion, David, Bressan, & Pilowsky, 2007; Brown & Craik, 2000; Crowder, 1976): encoding (processing sensory information as a construct that can be remembered later), consolidation (stabilizing a memory trace after it has been acquired), and retrieval (accessing the information when needed). Encoding, the critical interface between working memory and longer-term consolidation, drives the beginning of short-term consolidation, which occurs within a few hours of initial processing. Neurobiologically, this short-term consolidation involves changes to existing synaptic connections and the creation of new connections in the hippocampal circuit (Frankland & Bontempi, 2005; Todd, Palombo, Levine, & Anderson, 2011).
Over the longer term, successful consolidation entails a broader, system-level reorganization of the brain regions that support memory, including, in particular, the temporal lobe.

Not all memories are retrieved with comparable fidelity and ease. Indeed, some researchers have argued that there are different degrees of retrieval success, often drawing a distinction between recall and recognition (MacLeod & Kampe, 1996). In recall, the information must be retrieved from memory (Lockhart, 2000). In recognition, the presentation of a familiar stimulus provides a cue that the information has been encountered before (an unfamiliar stimulus will not provide such a cue; Brown, Roediger, & McDaniel, 2014). A cue might be an object or a scene, that is, any stimulus that reminds a person of something related. While recognition and recall are often described as distinct forms of retrieval, both reflect the encoding and consolidation processes, but they vary in terms of the fidelity of, and confidence in, the memory. The present body of work has demonstrated that manipulating a speaker's eye contact and other social cues during encoding9 can influence recognition memory for semantic (word) information, but many outstanding questions remain. An important question related to memory retrieval is whether a speaker's eye contact affects recall as well as recognition. While studies from natural settings suggest that viewers generally recall more information when speakers periodically make eye contact than when they do not (Fullwood & Doherty-Sneddon, 2006; Otteson & Otteson, 1980; Sherwood, 1987), this notion remains to be tested in a rigorous paradigm that permits one to assess who is, and is not, receiving eye contact, and what information specifically is being delivered in those moments. The influence that eye contact has on recall could be tested using a modified version of the paradigm used in this thesis. For instance, participants could listen to a speaker who makes eye contact or not while speaking a word. Afterwards, instead of being given a recognition test, participants could freely recall as many words as they can. If eye contact affects recall in the same way it affects recognition, then one would expect participants (or at least female participants) to recall more words that were spoken with eye contact than without it.

9 It should be noted that manipulations during encoding may be manipulating short-term consolidation as well.

Another interesting question relevant to memory retrieval is whether making eye contact during retrieval will help or hinder this process. Some research suggests that direct gaze during the retrieval process can enhance memory for a face (Hood, Macrae, Cole-Davies, & Dias, 2003; Smith, Hood, & Hector, 2006). However, this question has yet to be tested in a paradigm that systematically manipulates a speaker's eye gaze during the retrieval process. This could be done by having an investigator read words aloud to a participant while alternating making eye contact as a word is spoken or not. Next, the participant would be asked to recall as many words as possible while the investigator notes the recalled words on a laptop. For half of the participants, the investigator would make eye contact as they recalled words, and for the other half the investigator's eyes would be directed down at the laptop.
Critically, if eye contact helps people retrieve information, then participants should recall more words when the investigator looks at them during retrieval than when the investigator does not. However, if eye contact hinders the retrieval process, then participants who made eye contact with the investigator during retrieval should recall fewer words than those who did not. Furthermore, including an eye contact manipulation during encoding would help clarify whether eye contact during retrieval has a greater impact (whether beneficial or not) on recalling information that was previously associated with eye contact during encoding.

In sum, there are a number of different aspects of memory that could be influenced by eye contact. The study examples provided above are meant to be illustrative, not exhaustive. There are many other variations of these studies that could rely on different stimuli (e.g., face stimuli instead of words, or visual instead of verbal information) to better understand the relationship between eye contact and all aspects of memory (e.g., semantic memory, episodic memory). Future studies could extend the present work by exploring the effects of eye contact on all of the different components of memory mentioned at the outset of this section. They could also, for example, examine whether the effects are specific to eye contact, or general to other visual (e.g., pointing) or non-visual (e.g., verbal) cues.

7.3.4 Implications for online learning environments

An applied aspect of the present work is that it provides insight into how eye contact can influence learning in in-person and online learning environments. Videos and video conferencing (e.g., Skype) are being used more frequently to create online learning environments. Video communication is often used as a proxy for face-to-face interaction, under the assumption that being able to see an instructor improves the learning experience. However, people using these environments often report losing a sense of social connectivity and feeling more alone than when they are physically present with an instructor (Armstrong-Stassen et al., 1998; Abbott et al., 1993). Consequently, many students report difficulties in maintaining attention in these online environments where communication takes place over video (Armstrong-Stassen et al., 1998). Perhaps most importantly, Varao-Sousa and Kingstone (2015) demonstrated that students actually remember less information from a video of a lecture than when the same material is presented in a classroom by an instructor who is physically present. The present finding that words spoken by a speaker presented over video are less memorable than words spoken by a live speaker converges with these results. The suggestion therein is that attention may wane in online learning environments because lecturers are unable to convey immediacy over video (e.g., eye contact, gestures). Consistent with this idea, previous work suggests that non-verbal behaviours have less impact on communication when they are expressed over video than when they are conveyed in person (Heath & Luff, 1992; Rutter, 1987; Shimada & Hiraki, 2006). Presumably, these non-verbal behaviours help to foster connection between students and their instructors. When students feel less connected with their instructor, they dedicate less attention to learning the course material.
The present studies thus shed light on whether making eye contact with a live speaker, or experiencing simulated eye contact, can improve learning and/or affect learning differently. When a live speaker presents information over camera or in person, information retention, at least for females, is better than when the speaker is pre-recorded. This suggests that viewers learn better in socially communicative settings, regardless of whether they are physically present in the room with a speaker. In a live setting, eye contact from a speaker presented in person or over camera can also improve memory for what the speaker says. This is consistent with the idea that a speaker's simulated eye contact serves to include an individual in the "learning environment." This benefit of simulated eye contact is additional to any benefit a person might experience just by being present in the room with a speaker who is not looking at them. As such, online classrooms could be crafted in such a way that the speaker simulates eye contact with female students regularly to boost their retention of the course material.

However, anytime a live speaker does not simulate eye contact over video (e.g., they are looking at someone or something else), participants' retention of what the speaker says may be reduced considerably. Being physically present with the speaker has the added benefit of protecting individuals from experiencing this memory cost every time the speaker looks away; instead, the cost is experienced only if the speaker looks at someone else in the room as they speak. This finding converges with a body of research suggesting that being in the presence of others can improve performance on a task (Aiello & Douthitt, 2001; Bond & Titus, 1983; Uziel, 2007; Zajonc, 1965). Thus, despite the improvements gained from a speaker's eye contact in any live setting, there is still a retention benefit to simply being physically present with a speaker.

An additional question to address in future studies is whether the belief that one can interact with someone who is actually pre-recorded (not live) affects how that person's eye gaze influences memory. To date, the little work that has been done on this issue has yielded mixed results (Fullwood & Doherty-Sneddon, 2006; Helminen et al., 2016; Sherwood, 1987). By presenting participants with a video recording of the speaker (as described in the studies presented in Chapter 3), researchers could readily manipulate participants' beliefs about whether the speaker is pre-recorded or live, to determine whether holding one belief or the other influences whether the speaker's eye contact facilitates learning lecture material.

7.4 Conclusion

The significance of the eyes in human relationships and communication has fascinated scientists for centuries. While the present findings only begin to scratch the surface of this broad area of investigation, this work does highlight the importance of conducting studies in contexts where eye contact can be communicative. Indeed, in the absence of a communicative context, eye gaze did not modulate memory. This conclusion has tremendous implications for social theories of human communication, memory, and cognition more broadly, as images of the eyes have been used to manipulate and measure social behaviour and the neural mechanisms of various cognitive processes across different populations (e.g., infants, children, adults, aged adults, patients) and paradigms, using both behavioural and neuroimaging measures.
Using real people in future studies will enable the assessment of the social effects of eye gaze in particular, and of social signals in general, thereby enhancing our understanding of the cognitive and neural bases of human communication and social interaction.

References

Adolphs, R., Gosselin, F., Buchanan, T. W., Tranel, D., Schyns, P., & Damasio, A. R. (2005). A mechanism for impaired fear recognition after amygdala damage. Nature, 433(7021), 68–72. https://doi.org/10.1038/nature03051

Aiello, J. R., & Douthitt, E. A. (2001). Social facilitation from Triplett to electronic performance monitoring. Group Dynamics: Theory, Research, and Practice, 5(3), 163–180. https://doi.org/10.1037/1089-2699.5.3.163

Althoff, R. R., & Cohen, N. J. (1999). Eye-movement-based memory effect: A reprocessing effect in face perception. Journal of Experimental Psychology: Learning, Memory, and Cognition, 25(4), 997–1010.

Anderson, N., Risko, E. F., & Kingstone, A. (2011). Exploiting human sensitivity to gaze for tracking the eyes. Behavior Research Methods, 43, 843–852.

Ando, S. (2004). Perception of gaze direction based on luminance ratio. Perception, 33, 1173–1184. https://doi.org/10.1068/p5297

Andrews, T. J., Davies-Thompson, J., Kingstone, A., & Young, A. W. (2010). Internal and external features of the face are represented holistically in face-selective regions of visual cortex. Journal of Neuroscience, 30, 3544–3552.

Argyle, M., & Cook, M. (1976). Gaze and mutual gaze. Cambridge, England: Cambridge University Press.

Argyle, M., & Dean, J. (1965). Eye-contact, distance and affiliation. Sociometry, 28(3), 289–304.

Argyle, M., Lefebvre, L., & Cook, M. (1974). The meaning of five patterns of gaze. European Journal of Social Psychology, 4(2), 125–136. https://doi.org/10.1002/ejsp.2420040202

Arons, B. (1992). A review of the cocktail party effect. Journal of the American Voice I/O Society, 12(7), 35–50.

Baddeley, A. (1992). Working memory. Science, 255(5044), 556–559.

Baddeley, A. (2003). Working memory: Looking back and looking forward. Nature Reviews Neuroscience, 4(10), 829–839. https://doi.org/10.1038/nrn1201

Baddeley, A. D., Eysenck, M., & Anderson, M. C. (2009). Memory. New York, NY: Psychology Press.

Baddeley, A. D., & Hitch, G. (1974). Working memory. Psychology of Learning and Motivation, 8, 47–89. https://doi.org/10.1016/S0079-7421(08)60452-1

Bailenson, J. N., Blascovich, J., Beall, A. C., & Loomis, J. M. (2001). Equilibrium revisited: Mutual gaze and personal space in virtual environments. Presence: Teleoperators and Virtual Environments, 10, 583–598.

Baltazar, M., Hazem, N., Vilarem, E., Beaucousin, V., Picq, J.-L., & Conty, L. (2014). Eye contact elicits bodily self-awareness in human adults. Cognition, 133(1), 120–127. https://doi.org/10.1016/j.cognition.2014.06.009

Baron-Cohen, S. (1995). Mindblindness: An essay on autism and theory of mind. Cambridge, MA: MIT Press.
Bayliss, A. P., di Pellegrino, G., & Tipper, S. P. (2005). Sex differences in eye gaze and symbolic cueing of attention. The Quarterly Journal of Experimental Psychology Section A, 58(4), 631–650. https://doi.org/10.1080/02724980443000124

Bindemann, M., Scheepers, C., & Burton, A. M. (2009). Viewpoint and center of gravity affect eye movements to human faces. Journal of Vision, 9(2), 1–16. https://doi.org/10.1167/9.2.7

Birmingham, E., Bischof, W. F., & Kingstone, A. (2008a). Gaze selection in complex social scenes. Visual Cognition, 16(2–3), 341–355. https://doi.org/10.1080/13506280701434532

Birmingham, E., Bischof, W. F., & Kingstone, A. (2008b). Social attention and real-world scenes: The roles of action, competition and social content. The Quarterly Journal of Experimental Psychology, 61(7), 986–998. https://doi.org/10.1080/17470210701410375

Birmingham, E., Bischof, W. F., & Kingstone, A. (2009a). Get real! Resolving the debate about equivalent social stimuli. Visual Cognition, 17(6–7), 904–924. https://doi.org/10.1080/13506280902758044

Birmingham, E., Bischof, W. F., & Kingstone, A. (2009b). Saliency does not account for fixations to eyes within social scenes. Vision Research, 49(24), 2992–3000. https://doi.org/10.1016/j.visres.2009.09.014

Blais, C., Jack, R. E., Scheepers, C., Fiset, D., & Caldara, R. (2008). Culture shapes how we look at faces. PLoS ONE, 3(8), e3022. https://doi.org/10.1371/journal.pone.0003022

Böckler, A., van der Wel, R. P., & Welsh, T. N. (2015). Eyes only? Perceiving eye contact is neither sufficient nor necessary for attentional capture by face direction. Acta Psychologica, 160, 134–140. https://doi.org/10.1016/j.actpsy.2015.07.009

Bond, C. F., & Titus, L. J. (1983). Social facilitation: A meta-analysis of 241 studies. Psychological Bulletin, 94(2), 265–292. https://doi.org/10.1037//0033-2909.94.2.265

Brébion, G., David, A. S., Bressan, R. A., & Pilowsky, L. S. (2007). Role of processing speed and depressed mood on encoding, storage, and retrieval memory functions in patients diagnosed with schizophrenia. Journal of the International Neuropsychological Society, 13(1), 99–107. https://doi.org/10.1017/S1355617707070014

Brignani, D., Guzzon, D., Marzi, C. A., & Miniussi, C. (2009). Attentional orienting induced by arrows and eye-gaze compared with an endogenous cue. Neuropsychologia, 47(2), 370–381. https://doi.org/10.1016/j.neuropsychologia.2008.09.011

Brown, P. C., Roediger, H. L., & McDaniel, M. A. (2014). Make it stick: The science of successful learning. Cambridge, MA: Harvard University Press.

Brown, S. C., & Craik, F. I. M. (2000). Encoding and retrieval of information. In E. Tulving & F. I. M. Craik (Eds.), The Oxford handbook of memory (pp. 93–108). New York, NY: Oxford University Press.

Caldara, R., Schyns, P., Mayer, E., Smith, M. L., Gosselin, F., & Rossion, B. (2005). Does prosopagnosia take the eyes out of face representations? Evidence for a defect in representing diagnostic facial information following brain damage. Journal of Cognitive Neuroscience, 17(10), 1652–1666. https://doi.org/10.1162/089892905774597254

Campbell, R., Heywood, C. A., Cowey, A., Regard, M., & Landis, T. (1990). Sensitivity to eye gaze in prosopagnosic patients and monkeys with superior temporal sulcus ablation. Neuropsychologia, 28(11), 1123–1142. https://doi.org/10.1016/0028-3932(90)90050-X
Carlin, J. D., & Calder, A. J. (2013). The neural basis of eye gaze processing. Current Opinion in Neurobiology, 23(3), 450–455. https://doi.org/10.1016/j.conb.2012.11.014

Cary, M. S. (1978). The role of gaze in the initiation of conversation. Social Psychology, 41(3), 269–271.

Castelhano, M. S., Wieth, M., & Henderson, J. M. (2007). I see what you see: Eye movements in real-world scenes are affected by perceived direction of gaze. In Attention in cognitive systems: Theories and systems from an interdisciplinary viewpoint (pp. 251–262). Berlin, Heidelberg: Springer.

Cheng, J. T., Tracy, J. L., Foulsham, T., Kingstone, A., & Henrich, J. (2013). Two ways to the top: Evidence that dominance and prestige are distinct yet viable avenues to social rank and influence. Journal of Personality and Social Psychology, 104(1), 103–125. https://doi.org/10.1037/a0030398

MacLeod, C. M., & Kampe, K. E. (1996). Word frequency effects on recall, recognition, and word fragment completion tests. Journal of Experimental Psychology: Learning, Memory, and Cognition, 22(1), 132–142. https://doi.org/10.1037//0278-7393.22.1.132

Collins, A. M., & Quillian, M. R. (1969). Retrieval time from semantic memory. Journal of Verbal Learning and Verbal Behavior, 8(2), 240–247. https://doi.org/10.1016/S0022-5371(69)80069-1

Connellan, J., Baron-Cohen, S., Wheelwright, S., Batki, A., & Ahluwalia, J. (2000). Sex differences in human neonatal social perception. Infant Behaviour & Development, 23, 113–118. https://doi.org/10.1016/S0163-6383(00)00032-1

Conty, L., George, N., & Hietanen, J. K. (2016). Watching Eyes effects: When others meet the self. Consciousness and Cognition, 45. https://doi.org/10.1016/j.concog.2016.08.016

Conty, L., Gimmig, D., Belletier, C., George, N., & Huguet, P. (2010). The cost of being watched: Stroop interference increases under concomitant eye contact. Cognition, 115(1), 133–139. https://doi.org/10.1016/j.cognition.2009.12.005

Conty, L., N'Diaye, K., Tijus, C., & George, N. (2007). When eye creates the contact! ERP evidence for early dissociation between direct and averted gaze motion processing. Neuropsychologia, 45(13), 3024–3037. https://doi.org/10.1016/j.neuropsychologia.2007.05.017

Conty, L., Russo, M., Loehr, V., Hugueville, L., Barbu, S., Huguet, P., … George, N. (2010). The mere perception of eye contact increases arousal during a word-spelling task. Social Neuroscience, 5(2), 171–186. https://doi.org/10.1080/17470910903227507

Conty, L., Tijus, C., Hugueville, L., Coelho, E., & George, N. (2006). Searching for asymmetries in the detection of gaze contact versus averted gaze under different head views: A behavioural study. Spatial Vision, 19(6), 529–545. https://doi.org/10.1163/156856806779194026

Conway, A. R. A., Cowan, N., & Bunting, M. F. (2001). The cocktail party phenomenon revisited: The importance of working memory capacity. Psychonomic Bulletin & Review, 8(2), 331–335. https://doi.org/10.3758/BF03196169

Coutrot, A., & Guyader, N. (2014). How saliency, faces, and sound influence gaze in dynamic social scenes. Journal of Vision, 14(8), 1–17. https://doi.org/10.1167/14.8.5

Csibra, G., & Gergely, G. (2009). Natural pedagogy. Trends in Cognitive Sciences, 13(4), 148–153. https://doi.org/10.1016/j.tics.2009.01.005

Dalmaso, M., Pavan, G., Castelli, L., & Galfano, G. (2012). Social status gates social attention in humans. Biology Letters, 8(3), 450–452. https://doi.org/10.1098/rsbl.2011.0881

Daury, N. (2009). Gaze direction influences awareness in recognition memory for faces after intentional learning. Perceptual and Motor Skills, 109(1), 224–234. https://doi.org/10.2466/PMS.109.1.224-234
Daury, N. (2011). Influence of gaze direction on face recognition: A sensitive effect. Psychologica Belgica, 51(2), 139–153.

Deaner, R., & Platt, M. (2003). Reflexive social attention in monkeys and humans. Current Biology, 13, 1609–1613. https://doi.org/10.1016/j

Dodd, M. D., Weiss, N., McDonnell, G. P., Sarwal, A., & Kingstone, A. (2012). Gaze cues influence memory…but not for long. Acta Psychologica, 141(2), 270–275. https://doi.org/10.1016/j.actpsy.2012.06.003

Doi, H., & Ueda, K. (2007). Searching for a perceived stare in the crowd. Perception, 36(5), 773–780. https://doi.org/10.1068/p5614

Doi, H., Ueda, K., & Shinohara, K. (2009). Neural correlates of the stare-in-the-crowd effect. Neuropsychologia, 47(4), 1053–1060. https://doi.org/10.1016/j.neuropsychologia.2008.11.004

Donovan, W. L., & Leavitt, L. A. (1980). Physiologic correlates of direct and averted gaze. Biological Psychology, 10(3), 189–199. https://doi.org/10.1016/0301-0511(80)90014-9

Driver, J., Davis, G., Ricciardelli, P., Kidd, P., Maxwell, E., & Baron-Cohen, S. (1999). Gaze perception triggers reflexive visuospatial orienting. Visual Cognition, 6(5), 509–540. https://doi.org/10.1080/135062899394920

Duncan, S. (1972). Some signals and rules for taking speaking turns in conversations. Journal of Personality and Social Psychology, 23(2), 283–292.

Ellsworth, P., & Ross, L. (1975). Intimacy in response to direct gaze. Journal of Experimental Social Psychology, 11, 592–613.

Emery, N. J. (2000). The eyes have it: The neuroethology, function and evolution of social gaze. Neuroscience & Biobehavioral Reviews, 24(6), 581–604. https://doi.org/10.1016/S0149-7634(00)00025-7

Eysenck, M. W. (1982a). Anxiety and performance. In Attention and arousal (pp. 95–123). Berlin, Heidelberg: Springer. https://doi.org/10.1007/978-3-642-68390-9_6

Eysenck, M. W. (1982b). Theories of arousal and performance. In Attention and arousal (pp. 47–66). Berlin, Heidelberg: Springer. https://doi.org/10.1007/978-3-642-68390-9_4

Falck-Ytter, T., Carlström, C., & Johansson, M. (2014). Eye contact modulates cognitive processing differently in children with autism. Child Development, 1–11. https://doi.org/10.1111/cdev.12273

Farroni, T., Csibra, G., Simion, F., & Johnson, M. H. (2002). Eye contact detection in humans from birth. Proceedings of the National Academy of Sciences, 99(14), 9602–9605.

Foulsham, T., Cheng, J. T., Tracy, J. L., Henrich, J., & Kingstone, A. (2010). Gaze allocation in a dynamic situation: Effects of social status and speaking. Cognition, 117(3), 319–331. https://doi.org/10.1016/j.cognition.2010.09.003

Foulsham, T., & Sanderson, L. A. (2013). Look who's talking? Sound changes gaze behaviour in a dynamic social scene. Visual Cognition, 21(7), 922–944. https://doi.org/10.1080/13506285.2013.849785

Foulsham, T., Walker, E., & Kingstone, A. (2011). The where, what and when of gaze allocation in the lab and the natural environment. Vision Research, 51(17), 1920–1931. https://doi.org/10.1016/j.visres.2011.07.002

Frankland, P. W., & Bontempi, B. (2005). The organization of recent and remote memories. Nature Reviews Neuroscience, 6(2), 119–130. https://doi.org/10.1038/nrn1607

Freeth, M., Foulsham, T., & Kingstone, A. (2013). What affects social attention? Social presence, eye contact and autistic traits. PLoS ONE, 8(1), e53286. https://doi.org/10.1371/journal.pone.0053286
Friesen, C. K., & Kingstone, A. (1998). The eyes have it! Reflexive orienting is triggered by nonpredictive gaze. Psychonomic Bulletin & Review, 5(3), 490–495.

Friesen, C. K., Ristic, J., & Kingstone, A. (2004). Attentional effects of counterpredictive gaze and arrow cues. Journal of Experimental Psychology: Human Perception and Performance, 30(2), 319–329. https://doi.org/10.1037/0096-1523.30.2.319

Frischen, A., & Tipper, S. P. (2006). Long-term gaze cueing effects: Evidence for retrieval of prior states of attention from memory. Visual Cognition, 14(3), 351–364. https://doi.org/10.1080/13506280544000192

Fry, R., & Smith, G. F. (1975). The effects of feedback and eye contact on performance of a digit-coding task. The Journal of Social Psychology, 96, 145–146. https://doi.org/10.1080/00224545.1975.9923275

Fullwood, C., & Doherty-Sneddon, G. (2006). Effect of gazing at the camera during a video link on recall. Applied Ergonomics, 37(2), 167–175. https://doi.org/10.1016/j.apergo.2005.05.003

Galfano, G., Dalmaso, M., Marzoli, D., Pavan, G., Coricelli, C., & Castelli, L. (2012). Eye gaze cannot be ignored (but neither can arrows). The Quarterly Journal of Experimental Psychology, 65(10), 1895–1910. https://doi.org/10.1080/17470218.2012.663765

Gallup, A. C., Chong, A., & Couzin, I. D. (2012). The directional flow of visual information transfer between pedestrians. Biology Letters, 8, 520–522.

Gallup, A. C., Chong, A., Kacelnik, A., Krebs, J. R., & Couzin, I. D. (2014). The influence of emotional facial expressions on gaze-following in grouped and solitary pedestrians. Scientific Reports, 4, 5794. https://doi.org/10.1038/srep05794

Gallup, A. C., Hale, J. J., Sumpter, D. J. T., Garnier, S., Kacelnik, A., Krebs, J. R., & Couzin, I. D. (2012). Visual attention and the acquisition of information in human crowds. Proceedings of the National Academy of Sciences of the United States of America, 109(19), 7245–7250. https://doi.org/10.1073/pnas.1116141109

George, N., & Conty, L. (2008). Facing the gaze of others. Clinical Neurophysiology, 38(3), 197–207. https://doi.org/10.1016/j.neucli.2008.03.001

Gobel, M. S., Kim, H. S., & Richardson, D. C. (2015). The dual function of social gaze. Cognition, 136, 359–364. https://doi.org/10.1016/j.cognition.2014.11.040

Goffman, E. (1963). Behavior in public places: Notes on the social organization of gatherings. New York, NY: Free Press of Glencoe.

Goodman, L. R., Phelan, H. L., & Johnson, S. A. (2012). Sex differences for the recognition of direct versus averted gaze faces. Memory, 20(3), 199–209. https://doi.org/10.1080/09658211.2011.651089

Gosselin, F., & Schyns, P. G. (2001). Bubbles: A technique to reveal the use of information in recognition tasks. Vision Research, 41, 2261–2271. https://doi.org/10.1016/S0042-6989(01)00097-9

Gregory, N. J., & Antolin, J. V. (2018). Does social presence or the potential for interaction reduce social gaze in online social scenarios? Introducing the "live lab" paradigm. Quarterly Journal of Experimental Psychology, 1–13. https://doi.org/10.1177/1747021818772812

Gregory, N. J., López, B., Graham, G., Marshman, P., Bate, S., & Kargas, N. (2015). Reduced gaze following and attention to heads when viewing a "live" social scene. PLoS ONE, 10(4). https://doi.org/10.1371/journal.pone.0121792

Grossmann, T. (2017). The eyes as windows into other minds. Perspectives on Psychological Science, 12(1), 107–121. https://doi.org/10.1177/1745691616654457
Perspectives on Psychological Science, 12(1), 107–121. https://doi.org/10.1177/1745691616654457 Guerin, B. (1986). Mere presence effects in humans: A review. Journal of Experimental Social Psychology, 22, 38–77. https://doi.org/10.1016/0022-1031(86)90040-5 Guez, J., Saar-Ashkenazy, R., Mualem, L., Efrati, M., & Keha, E. (2015). Negative Emotional Arousal Impairs Associative Memory Performance for Emotionally Neutral Content in Healthy Participants. PLOS ONE, 10(7), e0132405. https://doi.org/10.1371/journal.pone.0132405 Gullberg, M., & Holmqvist, K. (2006). Visual Attention towards Gestures in Face-to-Face Interaction vs. on Screen Visual Attention towards Gestures in Face-to-Face Interaction vs. on Screen *. Pragmatics and Cognition. Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.16.2066&rep=rep1&type=pdf Hall, J. A. (1978). Gender effects in decoding nonverbal cues. Psychological Bulletin, 85(4), 845–857. https://doi.org/10.1037/0033-2909.85.4.845 Hawley, M. L., Litovsky, R. Y., & Culling, J. F. (2004). The benefit of binaural hearing in a cocktail party: Effect of location and type of interferer. The Journal of the Acoustical Society of America, 115(2), 833–843. https://doi.org/10.1121/1.1639908 Hayward, D. A., & Ristic, J. (2015). Exposing the cuing task: The case of gaze and arrow cues. Attention, Perception, & Psychophysics, 77(4). https://doi.org/10.3758/s13414-015-0877-6 Hayward, D. A., Voorhies, W., Morris, J. L., Capozzi, F., & Ristic, J. (2017). Staring reality in the face: A comparison of social attention across laboratory and real world measures suggests little common ground. Canadian Journal of Experimental Psychology, 71(3), 212–225. Retrieved from http://psycnet.apa.org/buy/2017-25743-001 Hazem, N., George, N., Baltazar, M., & Conty, L. (2017). I know you can see me: Social 159  attention influences bodily self-awareness. Biological Psychology, 124, 21–29. https://doi.org/10.1016/j.biopsycho.2017.01.007 Heath, C., & Luff, P. (1992). Media Space and Communicative Asymmetries: Preliminary Observations of Video-Mediated Interaction. Human-Computer Interaction, 7(3), 315–346. https://doi.org/10.1207/s15327051hci0703_3 Heath, C., & Luff, P. (1993). Disembodied Conduct: Interactional Asymmetries in Video-Mediated Communication Disembodied Conduct: interactional asymmetries in video mediated communication, 35–54. Retrieved from https://pdfs.semanticscholar.org/bb2b/742c5deaf8364ce3f863ac708f042d19e82c.pdf Helminen, T. M., Kaasinen, S. M., & Hietanen, J. K. (2011). Eye contact and arousal: the effects of stimulus duration. Biological Psychology, 88(1), 124–130. https://doi.org/10.1016/j.biopsycho.2011.07.002 Helminen, T. M., Pasanen, P., & Hietanen, J. K. (2016). Learning under your gaze: the mediating role of affective arousal between perceived direct gaze and memory performance. Psychological Research, 80(2), 159–171. https://doi.org/10.1007/s00426-015-0649-x Henderson, J. M., Falk, R., Minut, S., Dyer, F. C., & Mahadevan, S. (2001). Gaze control for face learning and recognition by humans and machines. In T. F. Shipley & P. J. Kellman (Eds.), From fragments to objects: Segmentation and Grouping in vision (pp. 463–481). Elsevier. Henderson, J. M., Williams, C. C., & Falk, R. J. (2005). Eye movements are functional during face learning. Memory & Cognition, 33(1), 98–106. https://doi.org/10.3758/BF03195300 Hietanen, J. K. (2016). Eye contact: to see and to be seen. International Journal of Psychology, 51, 167. 
Retrieved from https://insights.ovid.com/international-psychology/ijpsy/2016/07/001/eye-contact-seen/992/00011205 Hietanen, J. K., Leppänen, J. M., Peltola, M. J., Linna-Aho, K., & Ruuhiala, H. J. (2008). Seeing direct and averted gaze activates the approach-avoidance motivational brain systems. Neuropsychologia, 46(9), 2423–2430. https://doi.org/10.1016/j.neuropsychologia.2008.02.029 Hietanen, J. K., Nummenmaa, L., Nyman, M. J., Parkkola, R., & Hämäläinen, H. (2006). Automatic attention orienting by social and symbolic cues activates different neural networks: An fMRI study. NeuroImage, 33, 406–413. https://doi.org/10.1016/j.neuroimage.2006.06.048 Hood, B. M., Macrae, C. N., Cole-Davies, V., & Dias, M. (2003). Eye remember you: the effects 160  of gaze direction on face recognition in children and adults. Developmental Science, 6(1), 67–71. https://doi.org/10.1111/1467-7687.00256 Hooker, C. I., Paller, K. A., Gitelman, D. R., Parrish, T. B., Mesulam, M.-M. M.-M., & Reber, P. J. (2003). Brain networks for analyzing eye gaze. Brain Research Cognition Brain Research, 17(2), 406–418. https://doi.org/10.1016/S0926-6410(03)00143-5 Humphrey, K., & Underwood, G. (2010). The potency of people in pictures: evidence from sequences of eye fixations. Journal of Vision, 10(10), 19. https://doi.org/10.1167/10.10.19 Itier, R. J., Alain, C., Sedore, K., & McIntosh, A. R. (2007). Early face processing specificity: it’s in the eyes! Journal of Cognitive Neuroscience, 19, 1815–1826. https://doi.org/10.1162/jocn.2007.19.11.1815 Itier, R. J., & Batty, M. (2009). Neural bases of eye and gaze processing: The core of social cognition. Neuroscience & Biobehavioral Reviews, 33(6), 843–863. https://doi.org/10.1016/j.neubiorev.2009.02.004.Neural Itier, R. J., Villate, C., & Ryan, J. D. (2007). Eyes always attract attention but gaze orienting is task-dependent: evidence from eye movement monitoring. Neuropsychologia, 45(5), 1019–1028. https://doi.org/10.1016/j.neuropsychologia.2006.09.004 Jarick, M., & Kingstone, A. (2015). The duality of gaze: eyes extract and signal social information during sustained cooperative and competitive dyadic gaze. Frontiers in Psychology, 6, 1423. https://doi.org/10.3389/fpsyg.2015.01423 Jarick, M., Laidlaw, K. E. W., Nasiopoulos, E., & Kingstone, A. (2016). Eye contact affects attention more than arousal as revealed by prospective time estimation. Attention, Perception, & Psychophysics, 78(5). https://doi.org/10.3758/s13414-016-1085-8 Jones, B. C., DeBruine, L. M., Main, J. C., Little, A. C., Welling, L. L. M., Feinberg, D. R., & Tiddeman, B. P. (2010). Facial cues of dominance modulate the short-term gaze-cuing effect in human observers. Proceedings. Biological Sciences, 277(1681), 617–624. https://doi.org/10.1098/rspb.2009.1575 Kampe, K. K. W., Frith, C. D., & Frith, U. (2003). “ Hey John ”: Signals Conveying Communicative Intention toward the Self Activate Brain Regions Associated with “ Mentalizing ,” Regardless of Modality. The Journal of Neuroscience, 23(12), 5258–5263. Kelley, D. H., & Gorham, J. (1988). Effects of immediacy on recall of information. Communication Education, 37(3), 198–207. https://doi.org/10.1080/03634528809378719 Kim, H. (2012). A dual-subsystem model of the brain's default network: self-referential processing, memory retrieval processes, and autobiographical memory retrieval. 161  Neuroimage, 61(4), 966-977. Kingstone, A. (2009). Taking a real look at social attention. Current Opinion in Neurobiology, 19(1), 52–56. 
https://doi.org/10.1016/j.conb.2009.05.004 Kingstone, A., Smilek, D., & Eastwood, J. D. (2008). Cognitive Ethology: a new approach for studying human cognition. British Journal of Psychology, 99, 317–340. https://doi.org/10.1348/000712607X251243 Kingstone, A., Smilek, D., Ristic, J., Friesen, C. K., & Eastwood, J. D. (2003). Attention, Researchers! It Is Time to Take a Look at the Real World. Current Directions in Psychological Science, 12(5), 176–180. https://doi.org/10.1111/1467-8721.01255 Kleinke, C. L. (1986). Gaze and Eye Contact: A Research Review. Psychological Bulletin, 100(1), 78–100. https://doi.org/10.1037//0033-2909.100.1.78 Kleinke, C. L., Staneski, R. A., & Berger, D. E. (1975). Evaluation of an interviewer as a function of interviewer gaze, reinforcement of subject gaze, and interviewer attractiveness. Journal of Personality and Social Psychology, 31(1), 115–122. Retrieved from http://europepmc.org/abstract/med/1117400 Knapp, M. L., Judith, A. H., & Horgan, T. G. (2009). Nonverbal communication in human interaction. Kobayashi, H., & Kohshima, S. (1997). Unique morphology of the human eye. Nature, 387, 767–768. Kuhn, G., Benson, V., Fletcher-Watson, S., Kovshoff, H., McCormick, C. a, Kirkby, J., & Leekam, S. R. (2010). Eye movements affirm: automatic overt gaze and arrow cueing for typical adults and adults with autism spectrum disorder. Experimental Brain Research, 201(2), 155–165. https://doi.org/10.1007/s00221-009-2019-7 Kuhn, G., Caffaratti, H. a., Teszka, R., & Rensink, R. a. (2014). A psychologically-based taxonomy of misdirection. Frontiers in Psychology, 5, 1–14. https://doi.org/10.3389/fpsyg.2014.01392 Kuhn, G., & Kingstone, A. (2009). Look away! Eyes and arrows engage oculomotor responses automatically. Attention, Perception, & Psychophysics, 71(2), 314–327. https://doi.org/10.3758/APP Kuhn, G., & Martinez, L. M. (2012). Misdirection – Past, Present, and the Future. Frontiers in Human Neuroscience, 5, 1–7. https://doi.org/10.3389/fnhum.2011.00172 Kuhn, G., & Tatler, B. W. (2005). Magic and fixation: now you don’t see it, now you do. Perception, 34, 1155–1161. 162  Kuhn, G., Tatler, B. W., & Cole, G. G. (2009). You look where I look! Effect of gaze cues on overt and covert attention in misdirection. Visual Cognition, 17(6–7), 925–944. https://doi.org/10.1080/13506280902826775 Kuhn, G., Tatler, B. W., Findlay, J. M., & Cole, G. G. (2008). Misdirection in magic: Implications for the relationship between eye gaze and attention. Visual Cognition, 16(2–3), 391–405. https://doi.org/10.1080/13506280701479750 Kuhn, G., Teszka, R., Tenaw, N., & Kingstone, A. (2016). Don’t be fooled! Attentional responses to social cues in a face-to-face and video magic trick reveals greater top-down control for overt than covert attention. Cognition, 146, 136–142. https://doi.org/10.1016/j.cognition.2015.08.005 Laidlaw, K. E. W., Foulsham, T., Kuhn, G., Kingstone, A. (2011). Potential social interactions are important to social attention. Proceedings of the National Academy of Sciences of the United States of America, 108(14), 5548–5553. https://doi.org/10.1073/pnas.1017022108 Laidlaw, K. E. W., & Kingstone, A. (2017). Fixations to the eyes aids in facial encoding; covertly attending to the eyes does not. Acta Psychologica, 173, 55–65. https://doi.org/10.1016/J.ACTPSY.2016.11.009 Laidlaw, K. E. W., Risko, E. F., & Kingstone, A. (2012). A new look at social attention: orienting to the eyes is not (entirely) under volitional control. Journal of Experimental Psychology. 
Human Perception and Performance, 38(5), 1132–1143. https://doi.org/10.1037/a0027075 Laidlaw, K. E. W., Rothwell, A., & Kingstone, A. (2016). Camouflaged attention: covert attention is critical to social communication in natural settings. Evolution and Human Behavior, 37(6), 449–455. https://doi.org/10.1016/j.evolhumbehav.2016.04.004 Land, M. F., & Hayhoe, M. (2001). In what ways do eye movements contribute to everyday activities? Vision Research, 41(25–26), 3559–3565. https://doi.org/10.1016/S0042-6989(01)00102-X Land, M. F., & Mcleod, P. (2000). From eye movements to actions : how batsmen hit the ball. Nature Neuroscience, 3(12), 1340–1345. Langton, S. R. H., & Bruce, V. (1999). Reflexive Visual Orienting in Response to the Social Attention of Others. Visual Cognition, 6(5), 541–567. https://doi.org/10.1080/135062899394939 Levine, J. M., Resnick, L. B., & Higgins, E. T. (1993). Social foundations of cognition. Annual Review of Psychology, 44, 585–612. https://doi.org/10.1146/annurev.psych.44.1.585 163  Lockhart, R. S. (n.d.). Methods of memory research. In The Oxford handbook of memory (pp. 45–57). Retrieved from https://books.google.ca/books?hl=en&lr=&id=DOYJCAAAQBAJ&oi=fnd&pg=PT84&ots=PrNvbQof69&sig=nX7FHfvTB9ZiS3VMmBvqUWb2agw#v=onepage&q&f=false Luria, S. M., & Strauss, M. S. (1978). Comparison of Eye Movements over Faces in Photographic Positives and Negatives. Perception, 7(3), 349–358. https://doi.org/10.1068/p070349 Lutchmaya, S., Baron-Cohen, S., & Raggatt, P. (2002). Foetal testosterone and eye contact in 12-month-old human infants. Infant Behavior & Development, 25, 327–335. Retrieved from http://docs.autismresearchcentre.com/papers/2002_Lutch_eyecont.pdf Macdonald, P. A., & Macleod, C. M. (1998). The influence of attention at encoding on direct and indirect remembering. Acta Psychologia, 98, 291–310. Macrae, C. N., Hood, B. M., Milne, A. B., Rowe, A. C., & Mason, M. F. (2002). ARE YOU LOOKING AT ME ? Eye Gaze and Person Perception. Psychological Science, 13(5), 460–464. Macrae, C. N., Moran, J. M., Heatherton, T. F,, Banfield J. F., Kelley, W. M. (2004). Medial prefrontal activity predicts memory for self. Cerebral Cortex, 14(6), 647-54. https://doi.org/10.1093/cercor/bhh025 Mansfield, E. M., Farroni, T., & Johnson, M. H. (2003). Does gaze perception facilitate overt orienting ? Visual Cognition, 10(1), 7–15. https://doi.org/10.1080/13506280143000647 Mares, I., Smith, M. L., Johnson, M. H., & Senju, A. (2016). Direct gaze facilitates rapid orienting to faces: Evidence from express saccades and saccadic potentials. Biological Psychology, 121. https://doi.org/10.1016/j.biopsycho.2016.10.003 Marotta, A., Román-Caballero, R., & Lupiáñez, J. (2018). Arrows don’t look at you: Qualitatively different attentional mechanisms triggered by gaze and arrows. Psychonomic Bulletin & Review, Online, 1–6. https://doi.org/10.3758/s13423-018-1457-2 Marschner, L., Pannasch, S., Schulz, J., & Graupner, S. T. (2015). Social communication with virtual agents: The effects of body and gaze direction on attention and emotional responding in human observers. International Journal of Psychophysiology, 97(2). https://doi.org/10.1016/j.ijpsycho.2015.05.007 Mason, M. F., Hood, B. M., & Macrae, C. N. (2004). Look into my eyes: gaze direction and person memory. Memory, 12(5), 637–643. https://doi.org/10.1080/09658210344000152 Mather, M., & Sutherland, M. R. (2011). Arousal-Biased Competition in Perception and 164  Memory. Perspectives on Psychological Science, 6(2), 114–133. 
https://doi.org/10.1177/1745691611400234 McClure, E. B. (2000). A meta-analytic review of sex differences in facial expression processing and their development in infants, children, and adolescents. Psychological Bulletin, 126(3), 424–453. https://doi.org/10.1037/0033-2909.126.3.424 Mckelvie, S. J. (1976). The role of eyes and mouth in the memory of a face. The American Journal of Psychology, 89(2), 311–323. Mertens, I., Siegmund, H., & Grüsser, O. J. (1993). Gaze motor asymmetries in the perception of faces during a memory task. Neuropsychologia, 31(9), 989–998. Retrieved from http://www.sciencedirect.com/science/article/pii/002839329390154R Mojzisch, A., Schilbach, L., Helmert, J. R., Pannasch, S., Velichkovsky, B. M., & Vogeley, K. (2006). The effects of self-involvement on attention, arousal, and facial expression during social interaction with virtual others: a psychophysiological study. Social Neuroscience, 1(February 2015), 184–195. https://doi.org/10.1080/17470910600985621 Mulckhuyse, M., & Theeuwes, J. (2010). Unconscious attentional orienting to exogenous cues: A review of the literature. Acta Psychologica, 134(3), 299–309. https://doi.org/10.1016/j.actpsy.2010.03.002 Myllyneva, A., & Hietanen, J. K. (2015). There is more to eye contact than meets the eye. Cognition, 134, 100–109. https://doi.org/10.1016/j.cognition.2014.09.011 Myllyneva, A., & Hietanen, J. K. (2016). The dual nature of eye contact: To see and to be seen. Social Cognitive and Affective Neuroscience, 11(7). https://doi.org/10.1093/scan/nsv075 Nasiopoulos, E., Risko, E. F., Foulsham, T., & Kingstone, A. (2014). Wearable computing: Will it make people prosocial? British Journal of Psychology, 4, 1–8. https://doi.org/10.1111/bjop.12080 Nasiopoulos, E., Risko, E. F., & Kingstone, A. (2015). Social Attention, Social Presence, and the Dual Function of Gaze. In The Many Faces of Social Attention (pp. 129–155). Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-21368-2_5 Nemeth, D., Turcsik, A. B., Farkas, G., & Janacsek, K. (2013). Social Communication Impairs Working-Memory Performance. Applied Neuropsychology. Adult. https://doi.org/10.1080/09084282.2012.685134 Niederehe, G., & Duncan, S. (1974). On Signaling It’s Your Turn to Speak. Journal of Experimental Social Psychology, 10, 234–247. Northoff, G., Heinzel, A., De Greck, M., Bermpohl, F., Dobrowolny, H., & Panksepp, J. (2006). 165  Self-referential processing in our brain—a meta-analysis of imaging studies on the self. Neuroimage, 31(1), 440-457. Nuku, P., & Bekkering, H. (2008). Joint attention: Inferring what others perceive (and don’t perceive). Consciousness and Cognition, 17, 339–349. https://doi.org/10.1016/j.concog.2007.06.014 Olk, B., Tsankova, E., Petca, A. R., & Wilhelm, A. F. X. (2014). Measuring effects of voluntary attention: A comparison among predictive arrow, colour, and number cues. Quarterly Journal of Experimental Psychology, 67(10), 2025–2041. https://doi.org/10.1080/17470218.2014.898670 Otteson, J., & Otteson, C. (1980). Effect of teacher’s gaze on children’s story recall. Perceptual and Motor Skills, 35–42. Retrieved from http://www.amsciepub.com/doi/pdf/10.2466/pms.1980.50.1.35 Pageler, N. M., Menon, V., Merin, N. M., Eliez, S., Brown, W. E., & Reiss, A. L. (2003). Effect of head orientation on gaze processing in fusiform gyrus and superior temporal sulcus. NeuroImage, 20(1), 318–329. https://doi.org/10.1016/S1053-8119(03)00229-5 Palanica, A., & Itier, R. J. (2011). 
Searching for a perceived gaze direction using eye tracking. Journal of Vision, 11(2), 1–13. https://doi.org/10.1167/11.2.19.Introduction Patterson, M. L., Iizuka, Y., Tubbs, M. E., Ansel, J., Tsutsumi, M., & Anson, J. (2007). Passing encounters east and west: Comparing Japanese and American pedestrian interactions. Journal of Nonverbal Behaviour, 31(3), 155–166. Patterson, M. L., Webb, A., & Schwartz, W. (2002). Passing Encounters: Patterns of Recognition and Avoidance in Pedestrians. Basic and Applied Social Psychology, 24(1), 57–66. https://doi.org/10.1207/S15324834BASP2401 Pelphrey, K. A., Sasson, N. N. J., Reznick, J. S., Paul, G., Goldman, B. D., & Piven, J. (2002). Visual Scanning of Faces in Autism. Journal of Autism and Developmental Disorders, 32(4), 249–261. https://doi.org/10.1023/A:1016374617369 Perrett, D., & Emery, N. (1994). Understanding the intentions of others from visual signals: Neurophysiological evidence. Cahiers de Psychologie Cognitive/Current Psychology of Cognition, 13(5), 683–694. Retrieved from http://psycnet.apa.org/record/1995-24608-001 Pfeiffer, U. J., Schilbach, L., Jording, M., Timmermans, B., Bente, G., & Vogeley, K. (2012). Eyes on the mind: investigating the influence of gaze dynamics on the perception of others in real-time social interaction. Frontiers in Psychology, 3, 537. https://doi.org/10.3389/fpsyg.2012.00537 166  Pfeiffer, U. J., Vogeley, K., & Schilbach, L. (2013). From gaze cueing to dual eye-tracking: Novel approaches to investigate the neural correlates of gaze in social interaction. Neuroscience & Biobehavioral Reviews, 37(10), 2516–2528. https://doi.org/10.1016/j.neubiorev.2013.07.017 Phelps, F. G., Doherty-Sneddon, G., & Warnock, H. (2006). Helping children think: Gaze aversion and teaching. British Journal of Developmental Psychology, 24(3), 577–588. https://doi.org/10.1348/026151005X49872 Pollack, I., & Pickett, J. M. (1957). Cocktail Party Effect. The Journal of the Acoustical Society of America, 29(11), 1262–1262. https://doi.org/10.1121/1.1919140 Pönkänen, L. M., Alhoniemi, A., Leppänen, J. M., & Hietanen, J. K. (2011). Does it make a difference if I have an eye contact with you or with your picture? An ERP study. Social Cognitive and Affective Neuroscience, 6(4), 486–494. https://doi.org/10.1093/scan/nsq068 Pönkänen, L. M., Peltola, M. J., & Hietanen, J. K. (2011). The observer observed: frontal EEG asymmetry and autonomic responses differentiate between another person’s direct and averted gaze when the face is seen live. International Journal of Psychophysiology : Official Journal of the International Organization of Psychophysiology, 82(2), 180–187. https://doi.org/10.1016/j.ijpsycho.2011.08.006 Posner, M. I. (1980). Orienting of attention. Quarterly Journal of Experimental Psychology, 32(1), 3–25. https://doi.org/10.1080/00335558008248231 Przyrembel, M., Smallwood, J., Pauen, M., & Singer, T. (2012). Illuminating the dark matter of social neuroscience: Considering the problem of social interaction from philosophical, psychological, and neuroscientific perspectives. Frontiers in Human Neuroscience, 6, 190. https://doi.org/10.3389/fnhum.2012.00190 Rensink, R. a., & Kuhn, G. (2015). A framework for using magic to study the mind. Frontiers in Psychology, 5(February), 1–14. https://doi.org/10.3389/fpsyg.2014.01508 Risko, E. F., & Kingstone, A. (2011). Eyes wide shut: implied social presence, eye tracking and attention. Attention, Perception & Psychophysics, 73(2), 291–296. https://doi.org/10.3758/s13414-010-0042-1 Risko, E. 
F., & Kingstone, A. (2015). Attention in the wild: Visual attention in complex, dynamic and social environments. Cambridge Handbook of Applied Perception Research, 466–487. Risko, E. F., Laidlaw, K. E. W., Freeth, M., Foulsham, T., & Kingstone, A. (2012). Social attention with real versus reel stimuli: toward an empirical approach to concerns about 167  ecological validity. Frontiers in Human Neuroscience, 6(May), 143. https://doi.org/10.3389/fnhum.2012.00143 Risko, E. F., Richardson, D. C., & Kingstone, A. (2016). Breaking the Fourth Wall of Cognitive Science: Real-World Social Attention and the Dual Function of Gaze. Current Directions in Psychological Science, 25(1), 70–74. https://doi.org/10.1177/0963721415617806 Ristic, J., Friesen, C. K., & Kingstone, A. (2002). Are eyes special? It depends on how you look at it. Psychonomic Bulletin & Review, 9(3), 507–513. https://doi.org/10.3758/BF03196306 Ristic, J., & Kingstone, A. (2012). A new form of human spatial attention: Automated symbolic orienting. Visual Cognition, 20(3), 244–264. https://doi.org/10.1080/13506285.2012.658101 Ristic, J., Mottron, L., Friesen, C. K., Iarocci, G., Burack, J. a, & Kingstone, A. (2005). Eyes are special but not for everyone: the case of autism. Brain Research. Cognitive Brain Research, 24(3), 715–718. https://doi.org/10.1016/j.cogbrainres.2005.02.007 Ristic, J., Wright, A., & Kingstone, A. (2007). Attentional control and reflexive orienting to gaze and arrow cues. Psychonomic Bulletin & Review, 14(5), 964–969. https://doi.org/10.3758/BF03194129 Scerif, G., Gomez, J.-C., & Byrne, R. W. (2004). What do Diana monkeys know about the focus of attention of a conspecific? Animal Behaviour, 68(6), 1239–1247. https://doi.org/10.1016/j.anbehav.2004.01.011 Schilbach, L. (2010). A second-person approach to other minds. Nature Reviews. Neuroscience, 11(6), 449. https://doi.org/10.1038/nrn2805-c1 Schilbach, L. (2015). Eye to eye, face to face and brain to brain: Novel approaches to study the behavioral dynamics and neural mechanisms of social interactions. Current Opinion in Behavioral Sciences. https://doi.org/10.1016/j.cobeha.2015.03.006 Schilbach, L., Timmermans, B., Vasudevi, R., Costall, A., Bente, G., Schlicht, T., & Vogeley, K. (2013). Toward a second-person neuroscience. Behavioral and Brain Sciences, 1–77. Retrieved from http://journals.cambridge.org/abstract_S0140525X12000660 Schilbach, L., Wilms, M., Eickhoff, S. B., Romanzetti, S., Tepest, R., Bente, G., … Vogeley, K. (2010). Minds made for sharing: initiating joint attention recruits reward-related neurocircuitry. Journal of Cognitive Neuroscience, 22(12), 2702–2715. https://doi.org/10.1162/jocn.2009.21401 Schilbach, L., Wohlschlaeger, A. M., Kraemer, N. C., Newen, A., Shah, N. J., Fink, G. R., & Vogeley, K. (2006). Being with virtual others: Neural correlates of social interaction. 168  Neuropsychologia, 44(5), 718–730. https://doi.org/10.1016/j.neuropsychologia.2005.07.017 Schneier, F. R., Rodebaugh, T. L., Blanco, C., Lewin, H., & Liebowitz, M. R. (2011). Fear and avoidance of eye contact in social anxiety disorder. Comprehensive Psychiatry, 52(1), 81–87. https://doi.org/10.1016/j.comppsych.2010.04.006 Schrammel, F., Pannasch, S., Graupner, S. T., Mojzisch, A., & Velichkovsky, B. M. (2009). Virtual friend or threat? the effects of facial expression and gaze interaction on psychophysiological responses and emotional experience. Psychophysiology, 46, 922–931. https://doi.org/10.1111/j.1469-8986.2009.00831.x Schyns, P. G., Bonnar, L., & Gosselin, F. (2002). 
SHOW ME THE FEATURES! Understanding Recognition From the Use of Visual Information. Psychological Science, 13(5), 402–409. Senju, A., & Csibra, G. (2008). Gaze following in human infants depends on communicative signals. Current Biology : CB, 18(9), 668–671. https://doi.org/10.1016/j.cub.2008.03.059 Senju, A., & Hasegawa, T. (2005). Direct gaze captures visuospatial attention. Visual Cognition, 12(1), 127–144. https://doi.org/10.1080/13506280444000157 Senju, A., Hasegawa, T., & Tojo, Y. (2005). Does perceived direct gaze boost detection in adults and children with and without autism? The stare-in-the-crowd effect revisited. Visual Cognition, 12(8), 1474–1496. https://doi.org/10.1080/13506280444000797 Senju, A., & Johnson, M. H. (2009a). Atypical eye contact in autism: models, mechanisms and development. Neuroscience and Biobehavioral Reviews, 33(8), 1204–1214. https://doi.org/10.1016/j.neubiorev.2009.06.001 Senju, A., & Johnson, M. H. (2009b). The eye contact effect: mechanisms and development. Trends in Cognitive Sciences, 13(3), 127–134. https://doi.org/10.1016/j.tics.2008.11.009 Senju, A., Kikuchi, Y., Hasegawa, T., Tojo, Y., & Osanai, H. (2008). Is anyone looking at me? Direct gaze detection in children with and without autism. Brain and Cognition, 67(2), 127–139. https://doi.org/10.1016/j.bandc.2007.12.001 Shepherd, S. V. (2010). Following gaze: gaze-following behavior as a window into social cognition. Frontiers in Integrative Neuroscience, 4, 5. https://doi.org/10.3389/fnint.2010.00005 Sherwood, J. V. (1987). Facilitative effects of gaze upon learning. Perceptual and Motor Skills, 64, 1275–1278. Shimada, S., & Hiraki, K. (2006). Infant’s brain responses to live and televised action. NeuroImage, 32(2), 930–939. https://doi.org/10.1016/j.neuroimage.2006.03.044 169  Shimojo, S., Simion, C., Shimojo, E., & Scheier, C. (2003). Gaze bias both reflects and influences preference. Nature Neuroscience, 6(12), 1317–1322. https://doi.org/10.1038/nn1150 Shin, M.-J., Marrett, N., & Lambert, A. J. (2011). Visual orienting in response to attentional cues: Spatial correspondence is critical, conscious awareness is not. Visual Cognition, 19(October 2011), 730–761. https://doi.org/10.1080/13506285.2011.582053 Smilek, D., Birmingham, E., Cameron, D., Bischof, W. F., & Kingstone, A. (2006). Cognitive Ethology and exploring attention in real-world scenes. Brain Research, 1080, 101–119. https://doi.org/10.1016/j.brainres.2005.12.090 Smith, A. D., Hood, B. M., & Hector, K. (2006). Eye remember you two: gaze direction modulates face recognition in a developmental study. Developmental Science, 9(5), 465–472. https://doi.org/10.1111/j.1467-7687.2006.00513.x Tatler, B. W., & Kuhn, G. (2007). Don’t look now: The magic of misdirection. Eye Movements: A Window on Mind and Brain, 697–714. https://doi.org/10.1016/B978-008044980-7/50035-5 Teufel, C., Alexis, D. M., Clayton, N. S., & Davis, G. (2010). Mental-state attribution drives rapid, reflexive gaze following. Attention, Perception & Psychophysics, 72(3), 695–705. https://doi.org/10.3758/APP Teufel, C., Fletcher, P. C., & Davis, G. (2010). Seeing other minds: Attributed mental states influence perception. Trends in Cognitive Sciences, 14(8), 376–382. https://doi.org/10.1016/j.tics.2010.05.005 Thorndike, E. L., & Lorge, 1. (1944). The teacher’s word book of 30,000 words. New York, Teacher’s College, Columbia University. Tipper, C. M., Handy, T. C., Giesbrecht, B., & Kingstone, A. (2008). Brain Responses to Biological Relevance. 
Journal of Cognitive Neuroscience, 20(5), 879–891. https://doi.org/10.1162/jocn.2008.20510 Todd, R. M., Palombo, D. J., Levine, B., & Anderson, A. K. (2011). Genetic differences in emotionally enhanced memory. Neuropsychologia, 49(4), 734–744. https://doi.org/10.1016/J.NEUROPSYCHOLOGIA.2010.11.010 Tomasello, M., Call, J., & Hare, B. (1998). Five primate species follow the visual gaze of conspecifics. Animal Behaviour, 55, 1063–1069. Retrieved from http://evolutionaryanthropology.duke.edu/sites/evolutionaryanthropology.duke.edu/files/site-images/Tomasello et al_ 1998_ Five primate species follow the visual gaze of 170  conspecifics.pdf Tomasello, M., Carpenter, M., Call, J., Behna, T., & Moll, H. (2005). Understanding and sharing intentions: The origins of cultural cognition. BEHAVIORAL AND BRAIN SCIENCES, 28, 675–735. Retrieved from http://www.eva.mpg.de/documents/Cambridge/Tomasello_Understanding_BehBrainSci_2005_1555292.pdf Tulving, E. (1972). Episodic and semantic memory. Organization of Memory, 1, 381–403. Retrieved from http://alumni.media.mit.edu/~jorkin/generals/papers/Tulving_memory.pdf Tulving, E. (2002). Episodic Memory: From Mind to Brain. Annual Review of Psychology, 53(1), 1–25. https://doi.org/10.1146/annurev.psych.53.100901.135114 Tulving, E., & Murray, D. (1985). Elements of Episodic Memory. Canadian Psychology, 26(3), 235–238. Retrieved from https://insights.ovid.com/canadian-psychology-psychologie-canadienne/capsy/1985/07/000/elements-episodic-memory/7/00011346 Uziel, L. (2007). Individual differences in the social facilitation effect: A review and meta-analysis. Journal of Research in Personality , 41, 579–601. https://doi.org/10.1016/j.jrp.2006.06.008 Varao-Sousa, T. L., & Kingstone, A. (2015). Memory for Lectures: How Lecture Format Impacts the Learning Experience. PLOS ONE, 10(11), e0141587. https://doi.org/10.1371/journal.pone.0141587 Vertegaal, R., Slagter, R., Van Der Veer, G., & Nijholt, A. (2001). Eye gaze patterns in conversations: there is more to conversational agents than meets the eyes. In Proceedings of the SIGCHI conference on Human factors in computing systems (pp. 301–308). AMC. https://doi.org/10.1145/365024.365119 Vinette, C., Gosselin, F., & Schyns, P. (2004). Spatio-temporal dynamics of face recognition in a flash: itʼs in the eyes. Cognitive Science, 28(2), 289–301. https://doi.org/10.1016/j.cogsci.2004.01.002 von Grunau, M., & Anston, C. (1995). The detection of gaze direction : A stare-in-the-crowd effect. Perception, 24, 1297–1313. Vuilleumier, P. (2002). Perceived gaze direction in faces and spatial attention: a study in patients with parietal damage and unilateral neglect. Neuropsychologia, 40, 1013–1026. https://doi.org/10.1016/S0028-3932(01)00153-1 Vuilleumier, P., George, N., Lister, V., Armony, J., & Driver, J. (2005). Effects of perceived mutual gaze and gender on face processing and recognition memory. Visual Cognition, 171  12(1), 85–101. https://doi.org/10.1080/13506280444000120 Walker-Smith, G. J., Gale, A. G., & Findlay, J. M. (1977). Eye movement strategies involved in face perception. Perception, 6(3), 313–326. https://doi.org/10.1068/p060313n Wammes, J. D., & Smilek, D. (2017). Examining the Influence of Lecture Format on Degree of Mind Wandering. Journal of Applied Research in Memory and Cognition, 6(2), 174–184. https://doi.org/10.1016/J.JARMAC.2017.01.015 Wesselmann, E. D., Cardoso, F. D., Slater, S., & Williams, K. D. (2012). To Be Looked at as Though Air: Civil Attention Matters. Psychological Science, 23, 166–168. 
https://doi.org/10.1177/0956797611427921 Wiese, E., Wykowska, A., Zwickel, J., & Müller, H. J. (2012). I see what you mean: how attentional selection is shaped by ascribing intentions to others. PloS One, 7(9), e45391. https://doi.org/10.1371/journal.pone.0045391 Wieser, M. J., Pauli, P., Alpers, G. W., & Mühlberger, A. (2009). Is eye to eye contact really threatening and avoided in social anxiety?-An eye-tracking and psychophysiology study. Journal of Anxiety Disorders, 23, 93–103. https://doi.org/10.1016/j.janxdis.2008.04.004 Wilms, M., Schilbach, L., Pfeiffer, U., Bente, G., Fink, G. R., & Vogeley, K. (2010). It’s in your eyes--using gaze-contingent stimuli to create truly interactive paradigms for social cognitive and affective neuroscience. Social Cognitive and Affective Neuroscience, 5(1), 98–107. https://doi.org/10.1093/scan/nsq024 Wood, N., & Cowan, N. (1995). The cocktail party phenomenon revisited: How frequent are attention shifts to one’s name in an irrelevant auditory channel? Journal of Experimental Psychology: Learning, Memory, and Cognition, 21(1), 255–260. https://doi.org/10.1037/0278-7393.21.1.255 Wu, D. W.-L., Bischof, W. F., & Kingstone, A. (2013). Looking while eating: the importance of social context to social attention. Scientific Reports, 3, 2356. https://doi.org/10.1038/srep02356 Wu, D. W.-L., Bischof, W. F., & Kingstone, A. (2014). Natural gaze signaling in a social context. Evolution and Human Behavior, 35(3), 211–218. https://doi.org/10.1016/j.evolhumbehav.2014.01.005 Wykowska, A., Wiese, E., Prosser, A., & Müller, H. J. (2014). Beliefs about the minds of others influence how we process sensory information. PloS One, 9(4), e94339. https://doi.org/10.1371/journal.pone.0094339 Yarbus, A. L. (1967). Eye Movements During Perception of Complex Objects. In Eye 172  Movements and Vision (pp. 171–211). Springer, US. Yee, N., Bailenson, J. N., Urbanek, M., Chang, F., & Merget, D. (2007). The Unbearable Likeness of Being Digital: The Persistence of Nonverbal Social Norms in Online Virtual Environments. CyberPsychology & Behavior, 10(1), 115–121. https://doi.org/10.1089/cpb.2006.9984 Yoon, J. M. D., Johnson, M. H., & Csibra, G. (2008). Communication-induced memory biases in preverbal infants. Proceedings of the National Academy of Sciences of the United States of America, 105(36), 13690–13695. https://doi.org/10.1073/pnas.0804388105 Zajonc, R. B. (1965). Social Facilitation. Science, 149(3681), 269–274. Zuckerman, M., Miserandino, M., & Bernieri, F. (1983). Civil inattention exists—in elevators. Personality and Social Psychological Bulletin, 9, 578–586. Retrieved from http://psp.sagepub.com/content/9/4/578.short Zwickel, J., & Võ, M. L.-H. (2010). How the presence of persons biases eye movements. Psychonomic Bulletin & Review, 17(2), 257–262. 
Appendices

Appendix A: Word list

account, address, afternoon, amount, answer, arrow, attention, attitude, author, avenue, basket, battery, beauty, border, branch, building, campaign, capital, captain, castle, century, clothes, daughter, debate, department, dinner, direction, distance, education, election, engine, entrance, envelope, evening, factory, fashion, forest, foundation, friend, furniture, garden, glass, gravity, guardian, handle, harbor, history, holiday, industry, invention, invitation, island, journey, judge, justice, kettle, kingdom, kitchen, knock, ladder, language, laugh, leather, lesson, machine, market, meadow, merchant, message, minute, neighbor, nephew, ocean, office, orchard, package, painting, partner, peace, pebble, plate, pocket, porch, powder, quarrel, quarter, queen, record, resort, reward, river, sailor, school, shadow, shoulder, speech, station, steam, stream, summer, teacher, theatre, thread, ticket, traffic, travel, treasure, trousers, turnip, uncle, uniform, vacation, valley, victory, village, wagon, wheat, wheel, whisper, winter

Appendix B: A meta-analysis of all three experiments reported in Chapter 2.

A three-way mixed ANOVA was conducted on response time (RT), response accuracy, response sensitivity (d′) and response bias (beta), with investigator gaze (2 levels: with eye contact and without eye contact) as the within-participant factor, and experiment (3 levels: female investigator with brief glance, male investigator with brief glance, and female investigator with prolonged gaze) and participant gender (2 levels: male and female) as between-participant factors.

RT: The analysis of mean RTs revealed a main effect of experiment (F(2,246)=5.04, MSE=190074.97, p<0.01): participants responded faster in Experiment 2 than in Experiment 3 (t(166)=2.97, SEM=29, p<.005). There was also a main effect of participant gender (F(1,246)=15.16, MSE=190074.97, p<0.001), such that females (953 ms) were faster to respond than males (1104 ms). There was no main effect of investigator gaze (F(1,246)=0.71, MSE=21548.45, p=0.40), nor were there interactions between investigator gaze, participant gender and experiment (F(2,246)=0.16, MSE=21548.45, p=0.85), between investigator gaze and experiment (F(2,246)=0.89, MSE=21548.45, p=0.35), or between participant gender and experiment (F(2,246)=1.84, MSE=190074.97, p=0.16).

Percentage Correct: Analysis of the accuracy data (see Figure 2.3) revealed a marginal main effect of investigator gaze (F(1,246)=3.67, MSE=42.00, p=.06), such that participants recognized more words that the investigator said while making eye contact (75%) than while not (74%). There were no main effects of experiment (F(2,246)=0.29, MSE=391.47, p=.75) or participant gender (F(1,246)=1.27, MSE=391.47, p=.26). Critically, there was an interaction between investigator gaze and participant gender (F(1,246)=35.16, MSE=42.00, p<.001): female participants recognized more words spoken while the investigator made eye contact (78%) than while they did not (74%; t(125)=6.00, SEM=0.76, p<0.001), whereas male participants recognized fewer words read while the investigator made eye contact (73%) than while they did not (75%; t(125)=2.67, SEM=0.87, p<0.01). There were no interactions between investigator gaze, participant gender and experiment (F(2,246)=0.75, MSE=42.00, p=.48), between investigator gaze and experiment (F(2,246)=0.80, MSE=42.00, p=.45), or between participant gender and experiment (F(2,246)=0.55, MSE=391.47, p=.58).
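The d′ and beta values analyzed below are standard signal-detection measures derived from each participant's hit and false-alarm rates. As a point of reference, the following Python sketch shows one common way to compute them; the function name, the example counts, and the log-linear correction for extreme rates are illustrative assumptions, not the analysis code actually used in these studies.

```python
import math
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Return (d_prime, beta) for one participant in one condition.

    Adds 0.5 to every cell (a log-linear correction) so that hit or
    false-alarm rates of exactly 0 or 1 do not yield infinite z-scores;
    this correction is an illustrative choice, not the thesis's stated method.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa                          # sensitivity
    beta = math.exp((z_fa ** 2 - z_hit ** 2) / 2)   # likelihood-ratio bias
    return d_prime, beta

# Example: 38 hits / 10 misses on old words, 12 false alarms /
# 36 correct rejections on new words (made-up counts).
print(sdt_measures(38, 10, 12, 36))
```

Beta values above 1 indicate a conservative bias (a reluctance to call a word "old"), which is how the beta means reported below should be read.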
D’: The results mirror the accuracy data with the exception that, in contrast to the accuracy data, the male participants were only marginally less sensitive to words presented while the investigator made eye contact than when they did not (t(125)=1.94, SEM=0.03, p=0.05). Critically, the female participants were more sensitive to words presented while the investigator made eye contact than when they did not (t(125)=5.9, SEM=0.03, p<0.001).  Beta: Analysis of the beta values revealed no main effect of experiment (F(2,246)=0.03, MSE=12.38, p=0.97),  investigator gaze (F(1,246)=0.98, MSE=0.37, p=0.32) or participant gender (F(2,246)=2.30, MSE=12.38, p=0.13). There was however an interaction between investigator gaze and participant gender (F(1,246)=6.14, MSE=0.37, p<0.05), such that male participants were no more biased on words that were spoken while the investigator made eye contact (2.85) than when they did not (2.77; t(125)=1.0, SEM=0.08, p=0.31). However, female participants responded less conservatively on words read while the investigator made eye contact (2.24) than when they did not (2.43; t(125)=2.57, SEM=0.07, p<0.05). There were no interactions between investigator gaze, participant gender and experiment (F(2,246)=1.18, MSE=0.37, p=.31), investigator gaze and experiment (F(2,246)=1.02, MSE=0.37, p=.36), nor participant gender and experiment F(2,246)=0.05, MSE=12.38, p=.95).   176  Appendix C: A comparison of memory effects in response to a live investigator (Studies 1 and 2 in Chapter 2) and a videotaped investigator (Study 4 in Chapter 3)   A four-way mixed ANOVA was conducted on response time (RT), response accuracy, response sensitivity (d prime) and response bias (beta) with investigator gaze (2 levels: with eye contact and without eye contact) as the within participant factor and investigator presence (2 levels: live and videotaped), investigator presence (3 levels: male and female) and participant gender (2 levels: male and female) as between participant factors. RT. The analysis of mean RTs revealed that there was a main effect investigator presence (F(1,328)=12.19, MSE=199677.54, p<0.001), such that participants were faster to recognize words that were said by the in-person investigator (987 ms) than the videotaped investigator (1111 ms). There was also a main effect of investigator gender (F(1,328)=11.77, MSE=199677.54, p<0.001), such that participants were faster to recognize words that were said by the male investigator (991 ms) than the female investigator (1109 ms). There was a marginal interaction between investigator presence and investigator gender (F(1,328)=3.80, MSE=199677.54, p=0.05, such that when the investigator appeared over video, participants recognized words that were said by a male investigator (1018 ms) faster than a female investigator (1203 ms). However, when the investigator was in-person, participants recognized words that were said by a male investigator (965 ms) as fast as words said by a female investigator (965 ms). No other main effects or interactions were significant (all F’s<2.6).         Percentage Correct: Analysis of the accuracy data revealed a main effect investigator presence (F(1,328)=13.43, MSE=467.57, p<0.001), such that participants recognized more words said by an in-person investigator (75%) than a videotaped investigator (69%) and a marginal main effect of participant gender (F(1,328)=3.64, MSE=467.57, p=0.06) such that, female participants 177  (74%) recognized more words than male participants (71%). 
To interpret the interaction between investigator gaze, investigator presence and participant gender, a two-way mixed ANOVA with investigator gaze (2 levels: with eye contact and without eye contact) as a within-participant factor and participant gender (2 levels: male and female) as a between-participant factor was conducted separately for the in-person investigator and the videotaped investigator (the same analysis for the videotaped investigator is presented in section 3.3). When the investigator was in person, there was an interaction between investigator gaze and participant gender (F(1,166)=31.38, MSE=38.85, p<0.001), such that female participants recognized more words spoken while the investigator made eye contact (78%) than while they did not (74%; t(83)=4.91, SEM=0.87, p<0.001), whereas male participants recognized fewer words read while the investigator made eye contact (73%) than while they did not (76%; t(83)=3.19, SEM=1.04, p<0.005).

When the investigator was videotaped, there was a main effect of participant gender (F(1,166)=4.43, MSE=542.79, p<0.05), such that female participants were more accurate (72%) than male participants (66%). Critically, there was no main effect of investigator gaze (F(1,166)=0.80, MSE=50.04, p=0.37), nor an interaction between investigator gaze and participant gender (F(1,166)=1.54, MSE=50.04, p=0.69).

d′: The results mirror the accuracy data, with the exception that, in contrast to the accuracy data, there was no main effect of participant gender (F(1,328)=0.84, MSE=1.35, p=0.36).

Beta: Analysis of the beta values revealed a marginal main effect of participant gender (F(1,328)=3.33, MSE=11.40, p=0.07), such that female participants (2.2) responded less conservatively than male participants (2.7). There was also an interaction between investigator gaze, investigator presence and participant gender (F(1,328)=6.24, MSE=0.35, p<0.05). No other main effects or interactions were significant (all F's<2.52).

To interpret this interaction, the same two-way mixed ANOVA decomposition described above was conducted for the in-person and videotaped investigators. When the investigator was in person, there was an interaction between investigator gaze and participant gender (F(1,166)=7.21, MSE=0.40, p<0.01), such that female participants were less conservatively biased on words spoken while the investigator made eye contact (2.23) than while they did not (2.42; t(83)=2.11, SEM=0.09, p<0.05), whereas male participants were no more biased in their responses to words read while the investigator made eye contact (2.89) than while they did not (2.71; t(83)=1.72, SEM=0.10, p=0.10). There were no significant main effects of investigator gaze (F(1,166)=0.49, MSE=0.40, p=0.49) or participant gender (F(1,166)=1.57, MSE=12.188, p=0.22).

When the investigator was videotaped, there were no main effects of investigator gaze (F(1,166)=0.50, MSE=0.29, p=.48) or participant gender (F(1,166)=1.781, MSE=10.71, p=0.18), nor was there an interaction between investigator gaze and participant gender (F(1,166)=0.50, MSE=0.29, p=.48).
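Throughout these appendices, significant interactions are decomposed with separate ANOVAs and paired t-tests within each gender group. A minimal sketch of the t-test step, again assuming the hypothetical long-format table and hypothetical condition labels ("eye_contact", "no_eye_contact"):

```python
import pandas as pd
from scipy.stats import ttest_rel

df = pd.read_csv("recognition_scores.csv")  # pid, gender, gaze, acc
wide = (df.pivot_table(index=["pid", "gender"], columns="gaze", values="acc")
          .reset_index())

# Compare the two gaze conditions separately for each gender group,
# mirroring the simple-effects tests reported above.
for gender, grp in wide.groupby("gender"):
    res = ttest_rel(grp["eye_contact"], grp["no_eye_contact"])
    print(f"{gender}: t({len(grp) - 1}) = {res.statistic:.2f}, "
          f"p = {res.pvalue:.3f}")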
Appendix D: A comparison between the memory effects generated in response to investigator eye gaze (Study 1) and investigator pointing (Study 9) in male participants only.

A two-way mixed ANOVA was conducted on response time (RT), response accuracy, response sensitivity (d′) and response bias (beta), with investigator cue (2 levels: cue directed at the participant and no cue) as the within-participant factor and cue type (2 levels: eye gaze and pointing) as the between-participant factor.

RT: The analysis of mean RTs revealed no main effect of investigator cue (F(1,68)=0.01, MSE=27956.08, p=0.99), no main effect of cue type (F(1,68)=0.93, MSE=199844.57, p=0.34), and no interaction between investigator cue and cue type (F(1,68)=0.38, MSE=27956.08, p=0.34).

Percentage Correct: Analysis of the accuracy data revealed a marginal effect of cue type (F(1,68)=3.30, MSE=344.90, p=.08) and an interaction between investigator cue and cue type (F(1,68)=13.64, MSE=36.59, p<.001): when the investigator used eye gaze, participants recognized fewer words read while the investigator made eye contact with them (73%) than while they did not (76%; t(41)=2.37, SEM=1.33, p<0.05), whereas when the investigator pointed, the pattern reversed, with participants recognizing more words spoken while the investigator pointed at them (71%) than when no gesture was made (66%; t(27)=2.85, SEM=1.60, p<0.05).

d′: The results mirror the accuracy data, with the exception that there was a significant effect of cue type (F(1,68)=6.82, MSE=1.79, p<.05), such that participants were more sensitive overall to words presented when the investigator used eye gaze (2.0) than when the investigator pointed (1.4).

Beta: Analysis of the beta values revealed an interaction between investigator cue and cue type (F(1,68)=10.99, MSE=0.194, p<0.01): when the investigator used eye gaze, participants responded more conservatively on words read while the investigator made eye contact (2.98) than while they did not (2.7; t(41)=3.02, SEM=0.09, p<0.005), whereas participants responded no more conservatively when the investigator pointed at them (2.3) than when they did not (2.5; t(27)=1.81, SEM=0.13, p=0.09).

Appendix E: A summary of observed effect sizes (Cohen's d) in the percentage correct data for the critical t-tests in each study.

Cue                                          Participant gender   Participant vs. screen   Partner vs. screen
Eye gaze (Study 1)                           Female                0.31                     -
Eye gaze (Study 1)                           Male                 -0.24                     -
Eye gaze (Study 2)                           Female                0.27                     -
Eye gaze (Study 2)                           Male                 -0.22                     -
Eye gaze (Study 3)                           Female                0.34                     -
Eye gaze (Study 3)                           Male                 -0.02                     -
Eye gaze over pre-recorded video (Study 4)   Female                0.02                     -
Eye gaze over pre-recorded video (Study 4)   Male                  0.06                     -
Eye gaze (Study 5)                           Female                0.33                     0.27
Eye gaze over Skype (Study 6)                Female                0.32                     0.10
Pointing (Study 7)                           Female                0.38                     0.44
Naming (Study 8)                             Female                0.56                     0.31
Pointing (Study 9)                           Male                  0.31                     -
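The values above are Cohen's d effect sizes for paired comparisons. One common way to compute d for a within-participant contrast is to divide the mean condition difference by the standard deviation of the differences (sometimes written d_z); whether this or a pooled-SD variant was used in the thesis is not stated here, so the sketch below, with made-up accuracy scores, is illustrative only.

```python
import numpy as np

def cohens_d_paired(x, y):
    """Cohen's d for a within-participant contrast: mean difference
    divided by the standard deviation of the differences (d_z).
    This is one common convention, assumed here for illustration."""
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return diff.mean() / diff.std(ddof=1)

# Made-up percent-correct scores for six participants.
eye_contact = [78, 74, 81, 69, 77, 72]
no_contact  = [74, 73, 76, 70, 71, 70]
print(round(cohens_d_paired(eye_contact, no_contact), 2))
```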
