UBC Theses and Dissertations
Examining the deployment of overt and covert attention to social stimuli in naturalistic and laboratory… — Laidlaw, Kaitlin Elizabeth Wiggins, 2015


EXAMINING THE DEPLOYMENT OF OVERT AND COVERT ATTENTION TO SOCIAL STIMULI IN NATURALISTIC AND LABORATORY ENVIRONMENTS

by

KAITLIN ELIZABETH WIGGINS LAIDLAW

Hons. B.Sc., The University of Toronto, 2007
M.A., The University of British Columbia, 2010

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES (Psychology)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

November 2015

© Kaitlin Elizabeth Wiggins Laidlaw, 2015

Abstract

The study of social attention has in large part been constrained to studying how individuals look at images or videos of other people within highly controlled and isolated laboratory environments. The belief is that measuring responses to non-interactive images or videos of people can serve to inform and predict everyday social attentional behaviours. However, this implicit assumption has gone relatively untested. In order to better characterize how and why people pay attention to others, the present thesis explores the proposition that social attentional processes are generalizable across levels of realism and scale. In so doing, this thesis describes social attentional deployment based on whether it is oriented overtly (shifting attention with the eyes) or covertly (shifting attention without an eye movement), within both naturalistic and laboratory environments. Chapters 2 and 3 explore whether and how social attention is directed to nearby others within naturalistic environments, and identify major departures from conclusions that were generated using computer-based laboratory tasks. In particular, the results of the first two chapters suggest a weakened role for overt orienting and a strong reliance on covert mechanisms. Chapter 4 confirms that a covert bias to social stimuli can also be observed within the lab.
Chapter 5 moves away from initial selection of social stimuli within the environment to explore how attention is deployed to facial features once a person is already attended to, and demonstrates a non-volitional drive to overtly orient attention to the eyes. Finally, Chapter 6 asks whether overt and covert attentional selection of socially-relevant facial features have different behavioural effects, and reveals an important functional benefit of orienting attention overtly rather than covertly during face encoding for later recognition. Collectively, the results of this thesis support a generalized importance of attending to social stimuli and also extend upon previous work to demonstrate that the deployment of social attention is modulated by the level of selection required, as well as the degree of interaction afforded by the situation.

Preface

All work presented in this dissertation was conducted in the Brain and Attention Research Laboratory and surrounding public spaces at the University of British Columbia's Point Grey campus. All projects and associated methods were approved by the University of British Columbia's Research Ethics Board [Towards a More Natural Approach to Attention Research 1-200, certificate #H10-00527, & Research in Cognitive Ethology, #H04-80767].

A version of Chapter 2 has been published [Laidlaw, K.E.W., Foulsham, T., Kuhn, G., & Kingstone, A. (2011). Potential social interactions are important for social attention. Proceedings of the National Academy of Sciences of the United States of America, 108, 5548-5553]. I was lead investigator, and was primarily responsible for research design, collection and analysis of data, and manuscript composition. All co-authors were involved in early research design and manuscript edits, and T. Foulsham additionally helped collect and analyze data.
Some minor edits have been made in the current document to better focus analyses on the features of the task that are most relevant to the present document.

A version of Chapter 4 has been published [Laidlaw, K.E.W., Badiudeen, T.A., Zhu, M.J.H., & Kingstone, A. (2015). A fresh look at saccadic trajectories and task irrelevant stimuli: Social relevance matters. Vision Research, 11, 82-90]. I was lead investigator and was responsible for project design (with advice from A. Kingstone), data analysis and manuscript composition. T.A. Badiudeen and M.J.H. Zhu were involved in data collection, while all three co-authors contributed to manuscript edits. Some minor additions and edits have been made to improve the cohesiveness of this chapter with the rest of the document.

A version of Chapter 5 has been published [Laidlaw, K.E.W., Risko, E.F., & Kingstone, A. (2012). A new look at social attention: Orienting to the eyes is not (entirely) under volitional control. Journal of Experimental Psychology: Human Perception and Performance, 38, 1132-1143]. I was lead investigator, and was primarily responsible for designing the project, as well as analyzing the data and composing the manuscript. My co-authors were involved in an advisory capacity throughout design and data collection, and contributed to manuscript edits. The terms 'automatic' and 'reflexive' were used in the published version of this paper, but to stay consistent with the terminology used throughout this thesis, I instead refer to 'non-volitional' and 'automated' orienting. Some minor additions and edits have been made to improve the cohesiveness of this chapter with the rest of the document.

I was the lead investigator for the projects reported in Chapters 3 and 6, both of which have been submitted for publication. I was primarily responsible for design conception, data analysis, and manuscript composition. A. Kingstone acted in a supervisory capacity during project conception and manuscript editing. T.A. Badiudeen, J.
Devji, T. Luk, A. Rothwell, K. Zhang, & M.J.H. Zhu were involved in data collection and video coding.

Minor edits were made throughout the published works to constrain personal pronoun use to refer to the researchers of the present experiments, rather than to people or researchers in general.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
Acknowledgements
Dedication
Chapter 1: Introduction
1.1 Social attention to representations of people
1.2 Translating laboratory results to everyday social attention
1.3 Thesis overview
Chapter 2: Comparison of overt social attention to a real person or their representation
2.1 Introduction
2.2 Methods
2.3 Results
2.4 Discussion
Chapter 3: Evidence of covert social attention in a naturalistic environment
3.1 Introduction
3.2 Methods
3.3 Results
3.4 Discussion
Chapter 4: Covert attentional bias to distracting representations of social stimuli
4.1 Introduction
4.2 Experiment 1: Upright and inverted faces
4.3 Experiment 2: Scrambled faces
4.4 General discussion
Chapter 5: Social attention to features – Evidence for non-volitional orienting to the eyes, but not the mouth of face images
5.1 Introduction
5.2 Experiment 1: Upright faces
5.3 Experiment 2: Inverted faces
5.4 General discussion
Chapter 6: Exploration of the utility of non-volitional social attention to the eyes
6.1 Introduction
6.2 Experiment 1: Feature fixation avoidance effects on recognition performance
6.3 Experiment 2: Isolated covert attention to facial features and its influence on recognition
6.4 General discussion
Chapter 7: General discussion
7.1 How is overt and covert attention directed to social stimuli in real life?
7.2 How is volitional and non-volitional covert and overt visual attention deployed to representations of faces, and socially relevant facial features (e.g. eyes)?
7.3 What purpose might a social attentional bias serve?
7.4 What influences how social attention is directed across contexts?
7.5 Further implications
7.6 Future directions
References
Appendix

List of Tables

Table 2.1 Event Coding Criteria
Table 3.1 Coder reliability
Table 3.2 Pedestrian looking frequency across confederate eye position and action conditions
Table 5.1 Experiment 1: Upright faces. Mean per trial fixation number and dwell times
Table 5.2 Experiment 2: Inverted faces. Mean per trial fixation number and dwell times
Table 6.1 Average non-normalized and normalized fixation number and dwell times to eyes and mouth ROIs during encoding

List of Figures

Figure 1.1 Schematic overview of the thesis' chapters
Figure 2.1 Graphical set-up of waiting room
Figure 2.2 Video frames exemplifying each coding event
Figure 2.3 Mean overall duration of fixations to the confederate, head turns towards the confederate, and fixations on the condition's baseline object
Figure 3.1 Example camera angles from the Outside (A) and Inside (B) locations
Figure 3.2 Final hand positions for each condition
Figure 3.3 Frequency of pedestrian looks made to the confederate as a function of confederate hand actions
Figure 4.1 Procedure for Experiments 1 and 2
Figure 4.2 Saccadic trajectory deviations relative to no-distractor trials
Figure 4.3 Saccade trajectories as a function of RT for face and scrambled, non-face stimuli
Figure 5.1 Heat maps displaying the distribution of all participants' fixations over an amalgamated face for each of the three viewing conditions in Experiment 1
Figure 5.2 Non-normalized average fixation dwell time per trial to the to-be-avoided feature during the Don't Look block for upright and inverted faces
Figure 5.3 Normalized average fixation dwell time per trial into the to-be-avoided feature during the Don't Look block for upright and inverted faces
Figure 5.4 Normalized average fixation dwell time per trial into the to-be-avoided feature during the Don't Look block for early and late viewing periods
Figure 5.5 Heat maps displaying the distribution of all participants' fixations over an amalgamated face for each of the three viewing conditions in Experiment 2
Figure 6.1 Example face stimulus and heat maps displaying the distribution of participant fixations during the Encoding Phase for each of the three viewing conditions
Figure 6.2 Normalized fixation dwell times to the eye and mouth ROIs across different viewing instructions during the encoding phase of Experiment 1
Figure 6.3 Discrimination performance in the recognition phase for Experiment 1
Figure 6.4 Discrimination performance in the recognition phase for Experiment 2

Acknowledgements

Many thanks go to my supervisor, Alan Kingstone, for his unwavering support throughout my time at UBC. I feel exceedingly lucky to have been able to work with someone who loves what he does and ignites the same enthusiasm in his students. My time spent working on a PhD would not have been nearly as much fun without the members of the Brain and Attention Laboratory, who are as caring as they are hilarious: thank you. Thanks also to my collaborators for their contributions to these papers, and to my many research assistants, without whom I'd likely still be coding data.

Thank you to my husband, Sean, for always grounding me. To Alice, thank you for being the best and sweetest distraction. Finally, thank you to my mum, Val, for teaching me to see value in hard work.

The research reported in this dissertation was supported by fellowships from the Natural Sciences and Engineering Research Council of Canada, and the University of British Columbia. Additional support was provided through grants awarded to Alan Kingstone.

Dedication

This is for you, Dad.

Chapter 1: Introduction

Imagine you are making your way from your workplace to the dentist's office. You walk down the hall and step into an elevator with another employee from a different department: a stranger. Even though you stand mere inches from them, you avoid making more than a cursory glance in their direction, and then stare straight ahead.
Leaving the building, you navigate the busy streets, attending to oncoming pedestrians and occasionally even making assessments about their appearance and demeanor, all while you successfully avoid making them aware that they are the focus of your attention. As you enter the dentist's office, you catch the receptionist's gaze, quickly announce your arrival, and sit down in the waiting room. As you wait, you flip through a magazine, skimming the articles and focusing most on the pictures of other people. Another person enters, but you stay immersed in your magazine. Out of the corner of your eye, you see the hygienist enter the waiting room, and immediately look up when she calls your name.

This is just a small snapshot of a perfectly ordinary day. Yet the very fact that these events seem so ordinary and effortless makes them a topic of extraordinary interest to many psychologists. People are surrounded by a seemingly limitless amount of dynamic visual information, yet the brain is limited in that it cannot possibly process everything in a timely fashion. To deal with this challenge, one needs attention. It is often said that attention is what turns looking into seeing: it is the process by which relevant or salient visual information is selected for further processing. The process of attending requires both selection of and orienting to a visual stimulus. Attentional selection is traditionally considered to occur either volitionally or non-volitionally¹, with the former being directed by the observer, and the latter being at least partially driven by properties of the stimulus itself. Examples of salient stimulus features known to attract attention non-volitionally include transients such as the appearance of a new object (Yantis & Jonides, 1984), or the onset of motion (Abrams & Christ, 2003; Franconeri & Simons, 2003). Recent evidence suggests that the reward value or task relevance of a stimulus can also bias attentional selection (B. A.
Anderson, Laurent, & Yantis, 2013, 2011a, 2011b; Todd, Cunningham, Anderson, & Thompson, 2012). Attention has historically been compared to a spotlight (Posner, 1980) or zoom lens (Eriksen & Yeh, 1985) whose focus moves and changes size across the visual field. Focusing attention enhances the processing of the selected stimulus or features at the expense of competing representations (Desimone & Duncan, 1995; Gazzaley, Cooney, McEvoy, Knight, & D'Esposito, 2005; Polk, Drake, Jonides, Smith, & Smith, 2008; A. T. Smith, Singh, & Greenlee, 2000; Somers, Dale, Seiffert, & Tootell, 1999).

The orienting of one's attentional focus can occur in one of two ways: either overtly, using eye movements, or covertly, without a related eye movement (Posner, 1980). While eye movements and attention are often considered to move together, and evidence demonstrates a tight coupling between attentional and oculomotor systems (Engbert & Kliegl, 2003; Hafed & Clark, 2002; M. S. Peterson, Kramer, & Irwin, 2004), the two systems are dissociable, thereby allowing attention to be oriented covertly when necessary (Kustov & Robinson, 1996; Moore, Armstrong, & Fallah, 2003; Van der Stigchel & Theeuwes, 2007). Note that traditionally, to attend covertly does not necessitate that an observer have the intention to 'hide' their attentional focus by not looking. Instead, the term simply refers to the manner by which attention is oriented: without making a saccade to, or remaining fixated on, the same location.

¹ In the current document, the term non-volitional is used in reference to findings that demonstrate orienting that is not easily controlled by the participant's immediate goals or intentions, whereas volitional orienting is. Authors have used several terms to describe similar behaviours, including exogenous, automatic, bottom-up, and reflexive. These terms sometimes refer to distinct and more specific criteria than that described above (e.g. automaticity may require orienting to be highly efficient and also without awareness or intention; Bargh, 1994). To simplify terminology and to highlight the relevant dimension by which I wish to distinguish different orienting behaviours, I will use non-volitional and volitional throughout.

People make thousands of eye movements every day, fluidly shifting and focusing their attention around their environment to better process the relevant aspects of the world surrounding them. Without attention, a seemingly ordinary task like navigating a busy street might represent an insurmountable feat. In the tableau presented previously, it is noteworthy that attention was often described in relation to other people, whether it be to monitor passing pedestrians, look at images of other people in a magazine, or initiate an interaction with the receptionist. This is because the presence and actions of other people represent one of the richest and most prevalent sources of visual information that can be used in order to understand and adapt within a given environment. For decades, psychologists (e.g., Buswell, 1935; Yarbus, 1967) and anthropologists (e.g., Goffman, 1963, 1971) alike have noted the disproportional influence that other people have on an individual's own looking and attentional behaviour, but the systematic study of social attention – that is, attention directed towards other social stimuli², such as people, faces, or their features – is actually a fairly new subfield of visual attention research.
Tasked with understanding how, where, and why attention is directed when presented with social stimuli, researchers investigating social attention have largely adopted the methodologies used within the field of visual attention more generally. That is to say, the study of social attention has largely been limited to measuring responses to simplistic stimuli presented within a repetitive computer task. Over time, it has become popular to study social attention using representations of social stimuli as a stand-in for the real thing (Friesen & Kingstone, 1998; Langton & Bruce, 1999). The decision to test social attention within arguably very "asocial" lab settings is likely due in part to the desire to compare results to those already established using non-social stimuli, and in part to satisfy basic tenets of experimental control. That these studies take place in "asocial" settings that do not involve attending to real people should not suggest that there is no value in studying how social attention is directed to social representations (i.e. images or videos of other people), and indeed the present document also describes several laboratory tasks. However, understanding how social attention operates as a whole will require that researchers explore effects as they relate to both real and represented social stimuli.

² The term visual social attention encompasses a diverse behavioural set, which includes initial detection and deployment of attention to another person, joint attention, and gaze following, and may also play an important role in other socially-relevant behaviours such as mimicry and joint action (e.g., Charman et al., 1997; Sebanz, Knoblich, & Humphreys, 2008). The present thesis focuses on the initial selection and orienting of visual attention to social stimuli, and thus the term social attention is used to refer to this process throughout the document.
The choice to rely only on representations of social stimuli to study social attention may come at a cost. The central thesis of this document is based on the proposition that the social stimuli and situations used to investigate social attention are critical to the conclusions that one draws regarding this process. Just as I will argue that relying on representations of social stimuli will not provide an adequate scope of how social attention operates generally, it is not necessarily the case that all questions will be ideally addressed using tasks that involve real people. Thus, in an effort to explore how and why attention is directed to social stimuli, both real and represented, this document will aim to cover two broad topics. The first relates to how social attention operates when observing real people. The second expands upon previous research using representations of social stimuli to probe questions about attentional control and orienting that are currently poorly understood and which will later be relevant to forming a more comprehensive perspective of how social attention functions. To better understand the need to expand the study of social attention into more realistic contexts, however, it is prudent to first review the basics of what previous lab-based studies have revealed about how images and videos of other people attract attention. Next, this will be related to the relatively small number of current studies that have been conducted within more naturalistic contexts.

1.1 Social attention to representations of people

Overwhelmingly, studies have demonstrated that when looking at stimuli that contain representations of people, observers show a bias to attend to social over other salient non-social stimuli (see e.g., Birmingham, Bischof, & Kingstone, 2009a; Emery, 2000; Mundy & Newell, 2007; Risko, Laidlaw, Freeth, Foulsham, & Kingstone, 2012 for reviews of social cognition, including biases to attend to other people).
This bias to attend to people, and especially to their eyes, arises early: even small infants will preferentially gaze towards very simplistic face stimuli over scrambled versions of the same (Goren, Sarty, & Wu, 1975) and prefer to gaze at images of faces more if the eyes are visible (Farroni, Csibra, Simion, & Johnson, 2002; Farroni, Johnson, & Csibra, 2004). Developmental work has led some to suggest that humans possess brain architecture that contains information concerning the structural characteristics of other people (conspecifics), such as the configuration of the eyes above the nose/mouth (Morton & Johnson, 1991; see Simion, Valenza, Cassia, Turati, & Umiltà, 2002 for support of this view). Such a neural system could drive attentional allocation to the location of people-like stimuli in the environment.

To better understand how attention is oriented to people, researchers have primarily focused on whether the bias to attend to images of social stimuli is volitional or non-volitional in nature. The importance of this question relates to the idea that humans were 'built' to detect social stimuli: if attention is directed towards social stimuli even when doing so is inconsistent with current goals, it supports the view that this behaviour may be evolutionarily adaptive. Despite some ability to control attentional orienting towards social stimuli (Birmingham, Bischof, & Kingstone, 2008a), there is strong support for the view that representations of other people attract attention in an at least partially non-volitional manner – that is, a bias to orient to other people cannot be completely overridden by the present goals of the observer.
For example, Fletcher-Watson and colleagues (2008) have demonstrated that the attentional bias towards representations of people is detectable within the very first fixation, suggesting the existence of rapid person-detection processes that are consistent with what would be expected if social stimuli non-volitionally triggered attentional orienting. Using a standard attentional capture task, Devue and colleagues (2012) also found evidence supporting a non-volitional view of social attentional orienting. They demonstrated that an irrelevant face placed within an array of non-social objects (e.g. toys, clothes, food) will capture attention, as measured by fixations to the objects, more than the other distractors. Even when participants are tasked with another goal (i.e., to look at the uniquely coloured dot beside the distractor), a face in the display causes them to make more oculomotor errors than if other objects are presented along with the target. That faces capture attention in a bottom-up fashion has also been supported by other studies, in which faces both interfere with searching for other animate object categories (Langton, Law, Burton, & Schweinberger, 2008), and are less prone to visual extinction in patients with spatial neglect (Vuilleumier, 2000).

Interestingly, it appears that orienting towards representations of social stimuli is not contingent on known stimulus triggers, such as high contrast or motion transients. For example, Birmingham and colleagues (2008b) asked participants to look at photos of complex social scenes and found a strong bias to attend to social stimuli despite demonstrating that the people within the images were not the most visually salient objects in the scene (based on the saliency models of Itti & Koch, 2000). In short, fixations to people in images of social scenes are not driven by low-level saliency (Birmingham, Bischof, & Kingstone, 2009b).
Levy, Foulsham, and Kingstone (2012) also demonstrated that directing one's attention to the eyes cannot be fully accounted for by a central fixation bias (Foulsham & Underwood, 2009; Tatler, 2007), which, it could be argued, would drive attention to the eyes since they are roughly in the middle of the face. Taken together, it appears that the social relevance of a stimulus, not just its low-level features, contributes to attention being drawn non-volitionally towards representations of people and faces.

Another major, albeit implicit, conclusion from using representations of social stimuli is that attentional orienting is often accomplished using the eyes; that is, attention is directed to the images overtly, rather than covertly. For instance, it is noteworthy that nearly every example presented above reflects work that equates attentional focus with eye fixation location. While attention must precede a shift in eye position (Deubel & Schneider, 2003; Hoffman & Subramaniam, 1995), it is not required to remain at fixation. To be sure, attention is also biased towards social stimuli even when covert measures are used (Brassen, Gamer, Rose, & Büchel, 2010; Theeuwes & Van der Stigchel, 2006; Wojciulik, Kanwisher, & Driver, 1998), but to measure covert orienting, researchers often require that participants avoid making eye movements. To date, no studies using social representations have formally investigated whether social stimuli preferentially elicit covert or overt orienting responses, and some have gone so far as to suggest that although covert attention is functionally distinct from the oculomotor system, its use alone provides little benefit (Findlay, 2004; Henderson, 2003). Taken together, the literature demonstrates that unless told otherwise, participants will frequently shift their attention overtly.
Thus, the two major conclusions to come from studying representations of social stimuli are that 1) they attract attention in a partially non-volitional manner, and 2) attention is directed overtly, such that observers look to the social stimuli of interest. There are, of course, many questions that remain open to exploration when it comes to how social attention operates when presented with representations of social stimuli. For instance, though researchers have observed non-volitional biases to attend to social stimuli, it is unclear whether attentional orienting to social features, such as the eyes, might also be non-volitional. Further, it is worth considering whether the biases to attend overtly may be a product of how the task stimuli are presented rather than an expression of a general preferential behaviour. In an attentional capture task using face stimuli as distractors, for example, participants are required to make an eye movement on every trial, which may prime the oculomotor system to direct attention overtly rather than covertly. Erroneous saccades to an irrelevant face could therefore reflect a covert bias that is ultimately recorded as an overt bias due to the specifics of the experimental protocol. Broadly, these points relate to general questions of scale and context, which themselves may be major factors that could fundamentally alter how social attention is allocated once researchers move out of the lab and into real life. In the next section, I briefly explore this possibility.

1.2 Translating laboratory results to everyday social attention

As a complement to studying social attention within the lab, it is worth considering how social attention might translate to more naturalistic tasks not involving images or videos of other people, and exploring how social attention is directed to real people. A fundamental issue that must be addressed is one of generalizability.
Given the relatively consistent conclusions to emerge from laboratory tasks with social representations – mainly that attention is overtly and non-volitionally directed to images of people – is it valid to assume that social attentional behaviour is so robust that its deployment remains relatively unchanged across vastly different contexts? In other words, will these conclusions generated within the lab scale up to describe and predict how attention is directed to people in everyday situations? In relying on image- and video-based studies of social attention, researchers have formed an implicit assumption that these representations capture the fundamental qualities of real-life social stimuli. Based on these findings, one would anticipate that in real life, people would be unable to help but look at others nearby. This is the assumption, but in reality, there is little evidence to support the proposition that attentional behaviours generalize from representations of people to the real thing. In fact, it is only recently that researchers have acknowledged generalizability across lab and real life as a potential problem, and have challenged the field to question whether current methods for studying attention are appropriately suited to the study of a social phenomenon (Crundall & Underwood, 2008; Kingstone, Smilek, & Eastwood, 2008; Kingstone, Smilek, Ristic, Friesen, & Eastwood, 2003).

Though representations of social stimuli serve to remind the viewer of real people, and in certain circumstances, people will behave towards (though not necessarily attend to) an image or avatar of a person as they would to a real individual (Bailenson, Blascovich, Beall, & Loomis, 2001; Bateson, Callow, Holmes, Redmond Roche, & Nettle, 2013; Gillath, McCall, Shaver, & Blascovich, 2008; Yee, Bailenson, Urbanek, Chang, & Merget, 2007), they also differ in fundamental ways that could profoundly impact how attention is directed towards each stimulus type.
One way relates to the pre-selection of stimuli within a computer-based task. When presenting stimuli using a traditional, repetitive computer paradigm, researchers engage in some form of pre-selection on behalf of the participant. For example, in order to exert experimental control, the researcher may cut out features in the image (e.g., background objects, surrounding hair on the face, etc.) that they deem irrelevant to the task at hand (Pelphrey et al., 2002; Walker-Smith, Gale, & Findlay, 1977). Similarly, when presenting whole scenes, the original artist or photographer serves as an initial selective filter, composing the photo in a way that is not necessarily representative of how one might view a similar scene in real life. In some instances, this level of selection is desirable, for example when testing whether attention is directed to different features of a face, provided that attention is already directed to the face itself. At the same time, however, this may limit researchers' ability to learn about more everyday behavioural repertoires. For instance, though a central fixation bias may not account for attention being directed to the eyes of faces (Levy et al., 2012), it may nevertheless interact non-trivially with viewing behaviour when observing scenes containing social stimuli (e.g. images of people may be more likely to be presented centrally). Further, presenting the same stimulus or stimulus category over many trials could increase its value or relevance to the observer within a given task, which has been shown to drive non-volitional attentional orienting (B. A. Anderson et al., 2011b). In short, the study of the selection of social stimuli necessitates that the participant be faced with a choice as to what to fixate, but this process is necessarily constrained and manipulated when studied using a computer-based task.
Even if participants were unrestricted in their ability to select stimuli to attend to, a more pressing concern relates to whether people respond to representations of social stimuli in a manner that is comparable to how they respond to real people. Theories as to why behaviour may not translate from lab to real life are explored throughout this document, but briefly, they focus on the idea that in real life, social attention is not unidirectional. When an observer attends to another living person, they can also be observed themselves. The human eye has evolved to be not only a sensory organ, but also a communicative signalling device (Kobayashi & Kohshima, 1997, 2001a): unlike looking at someone's representation, looking at another person in real life can be interpreted and reacted to. In essence then, looking in real life is loaded with social gains and costs that could dramatically influence orienting behaviour, in particular whether attention is overtly or covertly directed. The consequence of this bi-directional experience for social orienting is poorly understood because it is almost always removed from laboratory tasks (though see Chapter 7 for more discussion on this point).

Although there has been convergence on major issues surrounding social attention as it relates to representations of social stimuli, little, if anything, can be assumed to directly translate to how real people are selected or attended to in naturalistic environments. To re-emphasize, this does not suggest that lab-based tasks do not meaningfully contribute to the understanding of the processes underlying social attentional orienting; the convergence of evidence from work using images of people supports the view that what is being studied in these contexts is a real and replicable behaviour. Indeed, the present work explores how social attention operates across both naturalistic and laboratory environments.
However, the common methodologies of the lab constrain the investigative scope of how social attention operates in such a way that it may limit its predictive power. A more comprehensive study of social orienting across contexts, both within the lab and outside of it, will be beneficial in determining whether social attentional orienting is sufficiently described as being primarily overt and non-volitional. Using a broadly focused investigative approach that includes studying social attention both inside and outside of the lab could be helpful in identifying potential mediators of social attentional orienting behaviour. Doing so may re-define the conclusions drawn from studying social attention within a relatively limited scope, and enable researchers to better connect with the behaviour as it exists in everyday environments.

1.3 Thesis overview

This document was born out of the desire to test the validity of the assumption that social attentional orienting generalizes across levels of realism, scale, and task. Following the observations laid out concerning how differences across contexts might primarily influence the manner by which attention is oriented, the focus of this document is mainly on how covert and overt social attention operate across both computer-based and naturalistic environments. Figure 1.1 lays out a study space across covert/overt and computer-based/naturalistic dimensions, and serves as a guide for which dimension is focused on in each experimental chapter. In five chapters and eight studies, I purposefully report results that range both in methodology and scale in order to better catalogue how attention is directed to social stimuli across changes in task and representation. Through this, I hope to gain clearer insights into the flexibility of social attention, especially regarding what factors may be important in determining how people orient their attention to other people.
Figure 1.1 Schematic overview of the thesis’ chapters, as they relate to overt and covert attention in naturalistic and computer-based paradigms. A single chapter may relay information about multiple quadrants, as depicted by the bands for Chapters 2 and 6.

In so doing, this thesis will address four main questions:
1. How are overt and covert attention directed to social stimuli (i.e. people) in real life?
2. How are volitional and non-volitional overt and covert visual attention deployed to representations of faces, and socially relevant facial features (e.g. eyes)?
3. What purpose might a social attentional bias serve?
4. What influences how social attention is directed across contexts?

The first two chapters evaluate how attention is directed to real people within naturalistic environments. Chapter 2 asks whether people look (i.e. overtly attend) to others differently based on whether they are presented as stimuli within pre-recorded videos, or are physically present in the room with the participant. The findings suggest a marked decrease in attention directed to real people as opposed to videotaped others, and the reasons for this discrepancy are explored. Building on ideas formed from the first experimental chapter, Chapter 3 tests the idea that unlike in the lab, covert attention may play a much more critical role in naturalistic environments. Specifically, Chapter 3 proposes that people use covert attention to monitor nearby others when overt looking is not socially appropriate or desired. Results support this proposal and lay the foundation for a theory of how covert and overt attention are differentially relied upon in naturalistic settings, as compared to what has previously been observed using computer-based tasks.
An interesting question to emerge from this work is whether social stimuli are prioritized for covert inspection – in other words, whether covert attention is shifted only in response to relevant social stimuli (such as that presented in Chapter 3) – or if instead social stimuli elicit a more general covert attentional bias that is present regardless of task relevance. As a test of this, Chapter 4 measured saccade trajectory curvature in response to task-irrelevant images of faces. Despite these faces being stripped of their communicative, interpretative, and dynamic features by being presented as non-interactive images, the results of Chapter 4 suggest that they still captured attention covertly.

Together, the first three chapters of the present thesis put forth the idea that covert attention is not only useful in place of overt looking behaviour in naturalistic environments, but serves to select for social stimuli even when presented on a computer screen. However, social attention encompasses more than selecting social from non-social stimuli: it also serves to direct processing within a stimulus itself. Chapters 5 and 6 explore social attentional biases when viewing a face to determine both whether particular features attract attention more strongly, as well as why this might occur. Specifically, Chapter 5 asks whether the eyes – the most frequently looked-at facial feature – attract overt attention in a non-volitional or volitional manner. Results reveal a significant non-volitional bias to attend to the eyes of faces, implying that while people may be able to avoid overtly looking at people, once a face is selected, they cannot help but focus their attention overtly on particular features. Chapter 6 then tests the functionality of such a bias, proposing that fixating on, rather than merely covertly attending to, the eyes plays a functional role in face learning. Finally, a general discussion of the findings is presented in Chapter 7.
Returning to the questions posited above, the findings from all studies are considered as a whole in order to provide answers to each in turn. Implications and future directions based on the presented research are also discussed.

Chapter 2: Comparison of overt social attention to a real person or their representation

There is little debate that a fundamental drive behind the study of cognitive psychology is to better understand people's behaviour. What may be a more contentious statement, however, is that the use of repetitive and over-simplified computer-based tasks will sufficiently engage the same cognitive processes as are used in everyday environments. Without first grounding the study of cognition in real-life behaviour, researchers may find themselves studying effects that have little relevance to everyday action and perception. To ignore how behaviour manifests in naturalistic environments could be especially detrimental to understanding socially-mediated cognitive processes. For social cognitive researchers, it has been implicitly assumed that the conclusions generated from studying responses to two-dimensional representations of social stimuli will meaningfully translate to how people behave when they encounter a real person. For example, using simplistic, controlled computer-based tasks, several researchers have demonstrated that people frequently attend to social over non-social stimuli (Emery, 2000; Klein, Shepherd, & Platt, 2009), and show evidence for a social bias in orienting even within the first fixation on a scene (Kirchner & Thorpe, 2006). Can it be inferred that this same overt bias will be present in everyday encounters with strangers?

The first experimental chapter reports a direct test of this question through comparison of how people overtly attend to another person that is either seated in the same room as them, or presented on-screen as a pre-recorded video.
Though the visual stimuli presented to participants were closely matched, it is unclear whether the participants’ interpretation of the real or represented other will influence their attentional behaviour.

A version of Chapter 2 has been published: Laidlaw, K.E.W., Foulsham, T., Kuhn, G., & Kingstone, A. (2011). Potential social interactions are important for social attention. Proceedings of the National Academy of Sciences of the United States of America, 108, 5548-5553.

2.1 Introduction

Interest in understanding how human visual attention is influenced by social stimuli has grown substantially in recent years (Birmingham et al., 2008a, 2008b, 2009b; Smilek, Birmingham, Cameron, Bischof, & Kingstone, 2006; Zwickel & Võ, 2010). Everyday experience suggests that the social content of a scene, such as the people or faces in it, can 'grab' an individual's attention, leading the observer to focus in on these social stimuli, often at the expense of attending to other features in the environment. Empirical support for this bias towards social stimuli is abundant. When presented with a picture of a human face, observers will look most to the socially informative features of the face, showing a strong preference to look at the eyes (Baron-Cohen, Wheelwright, & Jolliffe, 1997; Emery, 2000; Henderson, Williams, & Falk, 2005; Itier, Villate, & Ryan, 2007; Pelphrey et al., 2002; Walker-Smith et al., 1977). Similarly, when instructed to examine a scene containing several individuals, participants tend to spend much of their time looking back and forth between the figures (Birmingham et al., 2008a, 2008b), and will preferentially look at social rather than non-social scenes if given a choice (Fletcher-Watson, Leekam, Benson, Frank, & Findlay, 2009).
While it may be the case that humans have a preference to attend to social stimuli during lab-based experiments, caution has been raised about the generalizability of lab-based results to the complex behaviour observed in everyday situations (Birmingham et al., 2008b; Kingstone, 2009). One common criticism of traditional lab-based research concerns the simplicity of the stimuli used to investigate social attention. For example, studies of face perception frequently have participants look at a series of schematic or photographed faces presented in isolation from any other stimuli (Henderson et al., 2005; Pelphrey et al., 2002; Walker-Smith et al., 1977). While the simplicity of the tasks enables a high degree of experimental control, excluding extraneous information from the stimuli also serves to pre-select what “should be” important for the participant. In these situations, participants often look at the eyes, arguably because they communicate social information (Lobmaier, Fischer, & Schwaninger, 2006; Mason, Tatkow, & Macrae, 2005). However, showing relatively simple stimuli to participants may drive them to attend to the most complex or salient component of an image, which in the case of faces, may often be the eyes. To properly establish that people preferentially select for social stimuli within their environment, Birmingham and colleagues (Birmingham et al., 2008a, 2008b, 2009b) reasoned that one needs to provide participants with complex and natural scenes, within which social stimuli are embedded. Their research demonstrates that even when social figures are embedded within a complex scene, they are selected more often than would be expected based on low-level features (e.g., salience, size) alone. Research using complex, dynamic stimuli has also shown that people, especially their heads and eyes, are preferentially attended.
When viewing a series of images that collectively tell a story, for example, participants will fixate an actor's face earlier and for longer than non-social control objects (Jasso & Triesch, 2008). In a study by Kuhn, Tatler and Cole (2009) in which participants watched a video of a magic trick, the proportion of fixations on the head and eyes was nearly 70%. Likewise, when participants were asked to watch videos of other students engaging in conversation, 77% of fixations were directed to the people in the clips (Foulsham, Cheng, Tracy, Henrich, & Kingstone, 2010).

Thus, the preference to attend to others appears to generalize to more complex stimuli. Even so, one cannot conclude that what is being measured in the lab corresponds to what occurs in everyday situations. There is more to everyday experience than heightened visual complexity. If the aim is to understand social attention, then one often overlooked factor must also be incorporated: the introduction of a social interaction, or at least the possibility of a social interaction, in which the participant is actively involved. With all computer-based studies, be they static or dynamic scene viewing tasks, it may be difficult for participants to interpret these situations as realistically social, because while persons within the scenes may be interacting amongst themselves (e.g. Birmingham et al., 2008a, 2008b; Klin, Jones, Schultz, Volkmar, & Cohen, 2002), the participant is nevertheless unable to join in on the observed interaction. Indeed, in some ways, a typical experimental task is exemplary of an anti-social situation, in which the participant is forced to remain an outsider because the people being observed are incapable of looking back and as such, there is no potential for a social interaction to emerge. Because of this, a participant may attend to these scenes in a very different manner than if they were actively involved within the scenario.
For example, while participants may fixate the eyes of a forward-staring stranger during a computer-based face viewing task (Henderson et al., 2005; Itier, Villate, et al., 2007), participants will also show avoidance strategies when a stranger stares at them in a public space (Ellsworth et al., 1972; Goffman, 1963). Although the stimulus (a stranger facing the participant) is superficially similar in both cases, when the stranger is physically present and is capable of interacting with the participant, the behaviour of the participant changes. In other words, the potential (or lack thereof) for a social interaction to emerge may cause both an increase in looking behaviour when one knows that the other person cannot return their gaze (i.e. during traditional computer-based tasks), and a decrease in looking behaviour when mutual gaze is possible (i.e. in real life, where this gaze may signal a desire to communicate).

A failure to create opportunities for social interactions within the context of an experiment may be a particular problem for researchers studying social abilities of certain special populations, such as those with Autism Spectrum Disorder (ASD). Although social impairments in everyday life are characteristic of those with ASD, experimental evidence for these impairments, especially those relating to social attention, has been notoriously mixed within the ASD literature (Nation & Penny, 2008). That many lab-based paradigms do not involve any possibility for social interaction may be partly responsible for this lab vs. life behavioural distinction. Of course, a social interaction need not be one in which the participant is engaged in direct communication with another. A social interaction may be interpreted in a more general sense, such that a situation in which two individuals can send and receive verbal or non-verbal information from one another may be considered, on some level, social.
Simply put, to truly study social attention as it operates in everyday social situations, one may need to first create a social situation and embed the participant within it.

The goal of the present study was to examine social attention in a situation that is free of several of the limitations of more traditional, computer-based tasks. We were interested in investigating how participants look at another individual when they are outside the restrictions of an unrealistic experimental paradigm. Further, we aimed to compare looking behaviour under conditions where the potential for social interaction was experimentally manipulated. To accomplish this, half of the participants completed the study when a confederate was physically present with them, while half completed the study while a videotape of the same confederate played on a computer screen nearby. The latter condition, which is devoid of any potential for social interaction between the participant and the confederate, more closely mirrors what previous studies have used in order to gauge the participants’ preferences towards social stimuli. In contrast, the former condition most closely approximates what one might experience in everyday life, in that there exists the opportunity for an interaction to emerge between individuals. In order to achieve as naturalistic an environment as possible to record looking behaviour, participants’ behaviour was recorded using a mobile eye tracker. The mobile eye tracker is a head-mounted camera that enables recording of both the participants' head orientation and the location of their eye fixations within the camera's field of view. Gaze behaviour was recorded while they waited for an experimenter to return with instructions for an unrelated real-world visual search task.
Thus, during the time in which the participants’ looking behaviour was recorded, the participants were unaware that the study had commenced, and were given no instructions beyond being asked to sit and wait for the experimenter’s return. In both conditions, a confederate sat quietly and completed a questionnaire; the only difference was whether or not the confederate was physically present in the room with the participant, and thus whether there was the possibility for a social interaction to take place.

2.2 Methods
2.2.1 Participants

Twenty-six UBC students (19 female) with a mean age of 21.92 years (SD = 4.15) took part in exchange for course credit or monetary remuneration. All provided informed written consent before participating and were fully debriefed upon study completion. The study was approved by UBC's Ethics Board.

2.2.2 Equipment and procedure

Eye gaze and line-of-sight video were recorded using an ASL Mobile eye-tracking device, whereby a head-mounted optics system records gaze direction while a small colour camera records the participant's line-of-sight (40˚ vertical x 50˚ horizontal) at 30 Hz, which can be used to infer head (and body) orientation. Before each use, and again at the end of the study, the eye tracker was calibrated by having participants fixate nine circular targets (roughly 1˚ in diameter) on one of the lab walls. This allowed gaze location to be mapped to positions within the field of view. The eye tracker has an instrumental resolution of 0.1˚ and our setup yielded gaze accuracy within 1˚.

Participants were informed that they would complete a real-world search task while their eye movements were being recorded. The task, which was unrelated to the present study, involved having the participants navigate through the building in search of a specific room. Participants were fitted and calibrated with the eye tracker, then led to a waiting room.
The experimenter then asked the participant to sit and wait while an instruction sheet was retrieved. The experimenter left the room and returned approximately two minutes later, gave the instructions to the participant, and waited for them to complete the navigation task. The present investigation was concerned with the participant’s behaviour during the waiting period. Thus, although participants were informed that the mobile eye tracker was on and recording for the duration of the study, including the wait period, all data pertinent to the study at hand were recorded before participants believed they had started the experiment proper.

Two groups of participants (n = 13 per group) completed the task. Half of the participants were run through the live confederate condition, in which a female confederate (age: 24 years) acting as another study participant sat in a chair approximately 50 inches to the left and 40 inches in front of the participant and quietly completed a questionnaire. Figure 2.1 graphically depicts the set-up of the waiting room. The confederate was present when the participant entered the room, and left once the experimenter returned (the experimenter informed the confederate that they were needed in another testing room; at no other point did the experimenter interact with the confederate). The confederate kept her attention on the questionnaire at all times, with the exception of five predetermined points. Embedded within the questionnaire were prompts for the confederate to look up, such that the timing of the confederate’s looks was roughly consistent across all trials and not elicited in response to the participant’s actions. Three times during the two-minute interval, the confederate looked directly at the participant with a neutral facial expression; once the confederate looked up as if in thought; and once the confederate looked just above the participant3.
All looks were brief but untimed. No attempts were made to engage the participant in any type of social interaction. Thus, the live confederate condition consisted of the participant waiting in the presence of another person, introducing the potential for social interaction into the situation.

3 While our intention was to examine the frequency of direct gaze on the confederate as a function of the two confederate conditions, the quality of the recorded video was too poor, and as we show, the actual looks at the live confederate were so infrequent that a reliable analysis was not viable.

Figure 2.1 Graphical set-up of waiting room. The participant (pictured) sat in a chair placed roughly equidistant from the chair in which the confederate sat during the live condition, and the computer on which the video of the confederate was played during the video condition. Due to the location of the participant relative to the confederate, the participant was required to make a head or body movement in order to orient towards the confederate.

The remaining 13 participants took part in the videotaped confederate condition. Each participant entered the waiting room to a video playing on a 20-inch CRT monitor, approximately 50 inches to the right and 40 inches in front of the participant. The video that each participant was shown was taken from a recording made during a live confederate session with a different participant. A hidden camera located in a storage box directly to the left of the participant recorded the confederate completing the questionnaire. The video was edited so that it began just before the participant walked past the camera and continued until the confederate was called out of the waiting room. Each video was then played for the next participant in the videotaped confederate group on the computer of a nearby workstation, and was started just prior to the participant entering the room.
It is important to note that the lab in which participants were tested works with a great deal of video data, and the waiting room also doubles as a coding room. As such, a video left momentarily unattended at a workstation should not have appeared out of the ordinary to the participant. Each video was shown once. In this way, any variations in the confederate’s appearance across testing days or sessions were controlled across conditions. Further, the use of multiple videos derived from the live confederate sessions ensured that any difference between live and videotaped confederate groups was not due to subtle, uncontrolled differences in the confederate’s behaviour. The visual and behavioural aspects of the stimulus were very similar in each case, such that the only critical difference across groups was whether the confederate was actually present. The participants then completed the search task, and returned to the lab. At this point, each participant completed another camera calibration4. Finally, all participants were debriefed and provided with a written description of the study's purpose and hypotheses.

4 At the end of the study, participants completed the Autism-Spectrum Quotient questionnaire (Baron-Cohen, Wheelwright, Skinner, Martin, & Clubley, 2001), which contains 50 statements designed to measure the degree to which adults report traits associated with the autistic spectrum. In the published work it was noted that these results were preliminary and conclusions could not be drawn with confidence. As such, I refrain from using it as evidence supporting any proposals within the present thesis.

2.3 Results
2.3.1 Data handling

The following analyses were concerned with participants' overt attention to, or looks towards, the confederate, and were completed on each participant’s eye-tracking video recorded while they waited for the experimenter in the waiting room. Each clip began when the participant sat down in the waiting room. The time for which the participant waited for the experimenter to return was coded (duration mean: 134.5 s; SD: 13.45 s). Although small variations in wait time occurred across participants, there was no significant difference in wait time across live versus videotaped confederate conditions, t(24) = .18, p = .86.

Videos were recorded and displayed at a rate of 30 frames per second. Videos were coded using a custom AppleScript video coder program developed by one of the authors (T.F.). The application works alongside QuickTime to allow coders to control the speed of the video and permits logging of pre-defined events on a frame-by-frame basis.

Videos were coded to determine how often and for how long participants in each condition looked at the confederate. Three categories of events were coded; these are detailed in Table 2.1 and exemplified in Figure 2.2. Head turns towards the confederate, such that the confederate was visible in the participants’ recorded video, were coded in one of two ways. A 'Head turn, fixation on confederate' event was coded when the participant turned towards and fixated the confederate (Figure 2.2a), and represents times in which the participant overtly attended to the confederate. Otherwise, when the participant turned towards, but did not fixate, the confederate, a ‘Head turn, no fixation on confederate’ event was coded (Figure 2.2b). These events could reasonably represent acts of both overt and covert social attention to the confederate, as the participant would have overtly turned towards the confederate, but would have only been able to visually attend to the confederate covertly using peripheral vision. To avoid including frames in which only a small portion of the confederate was visible (e.g. only her foot), it was additionally required that the confederate's torso or higher be in the frame for any event coding requiring a head turn towards the confederate.
The third coding category consisted of ‘Fixation on baseline object’ events, where the fixation cursor was overlaid anywhere on the baseline object (Figure 2.2c). In the videotaped confederate group, the empty chair in which the live confederate sat during the live confederate sessions was considered the baseline object. For the live confederate group, the blank screen upon which participants in the videotaped confederate group viewed the taped confederate videos was considered the baseline object. Through comparison of gaze behaviour directed towards these two baseline objects, it can be determined whether either location was looked at more often or for longer when no confederate was present.

Figure 2.2 Video frames exemplifying each coding event. A) The participant turned their head towards and fixated (shown by the white circle) the confederate; B) the participant turned their head towards the confederate but did not fixate the confederate; C) the participant fixated the baseline object (blank computer screen in this example).

Table 2.1 Event Coding Criteria

Event: 1. Head turn towards confederate
Coding Criteria: Confederate was in the line-of-sight (i.e. participant turned head to look towards confederate). Includes events 1a and 1b.

Event: 1a. Head turn, fixation on confederate
Coding Criteria: Fixation cursor on confederate.

Event: 1b. Head turn, no fixation on confederate
Coding Criteria: Confederate was in the line-of-sight video recording but fixation cursor was not on the confederate.

Event: 2. Fixation on baseline object
Coding Criteria: Fixation cursor on baseline object (i.e. participant turned head towards and fixated baseline object).

For each event, the start and end time was recorded, and the duration of the event was determined from these time points; durations were then summed to provide a measure of total event time.
To determine the number of events, any consecutive events of the same category that were separated by 100 ms or less of uncoded frames, or of frames in which the gaze cursor was temporarily missing, were collapsed into one event. This ensured that any frames in which the fixation cursor was missing due to blinks or saccades would not artificially inflate the measure of how often each event occurred. Longer stretches in which the gaze cursor was missing from the video (e.g. due to fixations far in the periphery) were not included in the analyses, except when calculating proportions, as detailed below (see Fixation Analysis).

2.3.2 Coder reliability

Videos were coded by one of the authors (K.E.W.L.) and one of two research assistants who were unfamiliar with the specific hypotheses of the study. To calculate inter-rater reliability, Pearson’s correlations were run on the fixation durations and counts from each coder pair for each event category and sub-category (‘Head turn, fixation on confederate’, ‘Head turn, no fixation on confederate’, and ‘Fixation on baseline object’). Correlations ranged between r = .86 and 1.00 (mean = .94; SD = .06). Krippendorff’s alpha, another measure of reliability, was also acceptable for all measures (mean = .81, SD = .13). Remaining analyses were performed on the average value generated from the two coders.

2.3.3 Fixation analysis

For all analyses, if Levene’s test of equality of variances indicated unequal variances across groups, then the relevant degrees of freedom were adjusted. Critically, it was first determined that fixations to the baseline object did not differ based on group, based on either overall fixation duration, t(24) = 1.08, p = .29, or total number of fixations, t(24) = 1.66, p = .11. Participants thus did not show a preference to look at either location at which a confederate was positioned for the study.
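The event-collapsing rule described above is simple enough to sketch in code. The following is a minimal Python illustration only; the actual coding software was a custom AppleScript application, and the tuple representation, function name, and example values here are assumptions made for the sketch:

```python
# Collapse consecutive same-category events separated by 100 ms or less
# (e.g. brief gaps caused by blinks or saccades) into a single event,
# so that such gaps do not inflate the event count.

def collapse_events(events, gap_ms=100):
    """events: list of (category, start_ms, end_ms) tuples (hypothetical format).
    Returns the collapsed event list."""
    collapsed = []
    for cat, start, end in sorted(events, key=lambda e: e[1]):
        if collapsed and collapsed[-1][0] == cat and start - collapsed[-1][2] <= gap_ms:
            prev_cat, prev_start, _ = collapsed[-1]
            collapsed[-1] = (prev_cat, prev_start, end)  # extend the previous event
        else:
            collapsed.append((cat, start, end))
    return collapsed

events = [
    ("fixation_confederate", 0, 400),
    ("fixation_confederate", 480, 900),    # 80 ms gap: merged (e.g. a blink)
    ("fixation_confederate", 1200, 1500),  # 300 ms gap: counted separately
]
print(collapse_events(events))
# -> [('fixation_confederate', 0, 900), ('fixation_confederate', 1200, 1500)]
```

Under this rule, only the longer gap produces a second event, so the event count reflects genuine re-orienting rather than momentary cursor dropout.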
Any differences in looking behaviour to that location based on condition can therefore be attributed to the difference imposed by a live or videotaped confederate. To study overt attention directed to the confederate, ‘Head turn, fixation on confederate’ events were compared. Participants in the videotaped confederate group fixated the confederate more times, t(12.39) = 3.24, p = .01, and spent more time overall fixating the confederate, t(12.21) = 2.97, p = .01, than did participants in the live confederate group. To examine potentially more covert orienting behaviours, ‘Head turn, no fixation on confederate’ events were compared. Participants in the videotaped confederate group also turned their heads towards (but did not fixate) the confederate significantly more often, t(12.66) = 5.66, p < .001, and for a longer overall duration, t(12.72) = 5.67, p < .001, than participants in the live confederate group. Participants in the videotaped confederate condition fixated the confederate significantly more often, t(12) = 2.93, p = .01, and for longer, t(12) = 2.98, p = .01, than they fixated the baseline object. In contrast, participants in the live confederate condition actually fixated the confederate less often than they fixated the baseline object, t(12) = 2.20, p = .05, with no reliable difference in fixation duration, t(12) = .61, p = .55. Figure 2.3 displays the mean overall duration for each coded event.

Figure 2.3 Mean overall duration of fixations to the confederate, head turns towards the confederate, and fixations on the condition’s baseline object (n = 13 for each confederate-type group). Error bars denote standard error. The overall pattern was similar for mean fixation number.

It is clear that participants in the videotaped condition turned towards and fixated the confederate to a greater extent than did participants in the live confederate condition.
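The fractional degrees of freedom reported above (e.g. t(12.39)) are characteristic of the Welch–Satterthwaite adjustment, the standard correction applied when Levene's test flags unequal group variances. The following is a hedged sketch under that assumption; the thesis does not name the exact correction, and the function name and example values are illustrative:

```python
# Welch-Satterthwaite approximation for the degrees of freedom of a
# two-sample t-test with unequal variances. s1_sq and s2_sq are the
# sample variances, n1 and n2 the group sizes. Textbook formula, offered
# as an illustration of how adjusted (fractional) df arise.

def welch_df(s1_sq, n1, s2_sq, n2):
    num = (s1_sq / n1 + s2_sq / n2) ** 2
    den = (s1_sq / n1) ** 2 / (n1 - 1) + (s2_sq / n2) ** 2 / (n2 - 1)
    return num / den

# With equal variances and equal n, the adjusted df reduces to the usual
# n1 + n2 - 2 (here 24, matching the unadjusted comparisons above);
# unequal variances pull the df below that value.
print(round(welch_df(1.0, 13, 1.0, 13), 2))
print(round(welch_df(4.0, 13, 1.0, 13), 2))
```

This is why the baseline comparisons report t(24) while several confederate comparisons report fractional df near 12.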
In this way, our findings imply that computer-based tasks of social attention may generally overestimate participants’ willingness to attend to others in realistic situations, as measured here by head turns and fixations. While computer-based tasks may overestimate the magnitude of social attention effects, it is nevertheless possible that these tasks may accurately capture behaviour after the other person’s image has been acquired within the participants’ line-of-sight (i.e. in our case, after the participant has turned their head towards the confederate). To examine how participants distributed their attention once they had turned their head towards the confederate, the proportion of fixations that were directed to the confederate, given that the confederate was in the participants’ line-of-sight, was calculated. Of the 13 participants in the live confederate condition, the videos for two participants did not have the confederate in a single frame, and thus they were excluded from the following analysis. When calculated as a proportion of the time the confederate was in the participants’ line-of-sight (including instances where the gaze cursor was missing), there were no significant differences between groups for either the relative frequency of fixations, t(22) = .17, p = .87, or the fixation duration, t(22) = .19, p = .85, on the confederate. Although participants in the videotaped confederate condition turned towards the confederate more frequently overall than did those in the live confederate condition, once the confederate was in their line-of-sight, the distribution of gaze behaviour to the confederate or elsewhere did not differ across conditions.
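The proportional measure described above can be made concrete with a small sketch. Assuming, hypothetically, that each coded video yields per-frame counts, the proportion is simply fixation frames over line-of-sight frames; the 30 fps frame rate is from the Methods, while the function and variable names are illustrative:

```python
# Proportion of line-of-sight time spent fixating the confederate,
# computed from frame counts of 30 fps video. Line-of-sight frames
# include frames where the gaze cursor was missing, as in the analysis.

FPS = 30  # videos were recorded and displayed at 30 frames per second

def fixation_proportion(fixation_frames, line_of_sight_frames):
    """Return (proportion, fixation duration in s), or None when the
    confederate never entered the line-of-sight (such participants
    were excluded from the proportional analysis)."""
    if line_of_sight_frames == 0:
        return None
    proportion = fixation_frames / line_of_sight_frames
    duration_s = fixation_frames / FPS
    return proportion, duration_s

print(fixation_proportion(45, 300))  # -> (0.15, 1.5)
print(fixation_proportion(0, 0))     # -> None
```

Normalizing by line-of-sight time is what allows the two groups to be compared fairly despite their very different head-turn frequencies.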
This implies that manipulating the potential for a social interaction by introducing a live or videotaped person primarily influenced participants' willingness to signal their attention to another, as indicated by how often participants turned their head towards the confederate, but it did not alter their general overt looking behaviour after they had acquired the other person within their line-of-sight.

2.4 Discussion

Although there is a wealth of information suggesting that humans show a bias to look towards social stimuli such as other people and their faces, the majority of this research has measured looking behaviour using computer-based tasks that are simplistic and do not involve the participant in any social situation (Birmingham et al., 2008a, 2008b, 2009b; Henderson et al., 2005; Itier, Villate, et al., 2007; Pelphrey et al., 2002). The goal of the current study was to investigate how one’s visual attention to other people within a naturalistic situation is influenced by the possibility of a social interaction emerging. To accomplish this, participant head and eye movements were recorded while they waited to begin an unrelated study, during which time a confederate, acting as another participant, quietly completed a questionnaire nearby. In the live confederate condition, the confederate was physically present in the room, while in the videotaped condition, the confederate was shown in a video on a nearby monitor. The results demonstrate that the willingness of participants to look at another individual is strongly influenced by whether or not that individual is physically present, and as such, whether the confederate is capable of looking back and engaging in a social interaction with the participant. Participants looked at the videotaped confederate significantly more often and for an overall longer duration than other participants did towards the same confederate who was physically in the room.
This was primarily due to the participants’ far greater willingness to turn their heads towards the videotaped confederate. The present results exemplify an important difference between the results normally observed using traditional, computer-based tasks to study social attention, and the natural behaviour that we argue is more typical in everyday situations.

By creating a natural situation in which to measure visual behaviour, our study addressed two common limitations of current social attention research, and in so doing, arguably provides a more appropriate measure of how social attention functions in everyday settings. Unlike other paradigms that rely heavily on simplistic, computer-based stimuli and tasks that may not elicit natural behaviour (Kingstone et al., 2003; Kingstone, 2009), our task measured behaviour while participants engaged in a familiar and realistic task: sitting in a waiting room. In an attempt to capture spontaneous participant behaviour, no task instructions were provided and all eye movements were recorded before participants believed the study had commenced. Further, both the stimuli and the context surrounding our task were created to be as realistic as possible. For example, in the videotaped confederate condition, although the confederate was presented via a pre-recorded video, this may not have seemed out of the ordinary given that the video was played at one of several workstations within a room designated as both a coding and waiting room. The second potential limitation, that social attention should be measured within a genuinely social setting, was also addressed through our manipulation of the physical presence of the confederate. While the videotaped confederate condition more closely resembles previous computer-based tasks measuring social attention, it was only in the live confederate condition that the opportunity for social interaction existed.
Interestingly, despite the differences in the complexity and realism of the scenarios, the results from our videotaped confederate condition nicely complement previous findings that demonstrate a bias to look at other people. Critically, however, this was not replicated in our live confederate condition. Participants did not show a bias to fixate the confederate, and in fact looked significantly less often at the confederate than at the baseline object (a blank computer monitor). Through the simple act of introducing the potential for a social interaction, visual behaviour changed dramatically.

The mere presence of another person has been shown to influence many measurable behaviours (Guerin, 1986; Zajonc, 1965), though to our knowledge this is the first time it has been shown that mere presence can influence how an individual looks at that person. Factors similar to those theorized to drive mere presence effects may have also affected participants' social attention. For example, the physical presence of another may have been sufficient to cause participants to interpret the scenario socially, which in turn may have activated particular social rules and norms (Guerin, 1986). Passively observing images or video of other people may be insufficient to activate these norms. Mutual eye contact may serve as a signal to initiate interaction (Cary, 1978b), which could have been undesirable to our participants. Thus, participants looked away from the live confederate, avoiding the potential for eye contact, while those in the videotaped confederate condition did not need to initiate avoidance behaviour, as it was clear that the confederate in the videotape could not look back.
Gaze avoidance behaviour has also been observed during situations in which direct communication is not initiated, but there nevertheless exists the potential for verbal or nonverbal interaction, for example when in an elevator with a stranger (Zuckerman, Miserandino, & Bernieri, 1983), or when passing near another individual while walking (Foulsham, Walker, & Kingstone, 2011). Thus, the dramatic difference in looking behaviours across conditions likely reflects two processes: an increased likelihood of looking and turning towards the videotaped confederate because the confederate cannot initiate an unwanted social interaction, as well as a decreased willingness to look at or turn towards the live confederate precisely because this possibility of an interaction exists.

A relevant future direction is to examine how social looking behaviour changes as a function of social ability, for instance whether those with relatively weaker social skills are less sensitive to the distinction between observing a real person or their image. Similarly, one’s ability to understand and conform to social norms might be especially important to understanding how the deployment of social attention might change across contexts. Investigation of how social competency influences social attention is an interesting avenue for future research and may be able to shed light on why many measures of social attention within the ASD literature produce inconsistent results. For example, an impairment in social interaction is considered a hallmark of the disorder (American Psychiatric Association, 2000), is often observed in face-to-face interactions early on (e.g. Dawson, Meltzoff, Osterling, Rinaldi, & Brown, 1998), and is the focus of many training programs throughout development (Mesibov, 1984; Ozonoff & Miller, 1995; Weiss & Harris, 2001), all of which suggest a real divergence in social attention for those with ASD in everyday situations.
However, when shown side-by-side social and non-social images, participants with ASD show a similar preference to look at the social stimuli as do their non-ASD counterparts (Fletcher-Watson et al., 2009). Only a subtle difference emerges, such that those with ASD show a reduced preference to initially fixate social information (see also Kuhn, Kourkoulou, & Leekam, 2010). Many other lab-based tasks have also failed to find robust differences in social attention and gaze following in those with or without ASD (Nation & Penny, 2008). One reason for the inconsistency in ASD social attention research may be that lab-based studies do not elicit the same type of social behaviour as seen in real life, because interpretation of the social meaning of the scenario is altered when the possibility of social interaction is removed.

In addition to further investigations concerning how social attention is related to social competency, the results of this study pose many interesting related questions. For example, there is evidence to suggest that eye contact with another is in part mediated by the display rules of one’s culture (Knapp, Hall, & Horgan, 2009). It would be of interest to determine whether results similar to those obtained here would generalize to participants with different cultural values. An additional line of inquiry directly related to the present study involves better understanding what factors present during the live confederate condition served to influence the looking behaviour of the participants. One possibility is that participants looked less at the confederate when she was physically present because the participants were aware that the confederate could see them looking. This predicts that a very different result might be obtained if participants could camouflage their looking behaviour, e.g., by wearing sunglasses.
Finally, while it has been shown that participants look less at live versus videotaped others, it is likely that variations of the present study, such as using two well-acquainted participants or introducing social status discrepancies (Foulsham et al., 2010) between participants, could have powerful mediating influences on observed behaviour.

It is important to note that our results do not imply that humans do not possess a bias in real life to attend to other people, in contrast to what is evident in the videotaped confederate condition. However, our live confederate condition provides strong evidence that this behaviour is malleable, and can be influenced by the opportunity for an interaction with the other individual. While more traditional, controlled computer-based tasks may be important in examining the intricacies of why this preference exists, more naturalistic tasks are crucial if researchers want to understand how social attention operates outside of the lab.

Chapter 3: Evidence of covert social attention in a naturalistic environment

The findings from Chapter 2 illustrate a clear distinction in how people overtly attend to others, depending on whether they are presented on screen (i.e. as a representation of a person) or in real life. Participants looked significantly less at a live person than at a video of the same confederate, and further, looks to the live person were less frequent than looks to the baseline object, which is consistent with participants inhibiting any overt signal of attentional focus directed toward the confederate. Notably, the most pronounced effects were observed in the measures of participants’ head turns towards the confederate, i.e. when they looked towards but did not fixate the other person.
Though participants’ fixations to the confederate differed dramatically across conditions (both in terms of duration and frequency), this was driven by participants’ willingness to turn towards the confederate: once the confederate was in the participant's line of sight, there were actually no differences between live and videotaped conditions in terms of the relative frequency or duration of fixations made to the confederate. In the preceding chapter, it was therefore suggested that the potential to interact with another individual may most strongly influence the observer’s willingness to signal their attentiveness to another person, seen as a head turn towards the confederate.

That the physical presence of live others may prompt observers to signal their attention differently than they would to an image or video is exceptionally important to understanding the generalizability of social attention effects to natural everyday situations, and demands further investigation. For instance, it suggests that participants have a general understanding of the power of their gaze behaviour to signal their attentional focus, i.e. that their eye (and head) direction can be interpreted by others and used as a communicative tool.

Chapter 3 builds on this concept by asking whether covert attention is used in order to maximize the communicative utility of an individual’s looking behaviour. If people looked to others indiscriminately, the meaning behind a look would be overwhelmingly more difficult to interpret. In order to use gaze effectively within naturalistic environments, I suggest that people must rely on some underlying process by which they acquire information in order to determine whether or not to look to another person. The following chapter tests whether this is accomplished by people covertly attending to others before they publicly display their attentional focus.

A version of Chapter 3 has been submitted for publication.
3.1 Introduction

The morphology of the human eye is unique in that the dark of the iris is surrounded by the high-contrast white of the sclera. This is divergent from nonhuman primate eyes, whose darker sclera blends in with their iris, thereby camouflaging the direction of their attention from predators or competitors (Kobayashi & Kohshima, 1997, 2001b; Perrett & Mistlin, 1990). Put simply, the evolution of the high-contrast human eye has facilitated communication by making eye position easier to discriminate (Ando, 2002; Ricciardelli, Baylis, & Driver, 2000), and evidence shows that people use their eyes as a powerful signal both to communicate a desire to initiate interactions (Cary, 1978b) and to facilitate ongoing interactions (Emery, 2000; Kleinke, Staneski, & Berger, 1975; Perrett & Emery, 1994; Saxe, 2006; Tomasello, Carpenter, Call, Behne, & Moll, 2005). In so doing, however, the heightened visibility of the human eye resulted in the sacrifice of gaze camouflage (Kobayashi & Kohshima, 2001a, 2001b).

Though it is clear that eye movements are used to communicate to others within social scenarios, did this actually come at the cost of camouflage? While non-human primates rely on the low contrast of their eyes to disguise their focus, rather than losing this ability altogether, humans may have adapted an alternate method of discreetly processing visual information. In fact, we take as a working assumption that effective communication with the eyes may actually require that humans first have some way to discreetly assess their immediate social surroundings in order to determine whether or not subsequent signaling with the eyes would be necessary or appropriate. We propose that the human covert attentional system is the ideal candidate for discreetly processing social information, in that it can provide observers with the attentional camouflage that was lost with the evolution of the high-contrast human eye.
Traditionally, the function of covert attention – that is, attending without a related eye movement – has been considered trivial, likely because it is often thought that under normal circumstances people attend to where they are looking. Indeed, it was once believed that covert attention was merely a byproduct of oculomotor planning (Rizzolatti, Riggio, Dascola, & Umiltà, 1987; Rizzolatti, Riggio, & Sheliga, 1994). While it has since been demonstrated that covert attention can be at least partially dissociated from oculomotor planning (Belopolsky & Theeuwes, 2012; Casarotti, Lisi, Umiltà, & Zorzi, 2012; D. T. Smith, Schenk, & Rorden, 2012; D. T. Smith & Schenk, 2012), its functional utility has not been seriously entertained. In recent years, the idea that covert attention might serve a social function has been hinted at, for instance as a way of monitoring an aggressive other (Belopolsky & Theeuwes, 2012), or as a way to hide one's own intentions from others (Klein et al., 2009). However, to our knowledge, it has never been demonstrated that people use covert attention in order to monitor others in social settings.

In the present study, we test the hypothesis that people covertly attend to others within social environments in order to facilitate appropriate overt looking behaviour. We did this by recording, via hidden camera, the responses of pedestrians to a simple action made by a confederate. Of interest were the responses from pedestrians who were not initially looking at the confederate, i.e. were not overtly attending to the confederate. As pedestrians approached, the confederate verbalized a greeting while raising his hand to the side of his head, either as if to wave or, while holding a phone, as if to answer a call.
Though we reasoned that pedestrians would be unable to discriminate the confederate's looking behaviour using their peripheral vision from the distance tested (Loomis, Kelly, Pusch, Bailenson, & Beall, 2008), it was prudent to test this, as laboratory tasks have often shown strong attentional effects of another’s gaze (Frischen, Bayliss, & Tipper, 2007; Nummenmaa & Calder, 2009; Senju & Johnson, 2009b). Thus, either hand-raise action was accompanied by the confederate looking at the pedestrian or staring straight ahead. It was hypothesized that rather than the confederate's looking behaviour influencing responses, the confederate's final hand action would be most influential. We reasoned that when the verbal greeting was accompanied by a wave, it would signal an intention to interact, which should elicit a looking response from the pedestrian (for a host of socially based reasons, including but not limited to acknowledging the greeting, or collecting more information about the confederate's intentions). In contrast, answering a phone is a private action, to which an appropriate response by a pedestrian would be to signal one's inattention by avoiding any looks to the confederate (e.g. Wu, Bischof, & Kingstone, 2014; Zuckerman et al., 1983). Covert attention is the only viable mechanism by which people could make this subtle peripheral discrimination, which in turn would result in a look to the confederate or not. In sum, despite pedestrians not looking at the confederate at the start of the action, we hypothesize that they will covertly attend to his action and, critically, tailor their response to the confederate's action, such that they will look up at the confederate more frequently when his hand is raised to form a wave than to answer the phone.

3.2 Methods

3.2.1 Equipment and procedure

Pedestrians were sampled from two locations on the University of British Columbia's Vancouver campus, one indoor and one outdoor (Figure 3.1).
A residence dining hall was used as the indoor location, where a walkway cut between a wall and a row of study rooms, leading to the building exit. The outdoor location was a narrow path, bordered by garden on either side, which passed between two buildings. Importantly, both locations had a linear flow of pedestrian traffic (i.e., pedestrians could only walk straight, in two opposing directions) and oncoming pedestrians were easily visible. In both locations, the confederate was situated in the flow of pedestrian traffic without obstructing pedestrian movement.

Figure 3.1 Example camera angles from the Outside (A) and Inside (B) locations. The confederate is the grey shaded figure; approaching pedestrians are represented as white shaded figures.

The confederate was a male undergraduate (20 yrs, 5'9") from the University of British Columbia, who was dressed in neutral clothing and wore nothing to obstruct his eyes (e.g., a hat or glasses; he also avoided squinting). The confederate stood casually, using a black smart phone, and occasionally looked straight ahead to search for distant pedestrians who roughly met selection criteria (see below). When the confederate felt that a pedestrian met selection criteria, the confederate would re-engage with his phone at waist level until the pedestrian was within a pre-designated distance agreed upon prior to the start of the study, at which point he would initiate an action (described below). Representative distance measurements taken following pedestrian data collection estimated that the confederate initiated the action an average of 1.52 m (SD = .30 m) in front of the pedestrian. A relatively close distance was chosen for two reasons. First, research suggests that as pedestrians approach a stranger, they will divert their overt attention (both their head and eyes) away from the individual (Fotios, Yang, & Uttley, 2015; Goffman, 1963; Patterson, Webb, & Schwartz, 2002).
Second, classic proxemics research suggests that the space around a person can be divided into areas of interpersonal space, with 1.2 m to 3.7 m representing 'social space', in which people are comfortable engaging in conversation and other casual interactions (Hall, 1968). Thus, the confederate aimed to initiate the action within a distance that is not only considered social space (i.e. in which a greeting might be made to another), but within which overt attention is nevertheless diverted away from non-familiar others.

The action performed by the confederate was a greeting that involved both a hand and an eye component, the specifics of which were as closely matched as possible so as to avoid any differences in large motion signals across conditions. For the hand action, the confederate raised his empty right hand near his ear as if to wave (i.e. palm forward), or raised the same hand and brought his cell phone to his ear (Figure 3.2). The arm motion in both cases mimicked the typical action of answering a phone, and the final position (wave or answering phone) was held static until the pedestrian passed. A forward-facing head position was maintained throughout. In addition, the confederate either looked straight ahead (i.e. avoided eye contact), or fixated the pedestrian at the start of the action and followed the pedestrian with his eyes until they approached too near to continue to do so. In all conditions, the confederate said “Hey” in a friendly tone as his hand action ended. Once the pedestrian passed, the confederate ended the action by withdrawing his hand and returning to using the phone near his waist.

In addition, a 'no action' baseline condition was culled from pre-recorded video sessions. Pedestrians were selected who met the selection criteria but passed the confederate while he was not performing any action.
Thus, the confederate's behaviour in these clips depicted natural waiting behaviour, prominently featuring the actor looking down at his phone while occasionally looking up and into the distance (i.e., not at nearby pedestrians).

Figure 3.2 Final hand positions for the conditions in which the confederate raised his hand to answer a phone (top) or wave (bottom), with eyes positioned either straight ahead (left) or tracking the passing pedestrians (right).

Events were recorded using a high-definition (1080p) video camera placed so that it was partially obstructed from the participant's view (i.e., stored in a backpack with an opening for the lens). The camera was placed behind the confederate (outdoors: 15.21 m; indoors: 6.67 m) and to his left (outdoors: 6.70 m; indoors: 2.92 m), at a similar angle to the confederate in both locations (outdoors: 23.73˚; indoors: 23.66˚). An experimenter sat beside the hidden camera and recorded participant descriptives (e.g., sex, visual description) and which condition was run. Differences in camera-to-confederate distance were necessary to discreetly position the camera and experimenter. Though the locations differed in the distance of the camera from the confederate due to environmental constraints, confederate size was approximately equated through the use of camera zoom.

3.2.2 Pedestrian sampling

Data were collected over 12 sessions (7 Outside, 5 Inside). During each testing session, all conditions were run in a pseudo-random order, documenting roughly ten consecutive pedestrians for each condition before moving to the next condition. Efforts were made to select only pedestrians who were walking alone, whose eyes were visible, and who were not wearing earphones or using a portable electronic device. In addition, the confederate delayed repeating an action immediately following a sampled pedestrian so as to avoid another pedestrian observing the behaviour from a distance.
As participation in the study involved only observation in public space, no consent from selected pedestrians was required and no debriefing was provided. The study was approved by the University of British Columbia's Research Ethics Board.

3.3 Results

3.3.1 Data handling

The video recording was cut into short clips commencing with the participant entering the frame and ending when the participant exited the frame. Any instances in which the pedestrian was looking at the confederate prior to his action were discarded, as were instances in which the confederate's action started before the participant entered the frame, as coders would be unable to ensure that participants were not overtly attending to the confederate just prior to the start of the action. Instances in which pedestrians were blocked from view during critical moments were also discarded, as were any instances involving repeat pedestrians. In total, 483 videos were recorded that contained sufficient information to allow for subsequent coding. Two coders who were naïve to the hypotheses of the study coded all videos for pedestrian looking behaviour, timing, and environment details (see footnote 5). As multiple looks were relatively infrequent, looking behaviour was coded in a binary fashion: following the confederate's action, it was coded whether or not the pedestrian looked at the confederate. Looks were counted even if the pedestrian passed the confederate and turned back to look at him, though these instances were rare. Videos were excluded if one or both coders noted that the pedestrian did not meet criteria (100 videos, 20.70%).

Coder reliability was assessed using Krippendorff's alpha and was good for all measures (Table 3.1). If coders disagreed on whether the pedestrian looked, the participant was excluded from analysis (n = 50), thus conservatively limiting analyses only to instances where both coders agreed on whether the pedestrian looked or not. Analyses were therefore based on 333 pedestrians.
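Krippendorff's alpha, the reliability measure used above, can be computed for two coders and nominal (here, binary looked/did-not-look) codes directly from the paired ratings. A minimal plain-Python sketch, using illustrative ratings rather than the study's actual coding data:

```python
from collections import Counter

def krippendorff_alpha_nominal(coder1, coder2):
    """Krippendorff's alpha for two coders rating the same units (nominal data).

    Builds the coincidence matrix (each unit contributes both ordered pairs of
    its two ratings), then computes alpha = 1 - observed/expected disagreement.
    """
    assert len(coder1) == len(coder2)
    coincidences = Counter()
    for a, b in zip(coder1, coder2):
        coincidences[(a, b)] += 1   # each unit is pairable twice (two coders)
        coincidences[(b, a)] += 1
    n = sum(coincidences.values())  # total pairable values (2 per unit)
    values = {v for pair in coincidences for v in pair}
    marginals = {v: sum(c for (a, b), c in coincidences.items() if a == v)
                 for v in values}
    observed = sum(c for (a, b), c in coincidences.items() if a != b) / n
    expected = sum(marginals[a] * marginals[b]
                   for a in values for b in values if a != b) / (n * (n - 1))
    return 1.0 - observed / expected

# Hypothetical binary looked (1) / did-not-look (0) codes for ten pedestrians:
c1 = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
c2 = [1, 1, 0, 1, 1, 0, 1, 0, 0, 0]
alpha = krippendorff_alpha_nominal(c1, c2)
```

Unlike simple percent agreement, alpha corrects for chance agreement via the expected-disagreement term, which is why it is preferred for reliability reporting.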
5 Pedestrian ethnicity was also coded with good reliability (Krippendorff's Alpha = 0.82). The majority of pedestrians were rated as either Caucasian (42.64% of the sample, based on Coder 1's ratings) or East Asian (40.24% of the sample). Limiting analyses to only these two ethnic groups shows a small effect of ethnicity on pedestrian looking frequency, with East Asian pedestrians looking less than Caucasian pedestrians, p < .05 (see Patterson et al., 2007 for a similar pattern). As this did not interact with our critical manipulations, ethnicity was not included as a factor in the reported analyses. Reliability was also good for pedestrian age group (Krippendorff's Alpha = 0.88) and environmental crowding (rated on a 10-point scale; Krippendorff's Alpha = .77). Due to a relative lack of variability in these factors, however, they were excluded from analysis.

Table 3.1 Coder reliability

Variable                                          Krippendorff's Alpha
Looks to confederate following action             0.73
Time to pass confederate (from start of action)   0.73
Time to exit frame following action initiation    0.95

3.3.2 Pedestrian and video descriptives

Of the 333 pedestrians, 168 (50.45%) were female and 172 (51.65%) were recorded at the outdoor location. The majority of the pedestrians sampled were judged to be between 16 and 29 years old (88.59%), with all other pedestrians judged to be 30 or older (11.41%). The confederate raised his hand to wave for 136 (40.84%) of the pedestrians. Of those, the confederate looked straight ahead in 63 (46.32%) cases and looked directly at the passing pedestrian in 73 (53.68%) cases. The confederate raised his hand to answer his phone for 115 (34.53%) pedestrians, for whom he looked straight ahead in 57 (49.57%) cases or at the pedestrian in 58 (50.43%) cases. There were 82 pedestrians (24.62%) who passed the confederate while he performed no action. Average clip duration was 9.54 s (SD = 3.36 s).
Based on the results from either coder, Time to pass the confederate (from the start of the action) and Total time to react (from start of action to exiting the frame) did not vary as a function of confederate action or looking behaviour, all ps > .05.

3.3.3 Did confederate actions influence overt looking frequency?

The primary question was whether pedestrians would change their looking behaviour in response to the confederate's action. As such, analyses were first conducted on only the 'Wave' and 'Phone' conditions. Owing to the dichotomous nature of the dependent variable, looking behaviour was analyzed using a loglinear analysis with location (outside, inside), confederate hand action (wave, phone), and confederate eye position (looking straight ahead, or at the pedestrian) as factors. Loglinear analysis can be thought of as an extension of Pearson's chi-square analysis for examining the relationship between more than two categorical variables. In a loglinear analysis, models are built and tested to find the least complex model (i.e., with the fewest factors) that best accounts for the variance in the observed frequencies. The only major assumptions of the analysis are that the observations are independent, and that the expected frequencies for each cell of the contingency table (i.e., each condition) are large enough (greater than 5) to permit a reliable analysis (Field, 2013). A stepwise backwards elimination procedure was used to determine which of the factors and their interactions significantly reduced model fit. In this way, a fully saturated model is tested first, and then interactions and factors are eliminated in a stepwise fashion (i.e., the highest-order interaction effect is removed first; if this does not significantly affect model fit, the second highest-order effect is removed, etc.) until the point where eliminating a factor reduces the model fit significantly.
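For a two-way table, the elimination step just described, comparing model fit with and without a given association, reduces to a likelihood-ratio (G-squared) test against independence. A minimal plain-Python sketch (not the software used for the thesis analyses), applied to the looked/did-not-look counts for the wave and phone conditions given in Table 3.2, collapsed over confederate eye position:

```python
from math import log

def g_squared(table):
    """Likelihood-ratio chi-square (G2) for independence in a two-way count table.

    G2 = 2 * sum(observed * ln(observed / expected)); removing an association
    from a hierarchical loglinear model changes the fit statistic by this amount.
    """
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    g2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_totals[i] * col_totals[j] / n   # independence expectation
            g2 += 2 * obs * log(obs / exp)
    return g2

# Rows: wave, phone; columns: looked, did not look (Table 3.2 counts,
# collapsed over confederate eye position).
looks_by_action = [[92, 44], [33, 82]]
g2_action = g_squared(looks_by_action)
```

Because location and eye position contributed nothing to the model, this collapsed two-way test recovers the chi-square value reported for the action term in the next section.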
Performing this analysis revealed that only the effect of confederate hand action on pedestrian looking behaviour had a significant effect on the model, χ2(1) = 38.87, p < .001. This was due to pedestrians being more likely (odds ratio: 5.20) to look at the confederate following a wave than when he lifted his phone and said 'Hey'. Removing any other factors or their interactions had no significant effect, indicating that location and confederate looking behaviour (or their interactions) did not impact pedestrian looking responses. The final model with only pedestrian looking behaviour and confederate action resulted in a non-significant likelihood ratio of χ2(12) = 11.97, p = .45, which indicates that the model including only this factor represents a good fit to the data.

3.3.4 How do overt looking frequencies compare against baseline?

Though it is clear that overt looking rates change as a function of confederate action, it is unclear from the above data what 'normal' looking rates would be, and how those rates would compare to those observed when passing a confederate performing an action. Of particular interest was how looks in response to the confederate raising his phone compared to baseline looking behaviour from the pedestrians. A chi-square analysis of looking behaviour with confederate action (wave, phone, no action) as a factor revealed a significant association, χ2(2) = 63.98, p < .001. To explore this further, the original 3×2 chi-square was partitioned to reveal that looking was more likely (odds ratio: 6.49) in the wave condition than in the phone and no-action conditions combined, χ2(1) = 61.85, p < .001. Interestingly, looking frequencies in the phone and no-action conditions were comparable, χ2(1) = 2.81, p = .13 (odds ratio: 1.80). Table 3.2 reports the values for each condition; Figure 3.3 graphically presents the main effect of confederate action on looking responses.
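The omnibus chi-square and the odds ratios reported above follow directly from the cell counts in Table 3.2 (below) once conditions are collapsed over confederate eye position. A minimal plain-Python sketch:

```python
def chi_square(table):
    """Pearson chi-square for independence in a rows x columns count table."""
    row_totals = [sum(r) for r in table]
    col_totals = [sum(c) for c in zip(*table)]
    n = sum(row_totals)
    return sum((obs - row_totals[i] * col_totals[j] / n) ** 2
               / (row_totals[i] * col_totals[j] / n)
               for i, row in enumerate(table) for j, obs in enumerate(row))

def odds_ratio(a_look, a_no, b_look, b_no):
    """Odds of looking in condition A relative to condition B."""
    return (a_look / a_no) / (b_look / b_no)

# (looked, did not look) counts, collapsed over eye position (Table 3.2):
wave, phone, none = (92, 44), (33, 82), (15, 67)

chi2_omnibus = chi_square([list(wave), list(phone), list(none)])
or_wave_vs_phone = odds_ratio(*wave, *phone)
or_wave_vs_rest = odds_ratio(*wave, phone[0] + none[0], phone[1] + none[1])
or_phone_vs_none = odds_ratio(*phone, *none)
```

Running this reproduces the reported values: an omnibus chi-square of about 63.98, and odds ratios of about 5.20 (wave vs. phone), 6.49 (wave vs. phone and no action combined), and 1.80 (phone vs. no action).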
Table 3.2 Pedestrian looking frequency across confederate eye position and action conditions

                          Phone                         Wave                          No Action
                          Looked        Looked at       Looked        Looked at       Looked
                          straight      pedestrian      straight      pedestrian      down
                          ahead                         ahead
Pedestrian looked         16            17              40            52              15
Pedestrian did not look   41            41              23            21              67
N                         57            58              63            73              82
Looking frequency (%)     28.07         29.31           63.49         71.23           18.29

Figure 3.3 Frequency of pedestrian looks made to the confederate as a function of confederate hand action, compared against a no-action condition. Only a confederate wave elicited greater looking than when no action was performed.

3.4 Discussion

In social environments, there may be no more important source of information to focus on than the actions of other people nearby, as those actions can dynamically alter one's own behaviour. Yet monitoring the intentions of others by looking at them is not always appropriate or welcome. As such, it was hypothesized that humans have come to rely on covert attentional deployment to monitor nearby others, thereby enabling them to process others' actions while limiting their overt looking responses. To test this, the present study assessed whether a confederate's action – raising his hand to either wave or answer his phone – would elicit different looking responses in passing pedestrians. Confederate eye gaze and testing location were also varied. Though pedestrians were not overtly attending at the time that the action started, looking responses to the confederate varied significantly based on the confederate's action: pedestrians were more than five times as likely to look in response to a wave than in response to the confederate answering a phone. Confederate looking behaviour and testing location did not significantly influence responses.
The difference in pedestrian looking response rates across action conditions required that pedestrians first covertly process the confederate's action in order to tailor their own response appropriately, thereby demonstrating for the first time that people use covert attention as a way of observing and assessing others' intentions without overtly signaling this attention to those nearby.

Through the use of covert attention, pedestrians in the present study were able to modulate their looking behaviour so as to signal attention in instances when it was socially appropriate. Looking in response to a wave may have served various, not mutually exclusive, purposes, such as gaining more information about the other person's intentions (e.g., 'Is he waving at me?') or acknowledging the greeting by trying to establish mutual gaze. In support of the latter view, signaling one's acknowledgment of another's greeting, or reciprocal recognition, is considered a standard social convention (Duranti, 1997) and serves to reinforce positive social relationships (Goffman, 1971), even in nonhuman primates (De Marco, Sanna, Cozzolino, & Thierry, 2014), and thus may be initiated even when pedestrians were uncertain that they were the target of the greeting. Though the exact intentionality of the look to the confederate is ambiguous, the critical point is that pedestrians permitted themselves to react overtly only in the condition where the confederate's action could be interpreted as an invitation to interact. When the confederate's action was covertly determined to be a potential communicative signal, pedestrians responded in turn by using their own looking behaviour as a communicative tool.

Note that looking behaviour was not simply a reaction to a verbal greeting and hand raise.
If that had been the case, pedestrians should have looked equally often in both the wave and phone conditions, and least often when the confederate did nothing. However, looks were infrequent both when the confederate answered the phone and when he did nothing. Further, the lack of looks in the phone condition should not be taken as evidence that pedestrians were not attending, as several recent findings suggest that people avoid looking at strangers when they believe their actions might be visible to (and interpretable by) the other individual. For example, Gallup and colleagues (2012) have shown that pedestrians are more likely to follow another's gaze when walking behind or beside a group of lookers, as compared to when they walk in front (see also Gallup, Chong, & Couzin, 2012). Not only are people aware of the ability of their gaze to signal information, but when in the presence of strangers they also tend to adopt a gaze avoidance strategy. As such, the infrequency of looks to the confederate when he answered his phone is better described as pedestrians inhibiting subsequent looking behaviour following covert examination of the confederate. This inhibition was accomplished effectively, so much so that looking rates in response to the confederate answering his phone did not differ from those when he performed no action at all.

As anticipated, looking behaviour in the present study was unaffected by whether the confederate stared straight ahead or focused on the pedestrian as they passed. While we will limit drawing conclusions based on a non-significant effect, it bears mention insofar as it contrasts with many laboratory tasks suggesting that people are adept at detecting eye gaze directed towards them (Itier, Villate, et al., 2007; Senju, Hasegawa, & Tojo, 2005; Senju & Hasegawa, 2005; von Grünau & Anston, 1995).
As the confederate maintained his head position across conditions, we suspect, as others have argued, that the visual angle subtended by the eyes was too small, and acuity in peripheral vision too poor, to reliably detect changes in gaze direction (Loomis et al., 2008). Future investigations of the effects of eye gaze on attentional orienting may provide insight into whether this social cue is useful within different naturalistic conditions (e.g., when already attending near another's face).

Not only do the present findings draw attention to the need for future research into the social influences on visual behaviour, they also lay the groundwork for a better understanding of how covert attention is directed to people in everyday scenarios. For example, did pedestrians choose to covertly attend to the confederate (perhaps when passing, or because of the initiation of an action), or was the confederate already prioritized within the attentional system owing to his being a social stimulus? There is little field work that speaks to this point, though laboratory-based tasks would argue that people have an at least partially non-volitional attentional bias towards social stimuli (Birmingham et al., 2009b; Devue et al., 2012; Laidlaw, Badiudeen, Zhu, & Kingstone, 2015; Laidlaw, Risko, & Kingstone, 2012). Several researchers have argued that attention may be proactively biased by highly relevant or highly valuable stimuli (B. A. Anderson et al., 2011b; B. A. Anderson, 2013; Todd et al., 2012; Wieser, McTeague, & Keil, 2011), and one could easily argue that nearby others are a perfect example of such stimuli. What remains to be seen is whether these same automated biases are present within naturalistic environments, or if instead people strategically shift their covert attention to other people when convenient or necessary.
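The visual-angle point raised above can be made concrete with the standard formula, angle = 2*atan(size / (2*distance)). A minimal sketch; the 3 cm eye-region width is an illustrative assumption, not a value measured in the study:

```python
from math import atan, degrees

def visual_angle_deg(size_m, distance_m):
    """Visual angle (in degrees) subtended by an object of a given size
    viewed at a given distance, both in metres."""
    return degrees(2 * atan(size_m / (2 * distance_m)))

# Assumed ~3 cm eye region, viewed at the study's 1.52 m average action distance.
eye_region = visual_angle_deg(0.03, 1.52)
```

At roughly one degree of visual angle for the whole eye region, a shift in gaze direction moves the visible iris by only a fraction of a degree, which is consistent with the argument that peripheral acuity is insufficient to resolve it.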
Though it has long been known that the attentional and oculomotor systems can operate independently (Eriksen & Yeh, 1985; Juan, Shorter-Jacobi, & Schall, 2004; Posner, 1980), little emphasis has been placed on the role that covert attention plays within social environments. This omission is likely due in part to the manner in which social attention has so often been studied: using images of people who cannot look back. In real life, looking at another person signals information to that person, who can in turn react, creating a cascade of responses from both parties. The awareness that one's looking behaviour can be interpreted by others in real life constrains the way in which people direct their overt attention around others (Cary, 1978b; Gallup, Chong, et al., 2012; Gallup, Hale, et al., 2012; Gobel, Kim, & Richardson, 2015; Laidlaw, Foulsham, Kuhn, & Kingstone, 2011; Wu et al., 2014), thereby creating a need for another mechanism – covert attention – to process important social information. In contrast, there are no social consequences of looking at images. In instances where looking serves no communicative role, as is the case in many traditional laboratory tasks of social attention, the decoupling of covert from overt attention may be largely unnecessary, and as such, the utility of covert social attention has gone largely unstudied. The present study represents an initial step toward a new line of research focused on how covert social attention operates in naturalistic environments and guides overt looking responses within them.

Finally, the present findings highlight the importance not only of looking to communicate with others, but also of the subtle yet meaningful way in which people use the absence of looks as a communicative signal in its own right.
While the value placed on averted gaze varies across cultures, the appropriate use and withdrawal of looks to another emerges as an important social norm within many groups (Argyle & Cook, 1976; A. McCarthy, Lee, Itakura, & Muir, 2006, 2008; Rossano, Brown, & Levinson, 2009), and violations of these norms may be interpreted as signaling developmental or neurological impairments (Moukheiber et al., 2010; Tanaka & Sung, 2013). As was observed in the present study, not looking overwhelmingly communicates a lack of attentiveness or signals an unwillingness to interact further; the use of gaze in this way is increasingly documented within naturalistic environments (e.g., Cary, 1978a; Foulsham et al., 2011; Gallup, Chong, et al., 2012; Gallup, Hale, et al., 2012; Goffman, 1963; Laidlaw et al., 2011).

Chapter 4: Covert attentional bias to distracting representations of social stimuli

The previous chapter demonstrates that covertly attending to other people plays an important mediating role in overt looking behaviour in everyday situations. While it is clear that covert attention is directed to other people, the findings do not discriminate between the possible reasons why covert attention was oriented to the confederate. Perhaps the confederate's greeting prompted pedestrians to orient, or perhaps covert attention was already directed to the confederate owing simply to his status as a nearby, socially relevant object. Stated more broadly, Chapter 3's results raise the question of whether covert attention is directed to social stimuli only when necessary, or whether social stimuli are prioritized by the attentional system regardless of their relevance to a current task or goal. Chapter 4 is inspired by this question, and uses a saccadic trajectory paradigm to test whether the mere presence of a social stimulus biases an observer's attentional landscape, thereby drawing covert attention towards itself.
Note that the paradigm's fine level of measurement mandates the use of digitally presented images of social stimuli, rather than real people. The results of this study are not intended to directly address how behaviour was elicited in real life, but instead to explore a related concept within another quadrant of Figure 1.1: covert orienting to computer-based stimuli. Within the larger scope of this document, moving into the lab provides another advantage: it allows the mechanisms underlying social orienting to be probed in a way not yet easily accomplished using more realistic stimuli. Further, it serves to introduce research exploring the second major purpose of this document: to explore volitional and non-volitional aspects of social attentional orienting (see footnote 6).

In saccadic trajectory tasks, participants are asked to make a single eye movement to a non-social target that is presented either alone or near a distractor object. Even when saccades land directly on the target, a nearby distractor will nevertheless influence the curvature, or trajectory, of the saccade, often making the saccade curve away from the distractor stimulus (see Van der Stigchel, Meeter, & Theeuwes, 2006). Orienting attention to a non-target location can influence saccade trajectories (Sheliga, Riggio, & Rizzolatti, 1994; Van der Stigchel & Theeuwes, 2005), and the more relevant a distractor, the greater its influence on trajectory (van Zoest & Donk, 2005), presumably because attention to the distractor increases with distractor relevance. Thus, trajectory measures provide a way of measuring covert attentional deployment without requiring that the distractor be task-relevant or even influence pertinent task performance (i.e., accuracy or RT). In Chapter 4, I investigate whether the social status of a distractor has a greater impact on saccade trajectory than that observed with a comparable but non-social distractor.
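Curvature in such paradigms is typically quantified from the recorded eye-position samples; one common family of metrics is the signed perpendicular deviation of each sample from the straight line connecting saccade start and end points. A minimal illustrative sketch (not the scoring code used in Chapter 4):

```python
def signed_deviations(samples):
    """Signed perpendicular distance of each (x, y) sample from the straight
    start-to-end line; the sign distinguishes curvature toward one side
    (e.g., the distractor) from curvature toward the other."""
    (x0, y0), (x1, y1) = samples[0], samples[-1]
    dx, dy = x1 - x0, y1 - y0
    length = (dx * dx + dy * dy) ** 0.5
    # 2D cross product of the path vector with each sample offset,
    # normalised by path length, gives the perpendicular offset.
    return [((x - x0) * dy - (y - y0) * dx) / length for x, y in samples]

def max_deviation(samples):
    """Largest-magnitude signed deviation along the trajectory."""
    return max(signed_deviations(samples), key=abs)

# A saccade from (0, 0) to (0, 10) that bows rightward (positive x):
path = [(0, 0), (0.5, 2.5), (0.8, 5.0), (0.5, 7.5), (0, 10)]
dev = max_deviation(path)
```

Averaging such signed deviations over trials is what yields the toward/away curvature effects discussed in this chapter.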
A version of Chapter 4 has been published: Laidlaw, K.E.W., Badiudeen, T.A., Zhu, M.J.H., & Kingstone, A. (2015). A fresh look at saccadic trajectories and task irrelevant stimuli: Social relevance matters. Vision Research, 11, 82-90.

6 Throughout this document, I am operating under the working hypothesis that it is more reasonable that lab/life differences in orienting behaviour arise from downstream interpretation of similar initial representations, rather than the extreme view that images of people elicit immediately distinct activation patterns from those seen with real people. It is admittedly an assumption that the mechanisms activated by the presentation of a social image are similar to those recruited by a real person, and this assumption is worthy of future investigation.

4.1 Introduction

When a rapid eye movement, or saccade, is made towards a target, the path that the saccade takes is often slightly curved (Viviani, Berthoz, & Tracey, 1977). The magnitude and direction of this curvature can be influenced by the presence of nearby non-target objects. Relevant (Sheliga, Riggio, Craighero, & Rizzolatti, 1995; Sheliga et al., 1994; Sheliga, Riggio, & Rizzolatti, 1995) or even task-irrelevant (Doyle & Walker, 2001; McSorley, Haggard, & Walker, 2004; Van der Stigchel & Theeuwes, 2005) non-target objects presented near a saccade's goal can change the curvature of the saccade in systematic ways. At its core, a saccade's trajectory can be interpreted as reflecting target selection and distractor inhibition within the oculomotor system. By examining which features of a target or a distractor influence a saccade's trajectory, one can infer which stimulus properties are prioritized or considered salient by the oculomotor system during target selection and saccade planning.
In general, a distractor whose features attract attention will influence the trajectory of a saccade aimed at a nearby target (e.g., Nummenmaa, Hyönä, & Calvo, 2009; Theeuwes & Van der Stigchel, 2009; Van der Stigchel, Mulckhuyse, & Theeuwes, 2009). To explain this behavioural effect, it is often assumed that in the oculomotor system, likely at the level of the midbrain superior colliculus (SC; see footnote 7), a priority map represents attended objects based on their low-level saliency and their goal-related relevance (Fecteau & Munoz, 2006; Godijn & Theeuwes, 2002; McSorley et al., 2004). Each attended location or object is represented by a population of neurons that encode a movement vector to the target. The greater the object's combined salience (for example, strong stimulus intensity; Bell, Meredith, Van Opstal, & Munoz, 2006) and relevance (e.g., its similarity to a target object, Ludwig & Gilchrist, 2003; or its proximity to the goal, McSorley, Cruickshank, & Inman, 2009), the stronger its initial activation upon the priority map. Populations representing separate but nearby objects will overlap within the map, shifting the overall activity distribution to generate a weighted vector average based on the strength of their respective activations. The result is a saccade whose trajectory represents a combination of those that would be generated in response to the presentation of either the distractor or the target in isolation. Saccade accuracy can be improved through active enhancement of the target's representation, and possibly through inhibition of the non-target representation (Al-Aidroos & Pratt, 2010; Van der Stigchel, 2010; Walker, McSorley, & Haggard, 2006; Wang, Kruijne, & Theeuwes, 2012; White, Theeuwes, & Munoz, 2012). According to some inhibitory accounts, it is thought that as time passes, inhibition of the distractor shifts the overall activity within the priority map such that peak activity moves further away from the distractor's true location, which results in a saccade that initially deviates away from both the target's and the distractor's locations (Van der Stigchel et al., 2006).

7 In Chapter 4, emphasis is placed on the role of the priority map within the SC, but it should be noted that the SC represents one of the later regions in which signals converge from various visual and motor regions, including but not limited to the frontal eye fields (FEF), the dorsolateral prefrontal cortex, and the lateral intraparietal area (LIP). It follows that signals coding for a distractor's salience and relevance need not originate in the SC but rather feed into it to improve target discrimination. For example, firing in the LIP appears to code for an object's relative value and may even distinguish targets from distractors (Schütz et al., 2012). According to some researchers (Fecteau & Munoz, 2006), the SC may be an ideal candidate to integrate signals of salience and relevance from across multiple regions in order to form a priority map critical for influencing visuomotor behaviour, though other regions such as the LIP and FEF likely also house priority maps that may work in concert to produce saccadic behaviour (e.g., see Zelinsky & Bisley, 2015).

To date, the study of saccadic trajectories has primarily relied upon within-task manipulations of simplistic target and distractor stimuli in order to manipulate the relative priority of the distractor to the participant. For example, a distractor can be made more relevant either by directly requiring participants to attend to it in order to determine the saccade goal (e.g., Sheliga, Riggio, & Rizzolatti, 1995), or by making it more similar to the target (e.g., by sharing its color, Ludwig & Gilchrist, 2003; or shape, Mulckhuyse, Van der Stigchel, & Theeuwes, 2009). Studies of this kind have established that saccade trajectories are more strongly affected by the distractor when it is arbitrarily made relevant for an experimental task. If, however, the overarching goal of this line of research is to establish how the oculomotor system behaves in everyday life, then one (of many) important avenues to explore is whether trajectory modulations can be observed in response to distractors whose relevance is defined more broadly than just within the task itself. The present studies examine whether a distractor that is inherently meaningful, not just within the task at hand but in everyday life, can elicit stronger trajectory deviations when compared to a distractor that lacks that general relevance but shares the same low-level visual properties. To test this, images of faces and unrecognizable scrambled faces were used as distractor stimuli. Both stimuli were task irrelevant, but while the former is socially relevant outside the paradigm itself, the latter is not.

Social stimuli were chosen as a test of whether the oculomotor system is sensitive to task-irrelevant distractor relevance primarily because of the strong evidence that faces are treated as relevant social stimuli in other paradigms. Even from early infancy, people pay special attention to faces over non-face stimuli (Farroni et al., 2005; Johnson, Dziurawiec, Ellis, & Morton, 1991; Mondloch et al., 1999).
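The weighted-vector-average account of the priority map described above can be illustrated numerically: if the target and distractor each contribute a unit movement vector scaled by their activation, an excited distractor pulls the initial saccade direction toward itself, while an inhibited (negatively weighted) distractor pushes it away. A toy sketch with arbitrary weights and angles, not fitted to any dataset:

```python
from math import atan2, cos, degrees, radians, sin

def saccade_direction(target_deg, distractor_deg, w_target, w_distractor):
    """Initial saccade direction (degrees) as the weighted vector average of
    the unit movement vectors encoded for the target and the distractor."""
    vx = (w_target * cos(radians(target_deg))
          + w_distractor * cos(radians(distractor_deg)))
    vy = (w_target * sin(radians(target_deg))
          + w_distractor * sin(radians(distractor_deg)))
    return degrees(atan2(vy, vx))

TARGET, DISTRACTOR = 90.0, 60.0   # target straight up; distractor up and rightward

# Positive distractor weight (excitation) deviates the saccade toward it;
# negative weight (inhibition) deviates it away.
toward = saccade_direction(TARGET, DISTRACTOR, 1.0, 0.3)
away = saccade_direction(TARGET, DISTRACTOR, 1.0, -0.3)
```

The sign flip of the distractor weight captures, in miniature, why early saccades tend to deviate toward a distractor and later, inhibited saccades deviate away from it.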
Faces, especially when presented upright, have been shown to attract (Devue et al., 2012; Langton et al., 2008; Theeuwes & Van der Stigchel, 2006) and hold attention (Bindemann, Burton, Hooge, Jenkins, & de Haan, 2005), and are detected over non-face stimuli, even under difficult viewing conditions (Devue, Laloyaux, Feyers, Theeuwes, & Brédart, 2009; Mack, Pappas, Silverman, & Gay, 2002). This attentional bias to attend to faces may be in part due to their strong activation of specialized face areas such as the fusiform gyrus 60  (or fusiform face area; FFA, Kanwisher, McDermott, & Chun, 1997; G. McCarthy, Puce, Gore, & Allison, 1997; Rhodes, Byatt, Michie, & Puce, 2004). Even in more unconstrained viewing conditions, faces are looked at more often than would be expected based on their low-level saliency (Birmingham et al., 2009a), and demonstrate their social relevance by acting to guide attention to other relevant features in a scene (Jasso & Triesch, 2008; Saxe, 2006; Tomasello et al., 2005). Note, however, that this evidence of strong prioritization of faces does not necessarily predict that within the oculomotor system, representations of task-irrelevant social stimuli are enhanced upon the priority map (e.g. would cause greater interference within a saccadic trajectory paradigm). The advantages for face versus non-face stimuli may stem from privileged processing at other levels, for example at the FFA or superior temporal sulcus, and this information may or may not be easily accessible during saccade planning and execution. Thus, that faces are treated as a special, socially relevant stimulus in other tasks makes them an ideal test case for determining whether oculomotor planning is also affected by relevance that is not defined by the task itself.   A handful of trajectory-based studies have diverted from using simplistic target and distractor stimuli (e.g. 
basic geometric shapes, lines), though only a small number have used images of faces, and the majority of those employed the face as a central attentional cue rather than as a distractor (Hermens & Walker, 2010; Nummenmaa & Hietanen, 2006; West, Al-Aidroos, Susskind, & Pratt, 2011). Thus, as in many other non-trajectory tasks, the face is the focus of attention, and therefore these studies cannot speak to whether task-irrelevant social stimuli influence oculomotor planning. However, in one of the few studies in which faces were used as peripheral distractor stimuli, only faces displaying threatening emotional expressions elicited stronger saccadic trajectory deviations when compared to non-face stimuli (Schmidt, Belopolsky, & Theeuwes, 2012). In other words, emotional (especially threat-based) salience, not social faces more generally, affected saccade trajectories, which the authors suggest may be due to a direct, fast connection between the amygdala and superior colliculus (LeDoux, 1996). Given the literature reviewed above demonstrating that faces are generally prioritized by the attentional system at other levels of processing, Schmidt et al.'s implicit conclusion - that the social relevance of faces bears no influence within the oculomotor system - is worth further exploration. If true, then these results imply that both the oculomotor system's ability to process the social relevance of a given distractor, and its sensitivity to influences of social relevance found elsewhere in the brain, are highly constrained.

However, to propose that the oculomotor system is insensitive to social stimuli based on the null results of Schmidt and colleagues (2012) could be premature. Despite their finding that a neutral distracting face did not influence saccade metrics, there are several reasons why general face (and by extension, social) information may still be prioritized by the oculomotor system.
First, the authors report average trajectory deviations, yet it is known that deviations change across saccadic reaction times (RTs), with greater deviation away from the target and distractor at longer RTs (McSorley, Haggard, & Walker, 2006). As such, it may be that a face-based effect was averaged out when trials were collapsed across all response times. Alternatively, Schmidt and colleagues may have failed to find an effect of the neutral face distractor on trajectory because the time period they examined was suitable for detecting fast, subcortically generated emotional-salience effects, but was too short to observe social relevance effects. In support of this, recent evidence has demonstrated that whereas salience impacts saccade metrics (Van der Stigchel, Meeter, & Theeuwes, 2007) even for rapidly executed saccades, relevance information plays a more dominant role after a delay (Schütz, Trommershäuser, & Gegenfurtner, 2012). Further, the relevance of a face stimulus may be manifested not as an initial boost in the distractor's representation, but as a persistence of the signal over time, consistent with findings demonstrating that faces hold attention at their location (Bindemann et al., 2005). This information could be difficult to observe if longer time periods were not examined separately.

In the present paper, findings are presented from two studies that together demonstrate a significant influence of a social stimulus – a distracting face – on saccadic trajectory. These results run contrary to what could be concluded from the existing trajectory literature and suggest instead that the social relevance of a face is influential in oculomotor planning and execution. In Experiment 1, upright faces, which are known to engage many processes unique to face processing, were tested for their ability to cause greater saccade deviation when compared to inverted face distractors.
In Experiment 2, the results of Experiment 1 are compared to findings using scrambled versions of the face stimuli from Experiment 1, in order to determine whether faces, regardless of their orientation, might be prioritized within the oculomotor system over meaningless colour- and luminance-matched objects. Both studies expand on previous work in two ways. First, they provide a detailed analysis of saccadic trajectory effects at various RTs, exploring whether previous face-based effects may have been averaged out and missed. Second, by employing a fixation onset event (Ross & Ross, 1980), average participant RT was delayed so that any resulting differences in trajectory after longer distractor processing times could be examined.

4.2 Experiment 1: Upright and inverted faces

4.2.1 Methods

4.2.1.1 Participants

Participants were 18 volunteers (age range = 17-25 years) from the University of British Columbia. All participants gave informed written consent and participated in exchange for course credit or monetary remuneration. Thirteen participants were female, 16 were right handed, and all reported normal or corrected-to-normal vision. Work was carried out in accordance with the Code of Ethics of the World Medical Association (Declaration of Helsinki).

4.2.1.2 Apparatus

Eye movements were monitored using a desktop-mounted EyeLink 1000 eye tracker (SR Research, Ontario, Canada) recording at a sample rate of 1000 Hz. Saccade start and end points were defined using velocity and acceleration thresholds of 30˚/s and 8000˚/s², respectively. A standard 9-point calibration and validation procedure was completed at the start of each block, and within blocks when necessary. Calibration and validation were repeated until an average measurement error under 1.0˚ was obtained. Stimuli were presented on a 17-inch CRT monitor with 1024x768 pixel resolution and a 60 Hz refresh rate. Viewing distance was held constant at 60 cm with the use of a chin and forehead rest.
Stimuli were presented against a black background. The target object was a white cross (1.08˚ x 1.08˚), presented 8.84˚ above or below central fixation. The distractor (3.95˚ x 5.38˚) was chosen equally and randomly from four colour face images (two male, two female; taken from the face database of Minear & Park, 2004), and was presented with equal probability to the left or right of the target, at 45 angular degrees from the target. The eyes of each face were aligned to the center of the image, and the faces were elliptically cropped without removing any facial features. All images were equally likely to be shown upright or inverted, as well as mirror reversed. The fixation region was defined by a centrally presented dark grey annulus (radius of 2.15˚). A small white dot (0.70˚) at the center of the annulus served as the central onset stimulus, used to delay participant RTs (explained further in Procedure, below).

4.2.1.3 Procedure

Figure 4.1 shows a breakdown of the experimental procedure. Targets could appear above or below fixation with equal probability. On one third of the trials, no distractor appeared; these trials served as a means of collecting baseline trajectory measures against which distractor-present trials could be compared. On the remainder of the trials, a distractor onset simultaneously with the target. Distractor faces appeared with equal probability 45 angular degrees to the left or the right of the target, at the same distance (8.84˚) from the centre of the screen as the target. Trials began with the appearance of a central grey annulus, within which participants were instructed to fixate for a randomly determined duration between 500-1000 ms. If fixation was not detected after 1500 ms, or if fixation was not maintained within the central annulus for the duration of the fixation period, a red X appeared in the center of the screen for 400 ms and the trial then began afresh.
If there were three successive false starts, then the initial eye tracking calibration and validation procedure was repeated, following which the trials resumed.

Figure 4.1 Procedure for both Experiments 1 and 2. Participants fixated within the central grey annulus (start area). In half of the trials, a central dot appeared inside the grey annulus (to delay RTs), followed 0-200 ms later by the appearance of the target (and distractor; middle, top panel). In the other half of the trials, the target (and distractor) onset 0-200 ms prior to the central dot appearing (middle, bottom panel). In two-thirds of all trials, a distractor onset simultaneously with the target. The target was a white cross; the distractor was an upright or inverted face (Experiment 1), or a scrambled, non-face stimulus (Experiment 2). Participants were instructed to make a single saccade from fixation to the target as soon as the target appeared and to be as accurate as possible.

The timing between the onsets of the target (and distractor) and the central dot was varied using a fixation onset paradigm. This paradigm served to increase the range of participant RTs in order to facilitate a time-based analysis and, critically, to increase RTs so as to allow an in-depth examination of trajectory patterns at later time periods. Although a fixation offset paradigm – whereby a stimulus at fixation is removed prior to target onset – is a more commonly used procedure for manipulating response latency, it serves to decrease rather than to increase RTs (Saslow, 1967). As such, a fixation offset procedure could not meet our goals. The onset of a stimulus at fixation, on the other hand, produces the opposite effect of increasing RTs (Cabel, Armstrong, Reingold, & Munoz, 2000; Ross & Ross, 1980). RT increases as the delay between target onset and fixation point onset increases (Ross & Ross, 1980).
As there was no fixation point at the start of the trial, asking participants to fixate within the central grey annulus ensured that they were fixating at the location where the central onset would appear. In half of the trials, the target (and distractor) appeared first, followed by the onset of the central dot. In the remaining half of the trials, the display sequence of the target (and distractor) and the central dot was reversed, such that the dot appeared at fixation first, followed by the onset of the target (and distractor). The interval between the appearance of the central dot and the target (and distractor) was randomly determined for each trial in 50 ms steps from 0 to 200 ms. Thus, the stimulus onset asynchrony (SOA) between target (and distractor) and central dot onset ranged from -200 ms to 200 ms in 50 ms intervals, with negative values denoting trials in which the target (and distractor) appeared prior to the central dot, and positive values denoting trials in which the central dot appeared prior to the target (and distractor). An SOA of 0 represents simultaneous onset of the target (and distractor) and the central dot. While the onset of a stimulus prior to the target might serve as a warning stimulus in some tasks, the effect of the central dot as a warning stimulus in this study would be especially limited, as its appearance relative to the target (and distractor) varied between -200 and 200 ms, i.e. it was equally likely to occur before or after the target (and distractor). Participants were instructed to look to the target as quickly and as accurately as possible using one eye movement. Trials were separated by 800 ms. Participants completed 10 practice trials, followed by 12 blocks of 48 trials, for a total of 576 experimental trials.

4.2.2 Results

4.2.2.1 Data handling

Saccadic curvature was calculated using the quadratic fit method (Ludwig & Gilchrist, 2002).
Each saccade was rescaled to travel a common absolute distance, and the best fitting quadratic polynomial was determined. The amplitude of the saccade's curvature was measured using the quadratic coefficient, which is reported here in degrees of visual angle. The average trajectory on distractor-absent trials was subtracted from the trajectories collected on distractor-present trials, thereby compensating for idiosyncratic deviations in baseline trajectories across participants and generating a measure of the effect of the distractor on curvature. This was done separately for upward and downward saccades, as trajectories are known to vary depending on saccade direction (Viviani et al., 1977). Trajectories deviating away from the distractor were assigned negative values, while trajectories deviating towards the distractor were assigned positive values.

4.2.2.2 Trial exclusion

Trials were excluded if RTs were below 100 ms or above 500 ms (2.91% of all trials), if participants' first saccade went to the distractor (0.53%), if the first saccade landed neither at the distractor nor within a 4.30˚ diameter region around the target (12.28%), or if an individual's RT or trajectory curvature was more than 2.5 standard deviations above or below their mean value for that condition (2.99%).

4.2.2.3 Trajectory time course

4.2.2.3.1 Saccadic reaction time

For all analyses reported, if Mauchly's test of sphericity or Levene's test of equality of variances was significant (conservatively set at p ≤ .25), degrees of freedom and p-values were adjusted using Greenhouse-Geisser (if ε ≤ .70) or Huynh-Feldt (if Greenhouse-Geisser ε > .70) adjustments (Girden, 1992). Each participant's distractor-present data were sorted by RT and divided into quintiles, such that bin 1 represented the fastest 20% of their trials and bin 5 the slowest 20%. A repeated measures ANOVA was performed with distractor type (upright, inverted) and RT bin (1-5) as within-subject factors.
It revealed only a main effect of RT bin, F(1.19, 20.14) = 166.95, p < .001. Importantly, there was no differential influence of RT on the different types of distractors, suggesting that any distractor effects on saccadic curvature are not related to RT differences.

4.2.2.3.2 Saccadic trajectories

The mean trajectory for each RT bin was determined for each participant for the upright and inverted distractor conditions (Figure 4.2). A repeated measures ANOVA was conducted with distractor type (upright, inverted face) and trajectory bin (1-5) as within-subject factors. There was a significant main effect of trajectory bin, F(2.10, 35.74) = 7.97, p = .001, revealing greater deviation away from the distractor location (relative to baseline) as RT increased (see Footnote 8). Interestingly, there was no significant effect of distractor type, F(1, 17) = .004, p = .95, nor was there a significant interaction between distractor type and trajectory bin, F(3.50, 59.55) = 2.11, p = .11. Thus, although the distractor influenced saccadic trajectory relative to when there was no distractor, there was no significant differentiation between upright and inverted faces.

Footnote 8: To examine whether the use of four exemplar faces resulted in participants habituating to the distractor faces, we analyzed saccadic trajectory in the first as compared to the second half of the experiment. There were no significant differences between experiment halves, ps > .40, indicating that participants did not perform differently with distractor experience.

Figure 4.2 Saccadic trajectory deviations relative to no-distractor trials for trials in which the distractor was an upright or inverted face. While saccades increasingly deviated away from the distractor location as reaction time increased, there was no effect of distractor type.
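For concreteness, the curvature measure described in the Data handling section (rescaling each saccade to a common length and taking the quadratic coefficient of a fitted polynomial; Ludwig & Gilchrist, 2002) can be sketched in a few lines of Python. This is an illustrative reimplementation rather than the analysis code used here; the function name and the explicit rotation step are assumptions:

```python
import numpy as np

def quadratic_curvature(x, y):
    """Quadratic-fit curvature of a single saccade (hypothetical helper).

    x, y: sample coordinates in degrees of visual angle, ordered from
    saccade start to saccade end.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # Rotate the saccade so its start-to-end axis lies on the x-axis;
    # deviation then appears as the perpendicular (y) component.
    dx, dy = x[-1] - x[0], y[-1] - y[0]
    angle = np.arctan2(dy, dx)
    c, s = np.cos(-angle), np.sin(-angle)
    xr = c * (x - x[0]) - s * (y - y[0])
    yr = s * (x - x[0]) + c * (y - y[0])
    # Rescale to a common absolute distance (here, unit length) so the
    # quadratic coefficient is comparable across saccade amplitudes.
    xn = xr / np.hypot(dx, dy)
    # Fit yr = a*xn^2 + b*xn + c; the quadratic coefficient 'a' indexes
    # curvature amplitude, and its sign gives the direction of deviation.
    a, b, c0 = np.polyfit(xn, yr, 2)
    return a
```

In the analyses above, the sign of this coefficient is then recoded relative to the distractor's side (negative for deviation away), and the mean coefficient from distractor-absent trials is subtracted as a per-participant, per-direction baseline.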
4.2.2.4 Fixation onset effect

Using a repeated-measures ANOVA, the influence of the fixation onset paradigm on RTs was evaluated with distractor type (upright, inverted, or distractor absent) and SOA (-200, -150, -100, -50, 0, 50, 100, 150, 200 ms) as factors. There was a main effect of SOA, F(1.54, 26.26) = 39.21, p < .001, such that RTs increased with SOA (Ross & Ross, 1980). A main effect of distractor type was also noted, F(1.29, 21.86) = 4.79, p = .032, reflecting numerically slower RTs in the no-distractor condition (M = 252.72, SD = 40.98) than in the distractor-present conditions (upright: M = 246.42, SD = 40.83; inverted: M = 246.70, SD = 38.93), although no pairwise comparison reached significance, all ps > .10. The magnitude of the fixation onset effect was unaffected by distractor orientation. This pattern of results is consistent with previous findings that the remote distractor effect (Walker, Deubel, Schneider, & Findlay, 1997) is strongest when a fixation point offsets, and is actually absent when the fixation point remains on screen (Honda, 2005). The interaction was not significant (p > .05).

4.2.2.5 Error analysis

Saccades rarely landed at the distractor location, and there was no significant difference in the number of erroneous saccades landing at the upright versus the inverted face, p > .05. In addition to making erroneous saccades to the distractor, participants could also have made erroneous looks to a location that contained neither a distractor nor the target (i.e. to blank screen-space). The percentage of 'saccade to nothing' errors was calculated and submitted to a repeated-measures ANOVA with distractor type (absent, upright, inverted) as a within-subject factor.
Results showed a significant effect of distractor type, F(2, 34) = 9.27, p = .001, which was due to more errors being made when a distractor was present than when there was no distractor (upright vs. absent: t(17) = 3.22, p = .005; inverted vs. absent: t(17) = 4.01, p = .001; upright vs. inverted: p > .05). Error performance thus supports a lack of differentiation between upright and inverted face distractors.

4.2.3 Discussion

The results from Experiment 1 show a characteristic deviation away from the task-irrelevant distractor that is typical in time course analyses of saccade trajectory (e.g. McSorley et al., 2006). This shows that the distractor's presence has a significant influence on saccade execution. Interestingly, however, the present study revealed no evidence that upright faces were treated differently than inverted faces. This stands in contrast to results using other paradigms in which upright faces are processed differently than inverted faces, typically with upright faces showing a distinct advantage in recognition-based tasks (Valentine, 1988). Face orientation effects are often interpreted as evidence that upright faces receive specialized or additional processing via face-sensitive brain regions like the FFA (Yovel & Kanwisher, 2005). Importantly, however, face inversion effects are not consistently demonstrated in face detection tasks where additional face processing is unnecessary (Bindemann & Burton, 2008; Kanwisher, Tong, & Nakayama, 1998). Critically, the present study did not require participants to engage in a face recognition task; because the distracting faces were purely task irrelevant, in-depth face processing may not have occurred.

Although people do not have as much everyday experience with inverted as with upright faces, there is little doubt that inverted faces are still social stimuli.
Experiment 1's results do not demand the rejection of the hypothesis that socially-defined relevance is represented within the oculomotor system. The possibility remains that social stimuli are prioritized more than non-social distractor stimuli, and that the more in-depth processing necessary to differentiate between upright and inverted faces is not automatically engaged when the faces are presented as task-irrelevant distractors. In Experiment 2, upright and inverted distractor faces were replaced with scrambled versions of the same stimuli used in Experiment 1. A separate experiment was performed, rather than adding a third, scrambled face condition to a modified version of Experiment 1, because doing so would have substantially increased the number of trials, raising the concern that participant fatigue would compromise the data. By comparing the results of the two studies, it is possible to determine whether distracting faces received any prioritization over non-socially relevant stimuli that are nevertheless matched on other low-level visual features (e.g. size, contrast, and luminance).

4.3 Experiment 2: Scrambled faces

4.3.1 Methods

4.3.1.1 Participants

Participants were 18 volunteers (age range = 18-27 years) from the University of British Columbia who had not participated in Experiment 1. All participants gave informed written consent and participated in exchange for course credit or monetary remuneration. Thirteen participants were female, 16 were right handed, and all reported normal or corrected-to-normal vision.

4.3.1.2 Apparatus and procedure

Apparatus and procedure were identical to Experiment 1 with the exception that instead of images of upright or inverted faces, participants were shown scrambled versions of the same faces as distractors.
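Scrambling of this kind amounts to Fourier phase randomization: the amplitude spectrum of the image is kept while its phase is replaced, destroying recognizable structure. A minimal sketch of the general approach follows; it is illustrative only, not the authors' implementation, and the function name, greyscale input, and noise-derived phase field are assumptions:

```python
import numpy as np

def phase_scramble(img, rng=None):
    """Phase-scramble a greyscale image: keep the amplitude spectrum
    (spatial-frequency content and contrast energy) but randomize the
    phase, rendering a face unrecognizable."""
    rng = np.random.default_rng() if rng is None else rng
    amplitude = np.abs(np.fft.fft2(img))
    # Taking the phase of the FFT of real-valued noise guarantees the
    # conjugate symmetry needed for a real-valued reconstruction.
    phase = np.angle(np.fft.fft2(rng.standard_normal(img.shape)))
    phase[0, 0] = 0.0  # keep the DC component, i.e. mean luminance
    return np.real(np.fft.ifft2(amplitude * np.exp(1j * phase)))
```

Because only phase changes, the scrambled image remains matched to the original on spatial frequencies, mean luminance, and contrast energy, which is what makes it a suitable low-level control stimulus.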
Scrambled images were created using a two-dimensional Fast Fourier Transform and subsequent phase randomization and reconstruction, preserving the spatial frequencies, luminance, and contrast of the original image (West, Anderson, Ferber, & Pratt, 2011). Thus, on one third of trials only the target appeared, while on the remaining two-thirds of trials the target was accompanied by a nearby scrambled face.

4.3.2 Results

The focus of the following analyses was to compare results from Experiment 2 (scrambled face) with those from Experiment 1 (face), though analyses of the results from Experiment 2 alone are also reported where appropriate. As Experiment 1 revealed no significant difference between upright and inverted distractor faces, distractor type from Experiment 1 was collapsed and analyses compared face versus scrambled face distractors. Performing the trajectory analyses below with only upright faces or only inverted faces from Experiment 1 did not produce a meaningful change in the reported results.

4.3.2.1 Trial exclusion

Trials were excluded if RTs were below 100 ms or above 500 ms (7.63% of all trials), if participants' first saccade went to the distractor (0.16%), if the first saccade landed neither at the distractor nor within a 4.30˚ diameter region around the target (16.75%), or if an individual's RT or trajectory curvature was more than 2.5 standard deviations above or below their mean value for that condition (2.64%).

4.3.2.2 Trajectory time course

4.3.2.2.1 Saccadic reaction time

As in Experiment 1, a repeated-measures ANOVA with RT bin (1-5) showed the expected main effect, F(1.24, 21.11) = 173.67, p < .001, with RTs differing based on the binning procedure. Results from Experiments 1 and 2 were then compared in a mixed-factor ANOVA with distractor type (face, scrambled face) as a between-subject factor and RT bin (1-5) as a within-subject factor. This revealed only a main effect of RT bin, F(1.22, 41.44) = 339.61, p < .001.
Importantly, there was no differential influence of RT on the different types of distractors, suggesting that any effects on saccadic curvature are not related to RT differences.

4.3.2.2.2 Saccadic trajectories

The mean trajectory for each RT bin was determined for each participant. A repeated-measures ANOVA with trajectory bin (1-5) was performed on Experiment 2's data, demonstrating a main effect of trajectory bin, F(2.43, 41.23) = 6.96, p = .001. Figure 4.3 shows saccadic curvature across all RT bins for Experiments 1 and 2. Trajectories from both experiments were compared in a mixed factor ANOVA with distractor type (face, scrambled face) as the between-subject factor and trajectory bin (1-5) as the within-subject factor. There was a significant main effect of distractor type, F(1, 34) = 4.62, p = .04, with greater deviation away from face than from scrambled face distractors. There was also a significant main effect of trajectory bin, F(2.19, 74.38) = 13.27, p < .001, with greater deviation at longer RTs. Finally, there was a significant interaction between these factors, F(2.19, 74.38) = 3.25, p = .04. After Bonferroni correction for multiple comparisons, follow-up independent t-tests at each trajectory bin revealed a significant difference at the longest RT bin, t(29.06) = 3.67, p = .005. Thus, when participants were exposed to the distracting face and non-face stimuli for an extended time period, there was significantly greater saccadic trajectory deviation away from the face versus the non-face distractor (Figure 4.3).

Figure 4.3 Saccade trajectories (in degrees of visual angle) as a function of RT for face and scrambled, non-face stimuli. Positive trajectory values indicate greater deviation towards the distractor than baseline; negative values indicate greater deviation away from the distractor than baseline. Upright and inverted faces were shown not to differ and have been collapsed together.
Face distractors elicited greater deviation away at longer RTs than did scrambled, non-face distractors.

As one additional interest of the present study was in determining whether faces retain attention more so than scrambled stimuli, the trajectory values from the slowest two RT bins for both faces and scrambled faces were compared. As inhibition is likely not sustained indefinitely (McSorley, Haggard, & Walker, 2009), evidence of saccade trajectories returning to baseline might be expected at the longest RTs. If attention is maintained at the distractor location, however, such as in the face condition, it would be reasonable to expect that inhibition should also be maintained in order to facilitate target selection. Thus, it was anticipated that the slowest RT bin for saccades made in the presence of a scrambled face would show evidence of returning to baseline, whereas this would not be the case when the distractor was a face. That is, the slowest time bin should deviate away less than the second slowest time bin. Paired samples t-tests of the two slowest RT bins confirmed this prediction: faces, t(17) = 1.65, p > .05; scrambled faces, t(17) = 2.51, p = .022.

4.3.2.3 Fixation onset effect

For Experiment 2 alone, mean RTs across SOAs (-200, -150, -100, -50, 0, 50, 100, 150, 200 ms) and distractor presence (absent, present) were submitted to a repeated-measures ANOVA. There was a main effect of SOA, F(1.96, 33.27) = 26.08, p < .001, such that RTs were slower when the fixation point onset after the target (and distractor) had already appeared. There was also a main effect of distractor, F(1, 17) = 6.40, p = .022, reflecting small but significantly slower responses when only a target was present versus when a distractor was also present. The interaction was not significant, p > .05.
Once again the fixation onset paradigm had the anticipated and desired effect on RTs.

To compare the effects of distracting face and scrambled face images, mean RTs across SOAs were submitted to a mixed factor ANOVA with distractor type (face, scrambled face), distractor presence (absent, present), and SOA as factors. There was a main effect of SOA, F(1.87, 63.58) = 64.27, p < .001, such that RTs were slower when the fixation point onset after target (and distractor) onset. There was also a main effect of distractor presence, F(1, 34) = 11.66, p = .002, such that distractor-present trials were significantly faster than distractor-absent trials (see discussion in Experiment 1). No other effects or interactions were significant (all ps > .05).

4.3.2.4 Error analysis

No comparison of erroneous saccades to the distractor could be made within Experiment 2 alone, as only a single distractor type was used. Thus, error rates were compared across Experiments 1 and 2. Though it was very rare for saccades to land on the distractor, there was nevertheless a trend for participants to make more erroneous saccades to the distractor when it was a face (M = 0.98%, SD = 1.20%) than when it was a scrambled face (M = 0.37%, SD = 0.53%), t(23.28) = 1.99, p = .059, which broadly supports the view that faces capture overt attention more so than do non-face distractors. Because error rates were so low, however, no conclusions will be drawn from this finding alone.

In addition to making erroneous saccades to the distractor, participants could also have made erroneous looks to a location that contained neither a distractor nor the target (i.e. to blank screen-space). The percentage of 'saccade to nothing' errors was calculated for trials with and without a distractor and submitted to a mixed-factor ANOVA with distractor presence (present, absent) as a within-subject factor and distractor type (face, scrambled face) as a between-subject factor.
Results revealed that participants made significantly more erroneous saccades that did not land at any object when a distractor was presented alongside the target (M = 16.83%, SD = 10.67%) than when it was not (M = 13.92%, SD = 9.38%), F(1, 34) = 26.71, p < .001. No other main effects or interactions were significant (all ps > .05). Thus, the presence of a distractor made participants less accurate overall in their saccades, but the identity of the distractor (i.e. face or scrambled face) did not impact error rates.

4.4 General discussion

The current experiments demonstrated that a distracting face has a greater impact on saccadic trajectory than a scrambled version of the same image, but only later in saccade planning (i.e. when RTs were longest). This effect was not specific to upright faces; inverted faces behaved similarly to upright faces, but differently from scrambled, non-face distractors. Even though the distractor's identity (face vs. non-face) was irrelevant to the task, when RT was long, face and non-face distractors produced measurably different effects on saccade execution. This suggests that the broad social relevance of a face may have strengthened the distractor's representation within the oculomotor system's priority map.

While it has been shown that other features that increase the relevance of a distractor also create more interference, previous reports have primarily manipulated relevance as it is defined within the task. For example, when a distractor onsets in the same colour as the saccade target, its relevance to the task at hand increases due to its similarity to the target (Ludwig & Gilchrist, 2003). As a result of these goal-driven signals, a distractor that shares the target's colour produces greater trajectory modulations.
Similarly, when a distractor shares the location of the target on other trials within the same study, it becomes more task-relevant than one which onsets at a location where a target never appears. As such, the distractor at the possible target location may be afforded an initial boost in activation within the priority map, which alters its impact on resultant saccadic trajectories (McSorley, Haggard, et al., 2009). What these previous studies do not speak to is whether distractors that vary in their relevance to the participant beyond the current task will also impact the strength of that distractor's interference during saccade planning. Here, it is shown that they do.

These findings are broadly supported by previous trajectory studies that have examined how threatening or taboo distractors impact saccade trajectory. For example, semantically salient (e.g. taboo) words have been shown to hold attention longer at their location, causing deviations away at long distractor-target SOAs (Weaver, Lauwereyns, & Theeuwes, 2011). Similarly, emotional scenes also cause greater deviation away when compared to neutral scenes (Nummenmaa et al., 2009). Threatening or emotional stimuli can be considered broadly relevant, as they might signify an immediate threat to the observer. Unlike these previous studies, which presumably relied on an emotional reaction to the distractor to elicit stronger oculomotor interference, the present results are the first to show that relatively emotionally neutral stimuli can cause greater deviation away from their location.
Interestingly, it has been found across several studies that the emotional status of the stimulus is only represented in trajectory measures if given sufficiently long processing time, which was true to a lesser extent in the present results as well, and is consistent with findings that information about an object's value or relevance is integrated within a priority map later than information concerning its salience (Schütz et al., 2012). The relative slowness of these effects on saccadic trajectory suggests that task-irrelevant distractor-specific details may not be immediately available but instead become integrated over time into the distractor's representation within the priority map. This stands in contrast to findings of rapid face detection and/or processing, which occurs within as little as 100 ms (Braeutigam, Bailey, & Swithenby, 2001; Crouzet, Kirchner, & Thorpe, 2010), possibly due to the involvement of fast subcortical face-sensitive regions which include the superior colliculus (Johnson, 2005). Indeed, others have suggested that subcortical activation to social stimuli is impaired in those with ASD, which could account for behavioural differences in social orienting (Kleinhans et al., 2011). However, these studies did not always control the social and emotional levels of the stimuli, making it difficult to parse out any role of fast subcortical face-specific routes independent of their emotional content. Further, the idea that the subcortical route through the SC via the amygdala plays a major role in processing affective and biologically significant stimuli has recently been challenged, and it has additionally been shown that cortically-mediated visual processing can occur as rapidly as supposedly subcortically-mediated emotional processing (Pessoa & Adolphs, 2010).
This leads us to make a cautionary note about whether the difference in the timing of our effects compared to findings of rapid face detection is due to 'slow' cortical versus 'fast' subcortical face processing, respectively. While it is possible that the oculomotor system or the attentional networks feeding into this system display activity consistent with a sensitivity to biological or social stimuli, future investigations will need to distinguish between the social and emotional relevance of stimuli and also focus on how this information reaches the priority map in order to more confidently conclude what is driving rapid versus slower-building changes in stimulus processing.

Though Schmidt and colleagues (2012) did not find trajectory differences between neutral faces and non-face distractors, the present results can nevertheless be reconciled with theirs. Considering the effect reported here is most pronounced in long-latency saccades, their choice to average all RTs together to generate an overall trajectory measure may have masked any differences that could have been present in their data. The present finding of greater deviation away from an upright or inverted face over a non-face stimulus is also broadly consistent with studies outside of the saccadic trajectory literature that demonstrate a strong attentional bias to attend to faces. These results also support previous reports that faces retain attention (Bindemann et al., 2005), in that faces caused greater deviation away at the longest saccadic latencies. Within a context in which faces are task-irrelevant, however, there is little direct evidence of this attentional maintenance, and even less documenting its time course, which the present study provides. As suggested by McSorley, Haggard, and colleagues (2009), inhibition of the distractor may not be maintained indefinitely. Rather, inhibition may reach a peak, and then slowly release and return closer to baseline levels.
This would manifest as an increase in saccadic trajectory deviation away from the distractor, followed later by a gradual return to the participants' distractor-absent baseline level. Indeed, this was observed for scrambled distractor trials: trajectories collected from the slowest RT bin deviated away from the distractor less than did the trajectories from the second slowest RT bin. This was not the case for the face trajectories (upright and inverted faces combined), however, suggesting that inhibition was maintained longer for faces, likely due to stronger competition from the distractor face than the scrambled face.

While the observed effects are described in the context of distractor inhibition, which is a common viewpoint in the field (Laidlaw & Kingstone, 2010; McSorley, Haggard, et al., 2009; Van der Stigchel et al., 2007; Walker et al., 2006), it is worth noting that recently there has been some debate about the mechanisms underlying the deviation of saccades away from a distractor. Extracellular recordings of distractor-related activity in monkey SC failed to show early differentiation in spike rate when saccades deviated towards or away from the distractor (White et al., 2012). Some have speculated that other areas, such as the frontal eye fields or posterior parietal cortex, may play a role in 'storing' some types of inhibitory tags that are transmitted to the SC only right before saccade execution (e.g., for memory-based inhibition, Belopolsky & Van der Stigchel, 2013). Others have speculated that distractor inhibition may be related less to top-down inhibition and more to distractor-related disinhibition of the SC via the substantia nigra pars reticulata. However, in the same task, White and colleagues also found strong correlations between distractor activation and deviation just prior to saccade execution.
Though this related activity may be too late to affect trajectory, it is worth noting that stimulation of the SC within that short time window has been shown to cause deviation towards the distractor (McPeek, Han, & Keller, 2003), suggesting that it is at least plausible that changes in activity (i.e., suppression) so close to saccade execution may also be responsible for saccade deviation. Others have argued that 'Mexican-hat' shaped lateral interactions could account for some instances of deviation away from the distractor (Wang et al., 2012). More research will be necessary to determine the exact mechanism behind deviation away from a distractor. Importantly, the conclusions of the present studies need not be tied to a particular manner by which inhibition is applied. The core result of the current experiments is that faces are considered more relevant than non-face stimuli by the oculomotor system, which arguably increases distractor-related activity within the SC that subsequently influences oculomotor behaviour.

Here, face orientation did not influence results, which has been reported elsewhere (Langton et al., 2008; Ro, Russell, & Lavie, 2001; Theeuwes, Van der Stigchel, & Olivers, 2006). A pilot study using a similar experimental procedure but with post-stimulus masks and briefer presentation times (to increase task difficulty) confirmed that participants were easily able to distinguish between the distractor types and were significantly more accurate at identifying the distractor type than would be expected by chance.9 As such, it can be concluded confidently that participants were able to distinguish between the upright and inverted faces, but that face orientation did not differentially impact saccade trajectory. When face inversion effects are observed elsewhere, researchers have suggested that it may be due in part to upright faces receiving 'privileged' specialized processing by face-sensitive brain regions such as the FFA. However, face inversion effects appear to be strongest within recognition or discrimination tasks (Freire, Lee, & Symons, 2000; Yin, 1969), and have not been as consistently reported within simple face detection tasks (Bindemann & Burton, 2008; Kanwisher et al., 1998). In the present study, faces were task-irrelevant, suggesting that participants may have merely detected them as faces rather than processed them in depth, which could explain the lack of an inversion effect in the present results. An alternative possibility is that while differentiation between upright and inverted faces occurred within other brain regions such as the FFA, these signal differences were lost or not well represented at the level of the oculomotor system's priority map. Future research could manipulate the depth by which the distractor face stimuli are processed, thereby enabling one to better understand the role that prior processing within face-specific regions plays in determining the strength of a distractor's representation within the oculomotor system's priority map.

Footnote 9: Twelve naive participants (age range = 18-21 years, seven female, 11 right handed, all with corrected-to-normal or normal vision) completed a distractor-identification experiment. Stimuli were identical except that distractor stimuli were presented for 75 ms, and then masked for 250 ms by a black and white random pattern mask in order to make the identification task more challenging. Further, distractors were presented on every trial and could be upright, inverted, or scrambled faces, with equal probability. No fixation onset procedure was used (e.g. no grey annulus onset). Participants maintained central fixation and indicated via key press after the trial which distractor had appeared; feedback was provided after each response. Three blocks of 64 trials were analyzed. Analyses revealed that overall, participants correctly identified the distractor significantly more often than chance (chance performance: 33.33%; correct range: 70% - 90%), and for each of the three distractor types participants selected the correct distractor significantly more often than the other two options (all comparisons, ps < .05).

Finally, it is prudent to address a potential limitation of the present study, which is that while we have observed differences between face and non-face stimuli, it remains possible that what is being observed is not specific to the processing of social relevance. Though we aimed to claim that social relevance is the main driving difference between the results of the two experiments, the lack of an inversion effect for faces in Experiment 1, coupled with the fact that we did not directly compare trajectory behaviour in response to another non-social but meaningful distractor (e.g. cars, houses), permits the possibility that these effects reflect a more general influence of object meaningfulness on distractor prioritization rather than social relevance. With this qualification in mind, the current study nevertheless demonstrates that at least for saccade planning and initiation, relevance generated based on an object's meaningfulness (or more specifically, possibly social stimulus relevance) is incorporated into the distractor's representation, but relatively late, suggesting that rapidly processed face information is not the sole carrier of the prioritization information; instead, slower-building relevance information may feed into the priority map at a later time.
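As a concrete illustration of the latency-binned analyses discussed above (averaging trajectory deviation within bins of saccadic reaction time, from fastest to slowest), a minimal sketch follows. The function name and data layout are our own, for illustration only; the thesis does not report its analysis code.

```python
import numpy as np

def deviation_by_rt_bin(rts_ms, deviations, n_bins=4):
    """Bin trials by saccadic reaction time and average trajectory
    deviation within each bin (sign convention is up to the analyst,
    e.g. negative = deviation away from the distractor).

    rts_ms, deviations: one entry per trial. Bin edges are RT
    quantiles, so each bin holds roughly equal trial counts.
    Returns a list of mean deviations, fastest bin first.
    """
    rts = np.asarray(rts_ms, dtype=float)
    devs = np.asarray(deviations, dtype=float)
    edges = np.quantile(rts, np.linspace(0, 1, n_bins + 1))
    # Interior edges only; clip so the maximum RT falls in the last bin.
    idx = np.clip(np.digitize(rts, edges[1:-1]), 0, n_bins - 1)
    return [devs[idx == b].mean() for b in range(n_bins)]
```

Comparing the resulting bin means (e.g. slowest versus second-slowest bin, as in the scrambled-distractor result above) is what reveals whether deviation away peaks and then returns toward baseline.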
Future studies concerned with whether these effects translate to even more realistic task paradigms and to social stimuli showing different expressions will continue to provide a better understanding of what features within the environment guide a viewer's attention and actions.

Chapter 5: Social attention to features – Evidence for non-volitional orienting to the eyes, but not the mouth of face images

While the first half of this thesis has primarily explored the initial selection of social stimuli, both overtly and covertly, the following chapters focus instead on how attention is directed within a social stimulus (i.e. a face) following the initial look to the person. Unlike in Chapters 2 and 3, where people appeared less willing to look at the confederate, there is reason to suspect that overt looking behaviour may play a stronger role after a person is initially fixated. For one, there is no longer much need to camouflage one's attentional focus using covert orienting. Further, selection within a stimulus may occur due to a need to process detailed information (e.g. small facial expressions or shifts in gaze). This latter point is supported by the many computer-based studies in which participants show a distinct preference to look to the eyes of others, which is arguably the facial feature with the greatest social and visual detail (Baron-Cohen, 1995; Janik, Wellens, Goldberg, & Dell'Osso, 1978; Walker-Smith et al., 1977; Yarbus, 1967). What has been unclear from these previous findings is whether the bias to overtly attend to the eyes represents a volitional preference, or a non-volitional 'pull'. In Chapter 5, I explore this question by presenting images of faces to participants while they attempted to follow basic instructions on where to look within the face.
Chapter 5 uses a computer-based task for an important reason: the inherent pre-selection of stimuli within computer-based tasks is ideal for studying how attention is oriented within a social stimulus once that stimulus has been selected for further inspection. Unlike in real life, where participants choose where and when to look, computer-based paradigms are designed to focus participants' attention on the relevant stimuli.

A version of Chapter 5 has been published: Laidlaw, K.E.W., Risko, E.F., & Kingstone, A. (2012). A new look at social attention: Orienting to the eyes is not (entirely) under volitional control. Journal of Experimental Psychology: Human Perception and Performance, 38, 1132-1143.

5.1 Introduction

When looking at another's face, fixations tend to cluster around the internal features, which include the eyes, mouth, and nose (Henderson et al., 2005; Walker-Smith et al., 1977). Of these, the eyes are the most frequently fixated (Henderson et al., 2005; Pelphrey et al., 2002; Walker-Smith et al., 1977; Yarbus, 1967). Even when faces are presented within the context of a complex scene, the eyes receive more looks than would be expected given their size and salience (Birmingham et al., 2008a, 2009b). That people show a bias to attend to another's eyes is not altogether surprising. After all, the eyes provide the looker with valuable social information concerning the person's intentions, emotions, and attentional focus (Baron-Cohen, 1995). What is surprising is how little is known about how this eye bias is controlled: what mechanisms underlie the tendency for attention to be directed to the eyes of another? The aim of this chapter is to explore whether the bias to attend to the eyes is non-volitionally elicited by the stimulus itself, or if instead this bias is volitionally controlled.
Orienting to the eyes may be reflexive, such that the eyes may 'draw' attention non-volitionally to their location in a manner analogous to an abrupt onset or an irrelevant singleton (Theeuwes, Kramer, Hahn, & Irwin, 1998). If this is true, then attention may be non-volitionally oriented to the eyes, regardless of task. Evidence for this strong reflexive stance is sparse and largely indirect. For example, it might be argued that because the eyes can direct attention reflexively to a gazed-at location (Driver et al., 1999; Friesen & Kingstone, 1998; Frischen et al., 2007; Kuhn & Kingstone, 2009), the eyes must also non-volitionally draw attention to themselves (Itier, Villate, et al., 2007). However, orienting attention is not a unitary phenomenon; rather, orienting is thought to reflect three behaviourally and neurologically dissociable components: disengagement, movement and engagement (Posner, Petersen, Fox, & Raichle, 1988; Posner & Petersen, 1990). That the eyes can non-volitionally shift attention does not require that they also non-volitionally engage attention. Similarly, findings that direct gaze can delay the deployment of attention in visual search only support a bias to continue to look to the eyes (Senju & Hasegawa, 2005; von Grünau & Anston, 1995), but do not speak to whether attention is engaged non-volitionally. Supporting the notion that there may be a volitional component is work such as Itier et al. (2007) and Birmingham et al. (2008a), which demonstrates that task demands can influence the extent to which the eyes are fixated.
Of course, that volitional control of attention can modulate orienting to the eyes does not necessitate the conclusion that orienting to the eyes is normally or entirely volitional, just as the finding that expectations and context can influence orienting to abrupt onsets does not demand the conclusion that orienting to onsets is typically or entirely volitional (Folk, Remington, & Johnston, 1992; Theeuwes, 2010; Yantis & Jonides, 1990).

In sum, although it is clear that there is a bias to attend to the eyes, current evidence cannot distinguish between a non-volitional or volitional account for this behaviour. Specifically, it is unclear whether orienting to the eyes is entirely under volitional control, or whether this process is also partly driven outside of one's ability to control it. To differentiate between these alternatives, it is necessary to devise a task that pits the putative non-volitional processes that draw overt attention towards the eyes against volitional processes to look away. Such a test would reveal if looking at the eyes is truly non-volitional, or if it is controllable by top-down motivations or goals. The present study introduces a task that provides precisely such a test: the "Don't Look" (DL) task. In the current experiments, participants are shown a series of faces. In one condition they are instructed to look at the faces normally (Free Viewing condition), and in the other condition they are instructed to avoid either the eyes (DL: Eyes), or as a control, the mouth (DL: Mouth). While the Free Viewing condition provides a measure of natural looking behaviour to the eyes, the DL: Eyes condition requires participants to work against the natural inclination to orient to the eyes, thereby placing volitional and any supposed automated processes in direct competition.
The DL: Mouth condition provides a suitable control and comparison condition, as there is also a bias to attend to the mouth of faces (Henderson et al., 2005; Walker-Smith et al., 1977), but avoidance of this feature may reveal different mechanisms controlling gaze behaviour than those that underlie attention to the eyes. It is worth noting that our DL conditions represent an interference paradigm akin to that proposed by Jacoby (1991) in his process dissociation procedure, whereby the contribution of any automatic factors is associated not with improved performance on a task, but instead contributes to increased errors. Thus, any inability to "not look" provides evidence for a non-volitional contribution to orienting to the eyes.

5.1.1 Experiments overview

A decrease in fixations to the eyes during the DL condition would provide evidence for some volitional control of orienting to the eyes, but would be insufficient in and of itself to eliminate the possibility that the bias to attend to the eyes is also partly beyond one's control. To determine whether orienting to the eyes is at least somewhat outside of volitional control, the frequency and duration of looks to the eyes when told to avoid the feature can be compared against two measures: i) against the frequency and duration of looks to the mouth during the DL: Mouth condition, and ii) against measures derived to represent chance performance (e.g. the frequency and duration of looks to the eyes had participants' fixations been randomly distributed, described in detail in Data Handling; see also Bindemann, Scheepers, & Burton, 2009). First, by comparing performance across the DL: Eyes and DL: Mouth conditions, a relative measure of the extent of volitional control as a function of feature type can be determined.
If the bias to look at the eyes is a result of the eyes drawing overt attention to themselves in a non-volitional manner, and this 'automatic' component is stronger than what may be driving attention to the mouth, then performance in the DL: Eyes condition should be worse (e.g. more errors should be made) than performance in the DL: Mouth condition. Otherwise, performance in the DL: Eyes condition should be comparable to that demonstrated in the DL: Mouth condition. Additionally, and most critically for our purposes, performance in both conditions can also be compared against chance. If fixations to the eyes in the DL: Eyes condition decrease but nevertheless remain above chance levels, then this would support the hypothesis that although volitional control can modulate orienting, there exists a significant non-volitional component that drives attention to the eyes. The same requirements stand for determining whether looks to the control feature, the mouth, are driven by volitional and/or non-volitional mechanisms. If, however, participants suppress looking at either DL feature to levels at or below those predicted by chance, then this would suggest that orienting to that feature is fully under the participants' volitional control. Based on the existing body of evidence concerning the importance that eyes and eye gaze play in people's everyday lives, the hypothesis is that orienting to the eyes, albeit modifiable by volitional control, is at least partly beyond an individual's control, and will therefore manifest as a bias to look to the eyes even during the DL: Eyes condition. In contrast, the DL: Mouth condition will reveal that any bias to look to the mouth during Free Viewing is not due to automated orienting, and is instead driven in a volitional manner.
In accordance with these two predictions, participants should therefore make more looks to the eyes during the DL: Eyes condition than participants make to the mouth during the DL: Mouth condition.

5.2 Experiment 1: Upright faces

5.2.1 Methods

5.2.1.1 Participants

Thirty-two students (mean age: 21.22 years) took part in the study in exchange for course credit or remuneration. Participants were divided into two 'Don't Look' groups (see Stimuli and Procedure): the DL: Eyes group (10 females, 6 males), and the DL: Mouth group (11 females, 5 males). Due to an insufficient number of male participants, gender analyses were not performed. The participants' ethnicity and cultural background were not recorded. All participants gave informed written consent.

5.2.1.2 Stimuli and procedure

Participants sat 60 cm away from a 17-inch monitor. Eye movements were recorded using an EyeLink 1000 desktop-mounted eye tracker (SR Research). Stimuli were portrait photos (from Minear & Park, 2004) of 40 young Caucasian adults (20 males, 20 females) taken against a white background. Faces were presented with an average visual angle of 12.73˚ x 18.43˚. All faces displayed a neutral expression. Images were divided into two sets, each containing 10 males and 10 females. Participants viewed each set once. The order of the photographs within each set was randomized and sets were counterbalanced across conditions. Each block started with a standard nine-point eye tracking calibration and validation procedure. A screen then appeared which detailed the task instructions for that condition. Every participant completed two counterbalanced conditions: (1) Free Viewing, in which they were instructed to look at the faces as they normally would, and (2) DL, in which they were told to completely avoid looking at a particular feature of the face.
Half of the participants were instructed to avoid looking at the eyes (DL: Eyes), and as a control, the other half of participants were told to avoid looking at the mouth (DL: Mouth). Each trial commenced with the onset of a drift-check point, which randomly appeared at one of the four corners of the screen and was used to ensure that the eye-tracking set-up was still accurate. The drift-check point also served as an initial fixation point to ensure that participants did not start the trial fixating the face. This first fixation was excluded from analyses. After the drift-check point offset, a photo appeared and was displayed for five seconds. Blocks were separated by a self-paced break.

5.2.2 Results

5.2.2.1 Data handling

For the purposes of data analyses, regions of interest (ROI) were hand-drawn around the eyes and mouth for each face. The top left panel of Figure 5.1 shows the ROIs used for one representative image. Across participants, for the Free Viewing and DL conditions, the mean area of the eyes ROI was significantly smaller than the area of the mouth ROI, both conditions: t(31) = 80.31, p < .001. Thus, any increase in errors in the DL: Eyes condition relative to the DL: Mouth condition cannot be attributed to the eyes taking up a larger area than the mouth.

Though for the main hypothesis it was not necessary to take into consideration the differing sizes of the eye and mouth ROIs, it is common to area normalize when ROIs are directly compared (Bindemann et al., 2009; Birmingham et al., 2008a, 2008b; Fletcher-Watson et al., 2008). Further, certain normalization procedures can provide additional, non-redundant information over and above that provided by using non-normalized data. Here, only normalized analyses are reported unless it is not logical to do so (e.g. when analyses involve fixations across the whole screen and trial; see Gaze Behaviour across Free Viewing and Don't Look Blocks).
To normalize, the percentage of all fixations on the screen that landed within the ROI was divided by the pixel area of the ROI, also expressed as a percentage of the area of the full screen (Bindemann et al., 2009; Fletcher-Watson et al., 2008). A normalized value of one indicates that the region was fixated as much as would be expected if the participant's fixations had been random; for example, if 10% of all fixations landed on an ROI that was 10% of the total screen, the area-normalized value would equal one. A value significantly less than or greater than one indicates avoidance or selection of the area, respectively.

Here, we interpret values significantly greater than one to indicate a bias to attend to that region, whereas values less than or equal to one demonstrate suppression of any previously observed bias, and/or avoidance of the region. Thus, comparisons of the normalized values against chance performance avoid relying on comparisons of non-normalized error rates against a baseline of zero. Comparison against "no looks" is inappropriate as it sets an unreasonably low baseline criterion: even if fixations were distributed randomly across the screen, some portion of fixations would land on the to-be-avoided feature. Such a comparison therefore sets too liberal a definition for non-volitionally-driven looking. Therefore, the present use of chance behaviour as a comparison defines non-volitional activity conservatively; if comparisons demonstrate that gaze to the DL feature is significantly above chance, then this is strong evidence of a non-volitional component to attentional orienting. It is of course true that any gaze behaviour that is observed to be significantly above chance levels will also be significantly greater than zero (e.g. "no looks"). It is worthwhile to note, however, that the effects and all conclusions reported here using normalized values are identical when non-normalized values are used.10
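The area-normalization rule just described reduces to a one-line calculation; the following sketch illustrates it (function and variable names are our own, not from the thesis):

```python
def area_normalized_score(roi_fix_pct, roi_area_px, screen_area_px):
    """Divide the percentage of on-screen fixations landing in an ROI
    by the ROI's area expressed as a percentage of the full screen.

    A value of 1 marks chance-level looking (fixations proportional to
    the ROI's share of the screen); > 1 indicates a bias toward the
    ROI; < 1 indicates avoidance or successful suppression.
    """
    roi_area_pct = 100.0 * roi_area_px / screen_area_px
    return roi_fix_pct / roi_area_pct

# Worked example from the text: if 10% of fixations land on an ROI
# covering 10% of the screen, the normalized value is exactly 1.
```

Comparing these per-participant scores against 1 (e.g. with one-sample t-tests, as in the analyses reported below) is what distinguishes a genuine attentional bias from looking that is merely proportional to feature size.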
Where Levene's test for equality of variances was significant (set liberally at α = .25 for tests of assumption violations), adjusted degrees of freedom and p-values are reported. Unless indicated, analyses were completed with fixation count (number of fixations to the ROI) and overall fixation dwell time (total time spent on the ROI across all fixations).

Footnote 10: This excludes analyses of normalized values against one, as there is no equivalent comparison to be made using non-normalized values. It also excludes analyses between early and late viewing periods: normalization is required in order to equate for the difference in duration of these two time periods.

5.2.2.2 Free Viewing block

In order to examine the components that contribute to a general preferential bias to look at the eyes of other faces, it should first be demonstrated that a bias toward the eyes can be observed during the Free Viewing block. Consistent with previous literature (Henderson et al., 2005; Walker-Smith et al., 1977; Yarbus, 1967), separate mixed-factor ANOVAs on fixation count and dwell time, with ROI type (within; eyes, mouth) and group (between; DL: Eyes, DL: Mouth) as factors, revealed that participants in the Free Viewing condition made more fixations, normalized values: F(1, 30) = 55.14, p < .001, and looked longer overall, F(1, 30) = 47.09, p < .001, on the eyes than on the mouth. The main effect of group and its interaction with ROI type were not significant, either for fixation count or dwell time (all ps > .05).

To examine whether participants showed a bias to attend to both the eyes and/or the mouth, normalized values were compared against one. All tests were significant, both for fixation count, [eyes: t(31) = 8.97, p < .001; mouth: t(31) = 7.34, p < .001] and dwell time, [eyes: t(31) = 8.31, p < .001; mouth: t(31) = 6.48, p < .001]. Although participants looked more at the eyes than the mouths during Free Viewing, looks to both features were nevertheless above chance levels, indicating that participants showed a consistent bias to look at both the eyes and the mouths of the images. Figure 5.1 illustrates participant performance as a heat map of fixations over the stimuli, whereas Table 5.1 lists the non-normalized and normalized performance values across all conditions.

Figure 5.1 Heat maps displaying the distribution of all participants' fixations over an amalgamated face (generated from combining all faces used in the Experiment) for each of the three viewing conditions in Experiment 1. Warmer colours denote greater concentration of fixations. Top left quadrant provides an example image of an upright face used in Experiment 1; yellow lines depict representative regions of interest used for data analyses.

Table 5.1 Experiment 1: Upright faces. Mean per-trial fixation number and dwell times (ms) on ROIs; performance difference between Free Viewing and the relevant Don't Look condition.

                                  Non-normalized values                 Normalized values
                                  Eyes ROI         Mouth ROI            Eyes ROI         Mouth ROI
Condition                         Number  Dwell    Number  Dwell        Number  Dwell    Number  Dwell
Free Viewing                      2.63    938.78   0.60    198.17       44.67   51.66    6.06    6.73
Don't Look: Eyes                  0.32    96.49    0.91    492.66       6.53    5.37     12.02   15.84
Don't Look: Mouth                 2.05    947.83   0.05    12.46        41.35   49.12    0.53    0.41
Relevant Condition Difference     2.31    842.29   0.56    185.71       38.14   46.29    5.52    6.32

5.2.2.3 Don't Look block

5.2.2.3.1 Full viewing period

It has been demonstrated that there is a significant bias to attend to the eyes and mouths of the images in the present task, and these results support previous findings that this bias is strongest for the eyes relative to the mouth.
Given this, results from the DL conditions are examined to determine whether the biases observed in Free Viewing reflect a purely volitional process, or whether a non-volitional component also significantly influences behaviour.

First, performance across the DL: Eyes and DL: Mouth groups was compared. Recall, it was predicted that if attention is non-volitionally drawn to the eyes more than to the mouth, then participants in the DL: Eyes group should perform worse (e.g. make more errors) than participants in the DL: Mouth group. Consistent with this, it was found that participants in the DL: Eyes group made significantly more fixations, t(15.34) = 3.02, p = .01, and spent more time, t(15.35) = 2.87, p = .01, on the eye ROI than participants in the DL: Mouth group did on the mouth ROI (Figure 5.2 and Figure 5.3). This analysis provides a relative measure of the non-volitional orienting to the different features: the eyes attracted overt attention relatively more than did the mouth.

To determine whether either group showed a bias to attend to their respective DL feature over what would be expected by chance alone, normalized looking behaviour in both groups was compared against one. If attention to the eyes is purely volitional, then participants should be able to avoid the eyes when instructed to do so, resulting in looks to the eyes that will be at or below levels that would be observed if the images were fixated randomly. Otherwise, automated orienting beyond the participants' control should cause participants to make erroneous fixations to the feature that they are told to avoid. Participants in the DL: Eyes group made significantly more errors than would be expected had fixations been random, both in terms of fixation number, t(15) = 2.80, p = .01, and overall dwell time, t(15) = 2.54, p = .02, indicating an inability to eliminate the bias to attend to the eyes volitionally (i.e., via instruction).
In contrast, those in the DL: Mouth group actually made significantly fewer errors than would be expected by chance [count: t(15) = 2.23, p = .04; dwell: t(15) = 3.19, p = .01], demonstrating a clear and strong ability to volitionally suppress the bias to attend to the mouth. In sum, although participants showed a bias to attend to both features during Free Viewing, only the DL: Eyes group was unable to suppress this bias, indicating that there exists a non-volitional component to the typical preferential bias to look at the eyes of others.

Figure 5.2 Non-normalized average fixation dwell time per trial to the to-be-avoided feature (eyes or mouth) during the Don't Look block for upright (Experiment 1) and inverted (Experiment 2) faces. In all figures, error bars denote standard error. For this and all figures in this chapter, measures of fixation count to the to-be-avoided feature show the same trends as demonstrated with dwell time.

Figure 5.3 Normalized average fixation dwell time per trial into the to-be-avoided feature (eyes or mouth) during the Don't Look block for upright (Experiment 1) and inverted (Experiment 2) faces. Chance performance is indicated by a value of one (i.e. the horizontal axis).

5.2.2.3.2 Early versus late viewing periods

Volitional processes are typically thought to operate more slowly than automatic processes; thus it is possible that evidence for automatic orienting to the eyes would be stronger early on, whereas volitional processes might not be made available until later in the trial. As such, we compared performance during the first second of viewing (the early viewing period; i.e.
any fixations ending within 1000 ms) against performance after the first second of viewing had passed (the late viewing period; i.e. any fixations starting after 1000 ms). For consistency, analyses reported here are from normalized values. Analyses performed on the proportions of fixations to a region reveal the same effects as those reported here.

In an ANOVA with group (DL: Eyes; DL: Mouth) and viewing period (early, late) as factors, the main effect of DL group was significant, [count: F(1,30) = 11.04, p = .002; dwell: F(1,30) = 11.70, p = .002]. For dwell, but not for fixation count, there was a significant main effect of viewing period, [count: F(1,30) = 2.42, p = .13; dwell: F(1,30) = 4.68, p = .04], as well as a significant interaction between viewing period and group, [count: F(1,30) = 2.05, p = .16; dwell: F(1,30) = 4.16, p = .05]. Investigation of the interaction revealed that for the DL: Eyes group, the overall duration of errors made within the early viewing period was significantly greater than those made within the later viewing period, t(15) = 2.12, p = .05; this was not the case with the DL: Mouth group (p > .05). Figure 5.4 displays normalized dwell time across early and late viewing periods.

While the errors made by the DL: Eyes group in the early viewing period were proportionally longer overall than those made within the later viewing period, this does not preclude the possibility that errors (both in terms of number and duration) were greater than chance in both viewing periods. To explore this possibility, normalized values were compared to one. In the DL: Eyes group, normalized values for both viewing periods were significantly greater than one for fixation count, [early viewing: t(15) = 3.08, p = .01; late viewing: t(15) = 2.17, p = .05], and dwell time, [early viewing: t(15) = 3.07, p = .01; late viewing: t(15) = 2.13, p = .05].
In contrast, in the DL: Mouth group, fixation count and dwell times were not different from random levels within the first second of viewing (all, ps > .05) and were significantly less than one within the late viewing period, [count: t(15) = 2.17, p = .05; dwell: t(15) = 2.84, p = .01], indicating that participants suppressed looks to the mouth across the full trial. Note that while a distinction can be made between error rates that are comparable to chance and error rates that are significantly below what could be expected by chance (such that the former represents suppression of the bias exhibited in Free Viewing, while the latter represents suppression and active avoidance of the feature), the critical question is whether gaze behaviour during the DL period was above chance levels or not. As such, the two DL groups clearly showed different behaviour: the DL: Eyes group failed to suppress the bias to attend to the eyes during any viewing period, whereas the DL: Mouth group was able to suppress the bias to attend to the mouth across both viewing periods.

Taken together, the DL condition demonstrates that participants cannot fully suppress the bias to attend to the eyes when explicitly told to do so, even after the first second of viewing has passed. However, participants can readily suppress attention to the mouth when instructed. While there is a bias to attend to both the eyes and mouth regions during Free Viewing, only orienting to the eyes is driven in part by non-volitional processes, as revealed by the DL data.

Figure 5.4 Normalized average fixation dwell time per trial into the to-be-avoided feature (eyes or mouth) during the Don't Look block for upright (Experiment 1) and inverted (Experiment 2) faces, separated by early (fixations ending within 1000 ms) and late (fixations starting after 1000 ms) viewing periods. Chance performance is indicated by a value of one (i.e. the horizontal axis).
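The early/late split used above is straightforward to implement: "early" fixations end within the first second, "late" fixations start after it. A minimal sketch with hypothetical fixation records follows; note that the text does not say how a fixation straddling the 1000 ms boundary was handled, so excluding such fixations from both periods here is an assumption.

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    start_ms: float  # fixation onset, relative to trial onset
    end_ms: float    # fixation offset

def split_viewing_periods(fixations, boundary_ms=1000.0):
    """Partition fixations into the early and late viewing periods:
    'early' fixations end within boundary_ms of trial onset, 'late'
    fixations start after it. Fixations that straddle the boundary
    are assigned to neither period (an assumption; the text does not
    specify their treatment)."""
    early = [f for f in fixations if f.end_ms <= boundary_ms]
    late = [f for f in fixations if f.start_ms > boundary_ms]
    return early, late

# Hypothetical five-second trial with four fixations, one of which
# straddles the 1000 ms boundary.
trial = [Fixation(0, 300), Fixation(350, 900),
         Fixation(950, 1200), Fixation(1300, 1800)]
early, late = split_viewing_periods(trial)
```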
5.2.2.4 Gaze behaviour across Free Viewing and Don't Look blocks

Providing participants with DL instructions clearly influenced gaze behaviour towards the to-be-avoided feature. In the following analysis we assessed whether the DL instructions also influenced more general measures of gaze behaviour. To examine this question, mixed-factor ANOVAs were performed on fixation count and average fixation duration (note, this is not fixation dwell time on a particular region, as dwell time to the screen is constrained by the constant duration of the trial, but instead represents the average duration of all fixations), taken across the whole trial, with block condition (Free Viewing, DL) and group as factors. Note that these values are not normalized, as fixations to the whole screen were investigated across the same time frame and thus cannot be corrected for ROI size or time frame discrepancies. Analyses showed that during the DL condition, participants made fewer fixations that were of a longer average duration than those made during Free Viewing [count: F(1,30) = 40.88, p < .001; average duration: F(1,30) = 17.18, p < .001]. There was no main effect of group or an interaction between group and block condition (all, ps > .05). The increase in average fixation duration (and associated decrease in fixation count) is consistent with the proposal that avoiding a particular facial feature requires the participant to volitionally control their overt attention, as subsequent saccades must be willfully planned during fixations to successfully avoid the DL region.

5.2.3 Discussion

The results of Experiment 1 are clear. They replicated the strong bias to look at the eyes while viewing a face under Free Viewing instructions.
In the novel DL conditions, while all participants reduced their fixations to the to-be-avoided feature, participants who were instructed to avoid the eyes nevertheless fixated the eyes more often and for longer than would be expected had they simply fixated the screen randomly. In contrast, participants who were told to avoid the mouth fully suppressed looking to the mouth to below levels expected via randomly fixating the screen, both in terms of number of fixations and overall dwell time. In short, one cannot fully control looks to the eyes, but one can control looks to the mouth. This was observed for both fixation number and fixation duration. It was observed across all five seconds of viewing, and it was observed again in both the first second and later portion of the trials. The findings are therefore robust, and our analyses are sensitive to the changes that occur.

It is critical to point out that the results observed within the DL condition cannot be attributed to differences in fixation rates to the two features during Free Viewing. One might wish to make the argument that because participants looked more to the eyes than to the mouth during Free Viewing, it is not surprising that they would continue to do so during the DL condition. Critically, this explanation relies on the hypothesis that participants' ability to volitionally avoid looking at a feature is positively related to how much they attend to that feature naturally: that if a feature is frequently looked at naturally, one would be unable to fully suppress looking to that feature, presumably because a component of this natural tendency to look was outside of the person's volitional control. This hypothesis is of course precisely what Experiment 1 was designed to test and, to our knowledge, has not been reported before. Experiment 1 reports that there was a general bias to fixate the eyes during Free Viewing, but when instructed to avoid the eyes participants were unable to fully do so.
The implication is that looks to the eyes are not under full volitional control: otherwise, participants should have been able to eliminate any observed bias to look to the eyes. In contrast, participants instructed to avoid looking to the mouth, an instruction which served as our control condition, showed complete suppression of looks to the region, despite also demonstrating a significant bias to look to the mouth in Free Viewing. Indeed, participants could suppress looking at the mouth to below levels that would be predicted by chance. This indicates that the bias to look at the mouth is under volitional control, and it provides converging evidence for our conclusion that looks to the eyes are not under full volitional control. Note also that by testing looks to the eyes and mouth against random fixation rates and dwell times for that region, any differences in Free Viewing base rates between eyes and mouth are irrelevant.

Together these results converge on the conclusion that while the bias to look at the eyes of the face is sensitive to volitional control, this bias cannot be controlled entirely. Thus, there is a non-volitional component involved in orienting to the eyes. Experiment 2 extends these results by testing whether normal face orientation is a critical factor for these effects to be observed.

5.3 Experiment 2: Inverted faces

The aim in Experiment 2 is to investigate why participants are unable to fully avoid looking at the eyes, despite instructions to do so. To accomplish this, we replicated Experiment 1 but inverted the faces. This change allowed testing of two competing explanations for why orienting to the eyes is in part automatically driven: a higher-order face processing explanation and a lower-level stimulus saliency explanation.
Face inversion disrupts the engagement of brain mechanisms responsible for holistic or configural face processing (Maurer, Le Grand, & Mondloch, 2002; Yovel & Kanwisher, 2005) and also impairs gaze orienting (Dahl, Logothetis, Bülthoff, & Wallraven, 2010; Farah, Tanaka, & Drain, 1995), which requires attention to first be directed to the eyes. More recent evidence also suggests that upright faces bias attention towards themselves, but that inverted faces do not (Olk & Garay-Vado, 2011). It is therefore possible that initial holistic processing of a face serves to focus attention on the eye region, possibly in order to facilitate information extraction. If the engagement of these face processing mechanisms contributes to the automatic bias to look at the eyes, then the pattern of results observed in the DL condition in Experiment 1 should be abolished in Experiment 2. Specifically, looks to the eyes within the DL: Eyes group should be comparable to looks to the mouth within the DL: Mouth group, and neither group should show a bias to attend to the respective to-be-avoided feature.

The competing account is that fixations to the eyes are driven by low-level saliency differences between the eyes and the rest of the face. For example, with respect to gaze cuing, there is evidence to suggest that the luminance differences between the sclera and iris within the eye are used to determine gaze direction (Ando, 2002). It is possible that similar saliency-based mechanisms are responsible for the eyes initially attracting attention. Because face inversion should not affect the presence of any such low-level differences, this explanation predicts that the results in Experiment 2 should be identical to those in Experiment 1. Note that while viewing inverted faces represents a departure from what would be normally experienced in everyday life, it provides a critical test between competing explanations for the behaviour observed in Experiment 1.
By disrupting normal face processing mechanisms, it is possible to determine whether or not these processes influence non-volitional orienting of attention to the eyes under normal, more natural circumstances.

5.3.1 Methods

5.3.1.1 Participants

Thirty-two new students (mean age: 21.03 years; DL: Eyes group was 14 females, 2 males; DL: Mouth group was 13 females, 3 males) were recruited as before. As in Experiment 1, no gender analyses were performed due to the lack of male participants. The participants' ethnicity and cultural background were not recorded.

5.3.1.2 Stimuli and procedure

Stimuli, procedure and data handling were identical to Experiment 1, except that all images were flipped vertically so that all faces were presented in an inverted orientation.

5.3.2 Results

5.3.2.1 Data handling

ROIs were redrawn for each inverted face.[11] For both Free Viewing and DL conditions, the mean area of the eyes ROI was smaller than the area of the mouth ROI, both: t(31) = 37.18, p < .001. There was no significant difference between the areas of any of the ROIs across Experiments 1 and 2 (all, ps > .05). As in Experiment 1, analyses using normalized values are reported, but all effects and conclusions are identical when non-normalized values are analyzed. The procedure for normalization was identical to Experiment 1.

[11] Within the analysis software used for these studies, once ROIs were defined, they could not be easily manipulated (e.g. inverted). This led us to re-draw our regions and confirm that their areas did not differ significantly from those used in Experiment 1.

5.3.2.2 Free Viewing block

Figure 5.5 illustrates participant performance as a heat map of fixations made in the Free Viewing condition (top right panel), as well as in the two DL conditions (bottom panels), whereas Table 5.2 lists the non-normalized and normalized performance values across all conditions.
As observed in Experiment 1, participants in the Free Viewing block made more fixations, F(1,30) = 17.72, p < .001, and looked longer overall, F(1,30) = 16.01, p < .001, to the eyes than to the mouth. The main effect of group and its interaction with ROI type were not significant (all, ps > .05).

Figure 5.5 Heat maps displaying the distribution of all participants' fixations over an amalgamated face (generated from combining all faces used in the Experiment) for each of the three viewing conditions in Experiment 2. Warmer colours denote greater concentration of fixations. The top left quadrant shows an example image of an inverted face used in Experiment 2; yellow lines depict representative regions of interest used for data analyses.

Table 5.2 Experiment 2: Inverted faces. Mean per-trial fixation number and dwell times onto ROIs; performance difference between Free Viewing and the relevant Don't Look condition.

                                  Non-normalized Values                Normalized Values
                                  Eyes ROI        Mouth ROI            Eyes ROI        Mouth ROI
Condition                         Number  Dwell    Number  Dwell       Number  Dwell   Number  Dwell
Free Viewing                      1.50     512.83  1.04    337.29      25.25   28.67   10.44   11.66
Don't Look: Eyes                  0.09      25.22  0.71    305.87       2.04    1.40    8.58   10.10
Don't Look: Mouth                 1.96    1141.83  0.07     20.08      46.71   60.62    1.05    0.66
Relevant Condition Difference     1.42     487.60  0.96    317.21      23.21   27.26    9.39   11.00

When normalized values were compared against one, both fixation count and dwell time were significantly greater than one, to both the eyes [count: t(31) = 6.84, p < .001; dwell: t(31) = 6.34, p < .001], and mouth, [count: t(31) = 11.24, p < .001; dwell: t(31) = 10.69, p < .001]. Thus, participants fixated both regions significantly more than would be expected had fixations been randomly distributed across the image, suggesting that there was a bias to fixate both the eyes and mouth during Free Viewing.
Despite face inversion, participants continued to show a bias to attend to both features, and continued to look more at the eyes than at the mouth of the faces.

5.3.2.3 Don't Look block

5.3.2.3.1 Full viewing period

If non-volitional orienting to the eyes depends on the face being presented in a natural, upright position, then two results are predicted: first, that looking behaviour across the DL: Eyes and DL: Mouth groups should not differ. If face inversion does not disrupt the effect observed in Experiment 1, the DL: Eyes group should continue to look more and for longer at the to-be-avoided feature than would the DL: Mouth group. Consistent with this prediction, Figure 5.2 and Figure 5.3 illustrate that unlike in Experiment 1, gaze behaviour to the to-be-avoided region was not significantly different depending on whether participants were told to avoid the eyes or the mouth of the face, either based on fixation count, t(18.17) = 1.07, p = .30, or dwell time, t(18.59) = 1.19, p = .25.

Further, it was predicted that if face inversion disrupts non-volitional orienting to the eyes then, unlike in Experiment 1, participants in the DL: Eyes and DL: Mouth groups would look to the to-be-avoided feature no more and for no longer than would be predicted by randomly fixating the screen. Alternatively, and contrary to the hypothesis, if upright face viewing is unnecessary in order for overt attention to be non-volitionally oriented to the eyes, then the DL: Eyes group should show significantly more and longer looks to the eyes than would be predicted if fixations were randomly distributed. Normalized values revealed that the gaze behaviour to the eye ROI by participants in the DL: Eyes group did not differ significantly from one, either in terms of fixation number, t(15) = 1.19, p = .25, or overall dwell time, t(15) = .69, p = .50.
Gaze behaviour to the mouth ROI by participants in the DL: Mouth group was also not different from one for either fixation number, t(15) = .19, p = .85, or dwell time, t(15) = 1.64, p = .12. Thus, when the face was inverted, both groups were able to suppress the bias to look at their respective to-be-avoided feature. It is clear that face inversion had a critical influence on participants' ability to avoid looking at the eyes.

5.3.2.3.2 Early versus late viewing periods

An ANOVA with viewing period (early, late) and group (DL: Eyes; DL: Mouth) as factors revealed a main effect of viewing period on both fixation count and dwell time, [count: F(1,30) = 5.63, p = .02; dwell: F(1,30) = 4.91, p = .03], such that proportionally more errors were made to the to-be-avoided region within the first second of viewing than were made during the later viewing period. There was no main effect of group, nor a significant interaction between group and viewing period (all, ps > .05). Normalized dwell time results are displayed in Figure 5.4.

Despite making more errors to the DL region during the first second, when both viewing periods were compared against one for each group, errors made during the first second were not significantly different from one for either the DL: Mouth group, [count: t(15) = 1.56, p = .14; dwell: t(15) = 1.29, p = .22], or DL: Eyes group, [count: t(15) = 1.91, p = .08; dwell: t(15) = 1.57, p = .14]. In the late viewing period, gaze behaviour also did not differ from one for either the DL: Eyes or DL: Mouth groups, [DL: Eyes - count: t(15) = .68, p = .50; dwell: t(15) = .28, p = .79; DL: Mouth - count: t(15) = .92, p = .37; dwell: t(15) = 1.53, p = .15]. Thus, even though more errors were made during the initial viewing period, gaze to either DL region did not exceed random levels in either phase, and DL groups did not differ from each other.
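The two statistical comparisons used throughout these analyses can be sketched in a few lines. This is an illustrative reconstruction with invented data, not the thesis's analysis code: it assumes normalized scores in which 1.0 marks random fixation, and the fractional degrees of freedom reported for the between-group tests (e.g. t(18.17)) are consistent with Welch's unequal-variance t-test, available in scipy via `equal_var=False`.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Invented normalized error scores (chance = 1.0) for 16 participants
# per Don't Look group; the means and spreads are illustrative only.
dl_eyes = rng.normal(loc=6.5, scale=6.0, size=16)
dl_mouth = rng.normal(loc=0.5, scale=0.6, size=16)

# One-sample t-tests against chance: popmean=1 asks whether a group
# fixated its forbidden feature more than random fixation predicts.
t_eyes, p_eyes = stats.ttest_1samp(dl_eyes, popmean=1.0)
t_mouth, p_mouth = stats.ttest_1samp(dl_mouth, popmean=1.0)

# Between-group comparison with Welch's unequal-variance t-test
# (equal_var=False), which yields fractional degrees of freedom of
# the kind reported in the text.
t_group, p_group = stats.ttest_ind(dl_eyes, dl_mouth, equal_var=False)
```

Welch's correction is the conservative choice here because the two groups' error variances differ markedly (the eyes group's errors are far more variable than the mouth group's).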
5.3.2.4 Gaze behaviour across Free Viewing and Don't Look blocks

In the DL block, participants made fewer fixations that were of a longer duration than those made during Free Viewing, [count: F(1,30) = 70.82, p < .001; average duration: F(1,30) = 32.12, p < .001, neither area normalized]. This dovetails with Experiment 1 and suggests the engagement of volitional control. No other main effects or interactions were significant (all, ps > .05). Thus, the gaze behaviour results with upright faces were replicated with inverted faces.

5.3.3 Discussion

In Experiment 1, participants in the DL: Eyes group were less able to avoid the to-be-avoided feature than were participants in the DL: Mouth group, and showed an inability to fully suppress gaze behaviour to the to-be-avoided eye ROI to levels equal to or below chance. In Experiment 2, this difference across DL groups was abolished by inverting the faces; both groups were able to decrease looking to the to-be-avoided feature to chance levels. Both groups showed proportionally more erroneous fixations and dwell time to the to-be-avoided region within the first second of viewing compared to the later viewing period, but neither time period showed significantly more looks to the DL feature than would be expected randomly. The similarity in performance between the DL: Eyes and DL: Mouth groups eliminates the possibility that low-level saliency triggered fixations to the eyes in Experiment 1, and suggests that automatic overt attentional orienting to the eyes may be associated with upright face processing. Formal comparisons across experiments, below, support this.
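The whole-trial measures used in these block comparisons are fixation count and average fixation duration, i.e. the mean duration of individual fixations, as distinct from dwell time on a region (total screen dwell is fixed by the constant trial length). A minimal sketch with invented numbers:

```python
def whole_trial_measures(fixation_durations_ms):
    """Whole-trial gaze measures used in the Free Viewing vs Don't
    Look block comparisons: the number of fixations in a trial and
    their average duration. This is the mean duration of individual
    fixations, not dwell time on any particular region."""
    count = len(fixation_durations_ms)
    avg_duration = sum(fixation_durations_ms) / count if count else 0.0
    return count, avg_duration

# Hypothetical trials illustrating the reported pattern: fewer,
# longer fixations under Don't Look instructions than under Free
# Viewing (all durations invented, in ms).
fv_count, fv_avg = whole_trial_measures([250, 180, 300, 220, 260, 210])
dl_count, dl_avg = whole_trial_measures([420, 510, 380, 450])
```

Because trial duration is constant, a drop in fixation count necessarily accompanies a rise in average fixation duration, which is why the two measures move together in both experiments.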
5.3.4 Comparison across experiments

5.3.4.1 Free Viewing

To compare results across experiments, mixed-factor ANOVAs with ROI (eyes, mouth), group (DL: Eyes, DL: Mouth), and face orientation (upright, inverted) as factors revealed a main effect of ROI, [count: F(1,60) = 72.40, p < .001; dwell: F(1,60) = 62.95, p < .001], and face orientation, [count: F(1,60) = 6.18, p = .02; dwell: F(1,60) = 5.81, p = .02]. Main effects of ROI and face orientation were superseded by an interaction between these factors, [count: F(1,60) = 14.38, p < .001; dwell: F(1,60) = 12.80, p = .001]. All other main effects and interactions were not significant (all, ps > .05). The interaction revealed that participants in Experiment 2, compared to Experiment 1, looked less often, t(56.65) = 3.22, p = .002, and for a shorter overall duration, t(56.18) = 3.07, p = .003, at the eyes. In contrast, participants in Experiment 2, compared to those in Experiment 1, looked more, t(59.72) = 4.04, p < .001, and for longer, t(61.15) = 3.70, p < .001, at the mouth. In short, while there was a bias to look at the eyes over the mouth in both experiments, this bias was reduced for inverted faces. Importantly, as was apparent from the Free Viewing analyses in Experiment 2, a bias to attend to the eyes over the mouth was still observed, indicating that while this bias was reduced compared to that demonstrated in Experiment 1, it was not eliminated by inverting the faces.

5.3.4.2 Don't Look

5.3.4.2.1 Full viewing period

The conclusions arrived at by examining each experiment separately were supported by the results of univariate ANOVAs on fixation count and dwell time to the DL region, with group (DL: Eyes; DL: Mouth) and face orientation (upright, inverted) as factors. There was a main effect of group, [count: F(1,60) = 10.21, p = .002; dwell: F(1,60) = 9.62, p = .003]. There was also a significant main effect of face orientation on dwell time, but not on fixation count, [count: F(1,60) = 3.30, p = .07; dwell: F(1,60) = 4.08, p = .05].
Main effects were qualified by a significant interaction between face orientation and DL group, [count: F(1,60) = 5.26, p = .03; dwell: F(1,60) = 5.27, p = .03], which, as detailed in the results from each experiment, is due to the fact that only when faces were upright were participants in the DL: Eyes group unable to avoid looking at the eyes; DL: Mouth participants were able to avoid looking at the mouth regardless of face orientation. This suggests that holistic or configural face processing influences the process of automatically driving overt attention to the eyes of upright faces.

5.3.4.2.2 Early versus late viewing periods

An ANOVA with face orientation (upright, inverted), viewing period (early, late) and group (DL: Eyes, DL: Mouth) as factors revealed a main effect of viewing period, [count: F(1,60) = 6.93, p = .01; dwell: F(1,60) = 8.86, p = .004], indicating that overall, proportionally more errors were made to the to-be-avoided region within the first second of viewing than were made during the remainder of the trial. The main effect of group was also significant, [count: F(1,60) = 12.00, p = .001; dwell: F(1,60) = 12.46, p = .001]. While the main effect of face orientation was not significant (count and dwell, both ps > .05), there was a significant interaction between face orientation and group, [count: F(1,60) = 4.99, p = .03; dwell: F(1,60) = 6.82, p = .01], which is elaborated within the 'Full viewing period' sections above: participants were able to avoid looking at the to-be-avoided feature in all instances except when told to avoid the eyes when viewing upright faces.
There was also a significant interaction between viewing period and group for dwell time, [count: F(1,60) = 2.38, p = .13; dwell: F(1,60) = 4.31, p = .04], simply revealing that across both studies, the DL: Eyes group made proportionally longer errors within the early viewing period than in the late, t(31) = 2.59, p = .02, whereas the difference across viewing periods for the DL: Mouth group was not quite significant, t(31) = 1.86, p = .07. Although this result shows that errors were proportionally longer in the early than the late viewing period for the DL: Eyes group, this does not undermine the analyses from each experiment revealing distinct differences in gaze behaviour based on face orientation. Specifically, when faces were upright, but not when faces were inverted, errors were greater than what would be predicted by chance in the DL: Eyes group during both viewing periods. No other interactions were significant (all, ps > .05).

5.3.4.3 Gaze behaviour across Free Viewing and Don't Look blocks

Across both face orientations, participants made fewer fixations that were of a longer duration during the DL condition than they did during Free Viewing, [count: F(1,60) = 109.85, p < .001; average fixation duration: F(1,60) = 48.01, p < .001, neither area normalized]. No other main effects or interactions reached significance (all, ps > .05). Thus, although face orientation had a profound effect on behaviour to the to-be-avoided eye region, it did not influence general gaze behaviour across the whole image; the decrease in number and increase in duration of fixations during the Don't Look block was not specific to avoiding any particular feature or to a specific face orientation.

5.3.5 Discussion

Beyond confirming conclusions made within Experiments 1 and 2, comparisons across face orientations revealed two additional findings. First, that when viewing an inverted face, the bias to attend to the eyes is reduced relative to when viewing an upright face.
This finding complements the main results: as the bias to attend to the eyes during upright face viewing is in part due to a non-volitional component (shown in Experiment 1) that is contingent on faces being presented upright (Experiment 2), it follows that looks to the eyes during Free Viewing of inverted faces might be reduced. Given that attention is not non-volitionally oriented to the eyes when faces are inverted, it is reasonable then to expect that more fixations would naturally land on the mouth, as it is also one of the most frequently fixated features of a face. Note that an additional possibility is that our strictly defined ROIs may have contributed to this finding. Due to our specific DL instructions, we defined our ROIs to include only the eyes or mouth, whereas it is common for an ROI to include surrounding regions (Birmingham et al., 2008a, 2008b; Henderson et al., 2005; Williams & Henderson, 2007). That face inversion does not always decrease fixations to the eyes (e.g. Williams & Henderson, 2007, but see Guo, Robertson, Mahmoodi, Tadmor, & Young, 2003 for evidence of decreased eye fixations to inverted faces when viewed by rhesus monkeys) may therefore be due to the use of a more liberal criterion for what is to be considered 'eyes': when faces are inverted, it is possible that fixations become less precise such that, as observed here, fixations to the eyes themselves (but not necessarily the region around the eyes) are reduced. Regardless of the cause of this decrease, critically for the present purposes, this bias to examine the eyes was still present and still strong when viewing inverted faces.

Second, examination of performance during the DL condition revealed that when considered together, participants generally made proportionally more errors to the DL ROI within the first second of viewing than within the remainder of the face viewing period.
These results mirror those reported in the analyses of Experiment 2, and for dwell time measures (but not fixation count) in Experiment 1. Although not significant when analyzed alone, in Experiment 1 the number of errors in the DL: Eyes group was numerically greater during early viewing than in the late viewing period, consistent with the cross-experiment analyses. More errors during early viewing are consistent with the view that the recruitment of volitional resources to control attention is not instantaneous, and that perhaps volitional control increases over time. This does not, however, suggest that our finding of non-volitional orienting to the eyes during upright face viewing was caused only by these early errors: recall that when compared against chance fixation behaviour, the DL: Eyes group viewing upright faces showed a bias to attend to the eyes across both viewing periods.

5.4 General discussion

In both experiments, participants showed a bias to look at the eyes during Free Viewing, demonstrating that this bias is not constrained to upright face viewing. When instructed to avoid a particular 'Don't Look' feature, participants in both experiments and in both DL groups were successful in reducing the number of fixations to that region. Regardless of face orientation, participants who were told to avoid the mouth were able to accomplish this, demonstrating few errors (i.e. looks to the 'Don't Look' feature). Critically, this was not the case for participants in the DL: Eyes group. When faces were inverted, participants in the DL: Eyes group were able to avoid the eyes successfully, reducing fixations to the feature to random levels. However, when faces were presented upright, participants in the DL: Eyes group made more and longer fixations on the eyes than participants in the DL: Mouth group made on the mouth, and erroneous fixations were significantly higher than would be expected by chance.
Three major conclusions can be drawn from these results. First, they demonstrate that orienting to the eyes is largely under participants' volitional control. Second, the preferential bias towards the eyes that is observed for upright face viewing is not entirely under volitional control: there exists a small but theoretically important non-volitional component that serves to orient overt attention to the eyes, even when participants are instructed to avoid doing so. Finally, Experiment 2 demonstrates that the non-volitional component of this bias toward the eyes is not due to low-level saliency differences between the eyes and the rest of the face: when the same images were inverted, the significant differences in participants' ability to avoid the eyes were abolished.

It is also worth noting that the results of Experiment 2 suggest that holistic or configural face processing may contribute to this bias to orient toward the eyes. When holistic/configural processing was disrupted by stimulus inversion, participants were able to avoid the eyes. This novel result provides an important clue as to why people look at the eyes of others. There exists strong evidence that faces need to be upright in order to effectively engage the neural mechanisms (e.g. the FFA) responsible for holistic or configural face processing (Farah et al., 1995; Yovel & Kanwisher, 2005). Given this, the results of Experiment 2 suggest that the bias to look at the eyes is driven in part by activation of these face processing mechanisms. Moreover, as holistic or configural processing is often associated with expertise (Dahl et al., 2010; Gauthier & Tarr, 1997), the association between holistic/configural face processing and humans' non-volitional bias to look at the eyes could reflect both activation of expert face knowledge and attentional orienting to areas of high information.
One can speculate that activation of face processing mechanisms drives overt attention to the eyes, which represent a region of high information on faces. Inversion disrupts the first stage of this model, rendering it more vulnerable to volitional control. Indeed, it would be interesting to determine whether similar interactions between expertise and attention occur in other domains.

If expertise drives attention to the eyes, it would be of interest to use the DL task to examine whether experts in other domains also show non-volitional orienting to specific features (Kundel, Nodine, Krupinski, & Mello-Thoms, 2008). In relation to the current study, it may be informative to investigate whether differences in orienting to the eyes during Free Viewing are associated with increased or decreased ability to avoid the eyes during DL. For example, persons with ASD may show an aversive response to direct gaze (Dalton et al., 2005), which may abolish non-volitional orienting to the feature. Likewise, recent evidence has demonstrated that while Western Caucasian participants tend to fixate the eyes directly, Eastern Asian participants view faces more globally and fixate the nose most frequently (Blais, Jack, Scheepers, Fiset, & Caldara, 2008). This might suggest that these participants would be better able to overtly avoid looking at the eyes during DL tasks, but may continue to covertly attend to the region. However, as the results from Blais and colleagues (2008) relate to differences found with Eastern Asian participants who had been residing in a Western culture for an average of just one week, it is unclear whether any differences would be observed in the present study, where the majority of participants would either have been native to, or long-time residents of, North America.

Future research could also explore the role that upright face processing plays in triggering automatic orienting to the eyes.
For example, the inversion of the eyes, rather than the face itself, may be of critical importance. It may also be true that while direct gaze non-volitionally orients attention, averted gaze does not. Similarly, investigation of whether other facial features capture attention non-volitionally in particular contexts may also be of interest: while a neutral mouth does not capture attention, does this change if the mouth is smiling? It is clear that the use of the DL task could be very informative in future studies.   Finally, note that while face inversion influenced non-volitional orienting to the eyes in the present experiments, there was no significant impact of face inversion on saccadic trajectory measures reported in Chapter 4's Experiment 1. Though the tasks are superficially similar (i.e. both involve a social stimulus), in actuality they vary substantially. The level of face processing required to complete each task was very different, which itself appears to be important when considering whether a task will elicit inversion effects. More specifically, face inversion effects are not found consistently for simple face detection tasks compared to facial identification or recognition tasks (Bindemann & Burton, 2008; Kanwisher et al., 1998; though see, e.g. Langton et al., 2008).  In the present study, participants were requested to directly attend to the face for an unspecified task, which likely prompted participants to engage further with the stimulus than in Chapter 4, where participants were specifically told to ignore the distracting face stimulus (thereby rendering in-depth face processing unnecessary). Consistent with this view, a recent saccadic trajectory study employing the participants' own versus unfamiliar faces as distractors did elicit face inversion effects (Qian, Gao, & Wang, 2015), suggesting that when prompted to distinguish between facial identities, inversion can influence saccadic trajectory measures. 
Further, though the present study found evidence of face inversion effects on how overt attention is allocated to the eyes of a face when that face is already attended to, Chapter 4 focused instead on a) covert attention to b) faces more generally, c) presented as smaller stimuli in the periphery. This may suggest that the level or manner by which attention is allocated to a social stimulus influences the manner by which it is processed (see Chapter 7).

5.4.1 The 'Don’t Look' task

As a method in and of itself, the DL task provides an important complement to existing paradigms addressing the control of overt attention. Many common cognitive tasks that assess volitional attentional control are designed to measure a single volitional response. For example, in an antisaccade task, individuals try to look in the direction opposite from a stimulus onset (Everling & Fischer, 1998; Munoz & Everling, 2004). The struggle to ignore the stimulus onset is revealed by incorrect saccades towards the onset, or delayed correct RTs away from the onset: the greater the number of incorrect saccades or the longer the delay, the greater the evidence that orienting is not completely under volitional control. As people's experience with faces can last much longer than a few hundred milliseconds, however, use of such an intentionally simplistic task would miss critical information about how attention to the eyes is directed within more natural viewing periods. The DL task expands on this research by providing a method to explore attentional control for longer response periods using natural, complex stimuli; control must therefore be sustained for an extended period.
Interestingly, behavioural changes observed across comparable DL and Free Viewing conditions mirror those found in more basic tasks: fixation durations were lengthened for DL compared to Free Viewing conditions in a manner analogous to how RTs are delayed during antisaccades compared to prosaccades (Olk & Kingstone, 2003). While the DL task has been used here to investigate overt attention to the eyes of another, the potential for the DL task to explicate other questions of attentional control beyond those in the social attention domain (e.g. investigating visual responses to emotional stimuli) is readily apparent.

5.4.2 Conclusion

When looking at images of people, individuals show a profound bias to look at eyes. While evidence for a strong volitional component of this bias was observed, the present investigation provides evidence that this bias is at least partly outside the control of the looker. Furthermore, these results suggest that the tendency towards looking to the eyes is modulated by the engagement of holistic or configural face processing mechanisms. Future work aimed at understanding social influences on attentional control will further allow for a better understanding of how cognition is shaped by one's social life.

Chapter 6: Exploration of the utility of non-volitional social attention to the eyes

In the previous chapters, I have demonstrated that attention is directed to real people and their representations, and also that some aspects of social attention are driven in a manner beyond the observer’s control. This has shed light on whether, and how, attention is directed to people or their features by suggesting that social attentional behaviour is generalized across contexts, despite differences in its deployment. There are many questions about the deployment of social attention that are left to be answered, some of which are discussed in the future directions of Chapter 7.
In the final experimental chapter, I explore one pertinent question to come out of the previous work: for what purpose is attention so readily directed to other people, and in particular, to their eyes?

In posing this question, I hasten to add that the benefits of attending to people are numerous, and a single chapter cannot do full justice to the question of social attention’s utility. As such, Chapter 6 should be regarded as an exemplar, serving not only to demonstrate that social attention is functional, but also to explore how the differences between covert and overt attention may relate to this functionality. To accomplish this, I build on Chapter 5’s finding that orienting to the eyes is partially non-volitional and ask whether the bias to look to the eyes is due to the eyes’ importance in other social cognitive processes. More specifically, in Chapter 6, I test whether overt orienting to the eyes is necessary for improved facial learning, or if instead any form of orienting (i.e. covert) would facilitate later recognition. The conclusions of this chapter demonstrate that the divergent roles of covert and overt orienting are not limited to instances involving signaling, and suggest that even further distinctions in their utility may become evident in the future.

A version of Chapter 3 has been submitted for publication.

6.1 Introduction

When looking at an image of a face, people will most often look to the internal features, including the eyes, mouth, and nose (Henderson, Falk, Minut, Dyer, & Mahadevan, 2001; Walker-Smith et al., 1977; Yarbus, 1967). Of those features, the eyes are the most frequently fixated and this bias persists across task demands (Henderson et al., 2005; Janik et al., 1978; Langton, Watt, & Bruce, 2000; Schyns, Bonnar, & Gosselin, 2002). That people will so frequently attend to this feature raises the question of whether this behaviour may be functionally beneficial.
Face learning in particular appears to rely heavily on information provided within the eyes (Itier, Alain, Sedore, & McIntosh, 2007; O’Donnell & Bruce, 2001; Schmalzl, Palermo, Green, Brunsdon, & Coltheart, 2008; Sekiguchi, 2011). For instance, recognition performance is impaired when the upper face or eye region, but not the lower face or mouth region, is masked during initial encoding (McKelvie, 1976). Similarly, when select features are exposed via a 'bubbles' technique, participants appear to rely on information presented within the eyes in order to successfully identify faces (Gosselin & Schyns, 2001; Schyns et al., 2002; Vinette, Gosselin, & Schyns, 2004). Recognition is also improved when the face is learned with direct gaze as opposed to when the eyes are averted or closed (Farroni, Massaccesi, Menon, & Johnson, 2007; Hood, Macrae, Cole-Davies, & Dias, 2003).

As evident from the studies cited above, a common theme in testing the importance of the eyes in face learning is to manipulate the visibility of the feature to the observer. An interesting, relatively unexplored question to emerge from this work, however, is whether the benefit conferred by having the eyes visible is due to participants fixating and attending to the region, or if instead attention being allocated to the eyes - without a corresponding shift in gaze - is sufficient to yield improvements. Though people will often shift their eyes and their attention at the same time - that is, they deploy attention overtly - it is also true that oculomotor and attentional mechanisms are dissociable and can work independently (Hunt & Kingstone, 2003a, 2003b; Juan et al., 2004; Posner, 1980). This means that attention can be directed covertly, without a subsequent eye movement. For this reason, the common practice of testing the importance of the eye region to face learning by manipulating the visibility of the eyes confounds the roles of covert and overt orienting.
When the eyes are masked, attention can be neither overtly nor covertly directed to that feature; when the eyes are made visible, the observer is able to attend to them in both manners. In sum, the current findings do not establish whether a shift in attention to the eyes of a face without a corresponding shift in fixation is sufficient to produce the beneficial effects of eye processing on face learning.

There are reasons both for and against thinking that fixating the eyes is critical for effective face learning. While attending either overtly or covertly enhances visual processing (Gazzaley et al., 2005; Moran & Desimone, 1985; Polk et al., 2008), orienting attention overtly allows for greater visual acuity, greater colour processing, and also for serial processing of information that is not otherwise available using peripheral vision. The eyes and surrounding area are highly detailed, and show great variability across individuals in terms of their colour and shape; these details may therefore be best processed when fixated directly. A study by Henderson, Williams, and Falk (2005) showed that compared to when participants maintained central fixation, freely viewing faces during encoding – which meant that participants looked most often, but not exclusively, to the eyes – led to better performance on a subsequent recognition task. Rightly, Henderson and colleagues concluded that fixations in general improve face learning. Considering the strong eye bias during free viewing, however, these results may be suggestive of a functional role of fixating the eyes in particular: performance was still worse in the central fixation condition even though the feature was available to be covertly attended. Thus, fixations on the eyes may serve a unique function in face learning.

An argument against this viewpoint, however, is that the behaviour of looking to the eyes may occur in service of directing attention more generally to this area, i.e.
it may not be strictly necessary to fixate the eyes in order to effectively encode facial features. There is some evidence from the face recognition literature that looking near, but not necessarily on, the eyes is an effective recognition strategy (Hsiao & Cottrell, 2008; M. F. Peterson & Eckstein, 2012; Sæther, Belle, Laeng, Brennen, & Øvervoll, 2009); the same may also be true for encoding. Further, one study that restricted parafoveal visual information during face encoding found that even participants who did not show a strong natural bias to fixate the eyes nevertheless looked to the region when parafoveal information was unavailable, implying that the eyes are the source of parafoveal focus under unrestricted circumstances (Caldara, Zhou, & Miellet, 2010). Thus, covertly attending to the eyes to process both eye information and relational information (i.e. inter-ocular distance, distance from the nose, etc.) could be accomplished without looking directly at the feature, and could support holistic processing known to be important for subsequent recognition performance (Richler, Cheung, & Gauthier, 2011).

The aim of the current paper was to explore whether the eye advantage during face learning requires direct fixation, or if instead directing only covert attention to the eyes can elicit the same benefit. To accomplish this, we tested face recognition performance following face learning in which participants either avoided fixating the eyes or focused only covert attention on the region.

6.1.1 Studies overview

Experiment 1 aimed to determine whether simply avoiding looking at - that is, restricting overt attention to - the eyes would result in a detriment, as has been observed in other tasks in which the eyes were masked or not otherwise available (McKelvie, 1976).
Participants were shown a series of unaltered faces and were given one of two viewing instructions: either Free Viewing, in which they were to look at the faces naturally, or Don't Look (DL), in which they were told to look naturally but to avoid looking at a particular feature. Half of the participants were asked to avoid looking at the eyes (DL: Eyes), whereas the other half were asked to avoid looking at the mouth (DL: Mouth). As the feature they were to avoid looking at was not removed or masked, covert attention was unrestricted and could conceivably be directed anywhere on the face. Participants were later given a recognition test in which they were tasked with reporting if the faces shown were new or previously seen in the first half of the experiment. If overt attention to the eyes is necessary for superior face learning, then the recognition of faces encoded during the DL: Eyes condition should be worse than for all other conditions, even when compared to the DL: Mouth group that was similarly restricted in its viewing patterns, but to a different feature. If, however, covert attention to the eyes can be effectively used to encode faces, then, presumably, the groups should not differ in their recognition performance. In addition, half of the participants were told of the upcoming recognition test prior to face encoding, while half were not. It was reasoned that while looking to the eyes may be beneficial, later successful face recognition may be possible using an alternative strategy involving attending to key features elsewhere on the face. As such, any proposed detriment to avoiding the eyes may be limited to the condition in which participants are unaware of the task and thus cannot compensate in some other manner.

For Experiment 1, though covert attention was free to be directed anywhere on the face, even to features where the participant was restricted from looking, it is unclear where covert attention was directed.
Did attention stay linked with oculomotor behaviour, or instead did participants try to attend peripherally to the features they were told to avoid? In order to more directly probe whether covert attention to the eyes would be sufficient to incur a face learning benefit, Experiment 2 asked participants to avoid making any fixations (i.e. maintain fixation on a central point) but to direct their covert attention to either the eyes or the mouth. In one block, they were told to covertly attend to the eyes, whereas in the other encoding block, they were told to covertly attend to the mouth. A recognition test like that used in Experiment 1 followed, once again with or without participants being informed of the test ahead of time. If covertly attending to the eyes incurs any advantage during face learning, then participants should perform better for faces for which they attended to the eyes as compared to faces that were presented when attention was to be directed to the mouth. And, similar to Experiment 1, if this benefit is modulated by strategic acquisition of information during encoding, then the benefit may be enhanced when participants are informed in advance of the recognition test.

6.2 Experiment 1: Feature fixation avoidance effects on recognition performance

6.2.1 Methods

6.2.1.1 Participants

Results from 104 participants (Mage = 20.78, SD = 5.20; 78 females) are reported, all of whom completed the experiment in exchange for course credit or remuneration.12 Participants were divided equally into two memory conditions (uninformed or informed of an upcoming recognition test) and two DL instruction conditions (DL: Eyes and DL: Mouth). All participants gave informed written consent.

12 We did not restrict participation to a particular cultural or ethnic group, or limit participation based on residence in a Western culture. Based on self-identified ethnicity, Experiment 1 included 39 East Asian, 17 Caucasian, and 48 participants who listed either non-specific, mixed, or alternate ethnicities. Experiment 2 included 12 East Asian, 14 Caucasian, and 13 participants of other ethnicities. Although some studies report differences in the extent of eye bias in Western and East Asian participants (Blais et al., 2008; Jack et al., 2007), it is important to consider that this does not necessarily suggest that cultural differences eliminate fixations to the eyes, simply that they are reduced. Based on the lack of available information about our participants’ specific ethnicities, analyses relating to ethnicity or cultural differences were not possible.

6.2.1.2 Stimuli and procedure

Participants sat and positioned themselves in a head and chin rest so that viewing distance was 60 cm from a 17-inch monitor. Eye movements were recorded using an EyeLink 1000 desktop-mounted eye tracker (SR Research), which recorded at a sampling rate of 1000 Hz. Stimuli were taken from the Face Database (Minear & Park, 2004, image database found at http://agingmind.utdallas.edu/facedb), and consisted of 100 (50 male, 50 female) colour portraits of Caucasian adults between 18 and 29 years old, photographed against an off-white background. All faces displayed a neutral expression. To maintain a sense of realism, hair, clothing, and jewelry were not cropped or altered. Both encoding and recognition phases contained an equal number of men and women. Images were pseudo-randomized into four sets of 25 images that were counterbalanced across both encoding (1 set for Free Viewing, or FV; 1 set for DL) and recognition phases (2 previously viewed sets plus 2 new sets). Image order was randomly determined within each block (FV encoding, DL encoding, and recognition).

The experiment consisted of two computer portions: an encoding and a recognition phase. Prior to beginning, participants in the informed condition were told in advance that they would be looking at a series of faces and would then be asked to complete a recognition memory test, consisting of these as well as new faces, following the experiment's first phase. Participants in the uninformed condition were not told in advance about the recognition phase.

The encoding phase started with a standard nine-point eye tracking calibration and validation procedure, which was repeated until the average error in validation was less than 1.5˚ of visual angle. Each participant completed a FV and a DL block during encoding. The order of the blocks was counterbalanced. In the FV block, participants were told to look at the faces as they normally would look at a face. The same instructions were delivered in the DL block with the caveat that participants were also told either not to look at the eyes (DL: Eyes) or not to look at the mouth (DL: Mouth) of the faces. Thus, the to-be-avoided feature was a between-participant manipulation. FV and DL blocks were separated by a self-paced break.

Immediately after the encoding phase, participants completed a recognition phase. In it, participants were asked to look at the faces however they wished, and to report whether the presented face was new (not seen previously) or old (seen in the FV or DL block). It was not necessary to indicate whether the face was from a particular encoding block. Participants were told to respond as quickly and as accurately as possible, pressing 1 or 2 on the number pad of the keyboard (response-key association for old and new counterbalanced across participants). The recognition phase consisted of 100 images, half of which were previously seen in the encoding phase. Recognition trials were divided across three blocks, separated by a self-paced break between blocks.
All images were preceded by a fixation point that appeared randomly at one of the four corners of the image, which ensured that participants did not start the trial by fixating the image. Images were shown for five seconds within the encoding phase and until response (or 20 seconds, whichever was less) within the recognition phase.

6.2.2 Results

6.2.2.1 Data handling

Regions of interest (ROIs) were hand-drawn just around the outside of the eyes and mouth (see Figure 6.1 for an example). Across all images, the mean area of the eyes ROI was significantly smaller than the mean area of the mouth ROI, t(99) = 5.40, p < .001. To account for the different sizes of our ROIs, the regions were area normalized and analyses were performed on these normalized values (Bindemann et al., 2009; Birmingham et al., 2008a, 2008b; Fletcher-Watson et al., 2008; Laidlaw et al., 2012). Area normalization was accomplished by dividing the percentage of all fixations (either fixation count or dwell time) that landed within the ROI by the percentage of the ROI's area relative to the area of the screen. Thus, a normalized value of 1 indicated that the region was fixated as much as would be expected if fixations had been randomly positioned across the screen. For example, if 10% of all fixations land within an ROI that is also 10% of the screen area, the normalized value becomes 1. A value significantly less than or greater than 1 indicates avoidance or biased selection of the area, respectively.

Where Mauchly's test for sphericity or Levene's test for equality of variances was significant (liberally set at α = .25 for assumption violation tests), we report adjusted degrees of freedom and p values. Any fixations that began before the onset of the face (e.g. at the fixation point) or persisted after the face offset were trimmed to be within the face-viewing period. Both fixation dwell time and fixation number were analyzed for all measures.
However, these measures mimicked each other, much like what was observed with a similar Don't Look paradigm in Laidlaw et al. (2012). As such, to streamline reporting, results are reported for dwell time only.

Figure 6.1 Example face stimulus and heat maps displaying the distribution of participant fixations during the encoding phase for each of the three viewing conditions. The top left image shows representative ROIs used for data analysis (yellow lines); the ROIs were not shown during testing. Heat maps are presented over an averaged face. Images adapted from those found in the Face Database (Minear & Park, 2004).

6.2.2.2 Encoding phase

6.2.2.2.1 Did participants follow viewing instructions?

To test looking behaviour and see whether participants adhered to viewing instructions during the encoding phase, normalized fixation number and dwell times to the eyes and mouth ROIs were submitted to a mixed-factor ANOVA with viewing instruction (FV, DL) and face feature (eyes, mouth) as within-subject factors, and DL instruction (DL: Eyes, DL: Mouth) and whether participants were informed about the recognition test (uninformed, informed) as between-subject factors. The three-way interaction between viewing instruction, face feature, and DL instruction was significant, F(1,100) = 126.40, p < .001, as were the lower-order interactions and main effects involved, all ps < .001. Whether participants were informed or not of the recognition test did not significantly affect dwell time, nor did this significantly interact with any other factors, all ps > .05.

To explore the three-way interaction, follow-up analyses for FV and DL trials revealed a significant main effect of face feature for both FV, F(1,102) = 200.78, p < .001, and DL trials, F(1,102) = 41.63, p < .001. However, dwell times to DL faces were only influenced by DL instruction, F(1,102) = 33.62, p < .001; FV faces were unaffected by participants receiving different DL instructions in the other encoding block, ps > .05.
For the DL faces, there was also a significant interaction between face feature and DL instruction, F(1,102) = 157.10, p < .001. This interaction was examined via independent t-tests, which showed that those told to avoid the mouth looked at that feature less than those told to avoid the eyes did, t(51.14) = 8.13, p < .001, and similarly, those told to avoid the eyes looked at that feature less than those told to avoid the mouth did, t(52.62) = 10.12, p < .001. Table 6.1 provides the area normalized and non-normalized fixation counts and dwell times to the eye and mouth regions across FV and DL instructions; it clearly demonstrates that participants were able to reduce looks to the instructed feature. Figure 6.2 shows the area normalized values for the encoding phase.13

13 Though not critical to the analyses of the present study, additional examination showed that the results of the encoding phase for Experiment 1 replicated the results of Laidlaw et al. (2012), specifically for upright faces: while looks and overall dwell time to the eyes and mouth decreased when participants were told to avoid the feature, only participants told to avoid the eyes showed a small but significant bias to continue to attend to the don't look feature.

Figure 6.2 Normalized fixation dwell times to the eye and mouth ROIs across different viewing instructions during the encoding phase of Experiment 1. There was a bias to attend to the eyes over the mouth during Free Viewing, but participants did well to avoid the feature instructed in either DL condition. Error bars represent standard error throughout.
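The area-normalization computation described in the Data handling section is simple enough to express directly in code. The sketch below is illustrative only: the function name and example values are mine, not taken from the thesis's analysis code.

```python
def normalized_fixation_value(roi_fixation_pct, roi_area_pct):
    """Area-normalize a fixation measure (fixation count or dwell time).

    roi_fixation_pct: percentage of all fixations (or of total dwell
        time) that landed within the ROI.
    roi_area_pct: the ROI's area as a percentage of the screen area.

    A value of 1 means the ROI was fixated exactly as much as randomly
    placed fixations would predict; values above 1 indicate biased
    selection of the region, values below 1 indicate avoidance.
    """
    return roi_fixation_pct / roi_area_pct

# Example from the text: 10% of fixations on an ROI covering 10% of the
# screen yields the chance-level value of 1.
print(normalized_fixation_value(10.0, 10.0))  # 1.0

# A small ROI (2% of the screen) attracting 30% of fixations reflects a
# strong selection bias.
print(normalized_fixation_value(30.0, 2.0))   # 15.0
```

This makes explicit why the normalization removes the size advantage of larger ROIs: a region's raw fixation share is judged against the share it would receive by chance.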
Table 6.1 Average non-normalized and normalized fixation number and dwell times to eyes and mouth ROIs during encoding

                     Non-normalized Values              Normalized Values
             Eyes ROI          Mouth ROI          Eyes ROI         Mouth ROI
Condition    Number  Dwell     Number  Dwell      Number  Dwell    Number  Dwell
FV           3.34    1005.35   0.67    191.89     35.16   40.54    6.01    6.67
DL: Eyes     0.25    64.32     1.29    591.40     3.02    2.49     15.96   19.97
DL: Mouth    3.57    1421.29   0.06    13.10      44.78   54.98    0.58    0.43

Note. ROI = region of interest; FV = Free Viewing; DL = Don’t Look; bolded values denote behaviour to the feature participants were told to avoid.

6.2.2.3 Recognition phase

6.2.2.3.1 Did participants remember faces differently based on encoding instructions?

Overall, participants averaged 74.26% correct responses (SD = 10.31%) in the recognition test (i.e. labeling previously unseen faces as new and faces from the encoding block as old), which was significantly greater than chance, t(103) = 23.99, p < .001. To determine if participants' recognition performance was affected by whether they had viewed the faces previously within the FV or DL encoding blocks, a log d discriminability index was calculated for both FV and DL images, using the same set of new images. Log d is used in a similar way to the more common d' sensitivity index, such that better discrimination performance is denoted by a higher score, but is a more reliable index when the trial number is relatively small (Brown & White, 2005). Three participants did not make any errors, so to include them, .5 was added to the scores for all participants, as recommended by Brown and White when dealing with extreme performance. Log d is calculated as

log d = 0.5 × log10[(Hits × Correct Rejections) / (Misses × False Alarms)]

where Hits and Correct Rejections refer to accurately categorizing old faces as old and new faces as new, respectively.
Misses and False Alarms refer to inaccurately categorizing old faces as new and new faces as old, respectively. As the focus of the present experiment was on accuracy rather than speed, and the accuracy results were not contradicted by reaction time results (i.e. the behaviour reported does not reflect a speed/accuracy trade-off), reaction times are not reported but are instead available in the Appendix for both Experiments 1 and 2.

A mixed-factors ANOVA with encoding viewing instruction (shown in FV or DL encoding block), DL instruction (DL: Eyes, DL: Mouth), and whether participants were informed of the recognition test (uninformed, informed) as factors was performed on participants' log d values. The highest-order significant interaction was between encoding viewing instruction and specific DL instruction, F(1,100) = 11.81, p = .001. Follow-up analyses of the interaction revealed that while participants given different DL instructions did not differ in their recognition performance for faces originally viewed during FV encoding, t(102) = .16, p = .87, participants in the DL: Eyes group performed significantly worse than participants in the DL: Mouth group for faces previously seen during the DL encoding block, t(96.55) = 2.23, p = .03. Returning to the main ANOVA, there was also a significant main effect of encoding viewing instruction, F(1, 100) = 51.03, p < .001, showing better performance when the face was originally viewed during the FV encoding block than in the DL encoding block. There were no other significant main effects or interactions, ps > .05. Results from the recognition test are represented in Figure 6.3.

Figure 6.3 Discrimination performance at determining whether faces presented in the recognition phase were previously seen during encoding or were new, for Experiment 1. Performance is shown for faces originally seen in Free Viewing, under DL: Mouth, or under DL: Eyes viewing instructions.
Performance was worse for faces seen in DL than FV conditions; further, participants performed worse for faces originally seen in the DL: Eyes than in the DL: Mouth group.

6.2.2.3.2 Did participants look at faces differently based on encoding instructions?
Overall, participants dwelled on the faces for an average of 1080.28 ms (SD = 619.92 ms) and made 5.25 fixations (SD = 2.46). A mixed-factor ANOVA with face category (seen previously during FV, DL, or a New face), DL instruction (DL: Eyes, DL: Mouth), and whether participants were informed of the recognition test (informed, uninformed) as factors revealed only a significant main effect of face category, F(2,200) = 5.29, p = .006, which was due to participants making very slightly more fixations and overall looking slightly longer to DL faces than to FV, t(103) = 2.29, p = .02, or New faces, t(103) = 3.14, p = .002, with no difference between FV and New faces, ps > .05. No other main effects or interactions were significant, ps > .05. Though significant, these differences represent fractions of a fixation (.17) and small differences in dwell time (~40 ms) and should be interpreted as such.

To test whether looking behaviour changed as a function of initial viewing instructions, normalized fixation count and dwell times were compared across face category (new, FV, DL), DL instruction (DL: Eyes, DL: Mouth), whether participants were informed of the recognition test (uninformed, informed), and face feature (eyes, mouth) using a mixed-factor ANOVA. There was a main effect of face feature, F(1,100) = 119.85, p < .001, and an interaction between face feature and face category, F(1,100) = 5.74, p = .004.
This interaction reflected only numerically small changes in the time or number of fixations directed to the eyes or mouth across conditions; more relevant to the current question, there were always more looks to the eyes than to the mouth for all face types, all ps < .001. These analyses demonstrate that overall, there was little change in looks directed to the eye and mouth ROIs depending on what instructions were followed during encoding.

6.2.3 Discussion
The aim of Experiment 1 was to determine whether there would be a recognition memory cost associated with not looking at the eyes of faces during encoding, as compared to either looking at faces naturally or avoiding another feature, the mouth. When looking at faces naturally, the eyes were looked at most often and for the longest. Participants followed encoding instructions, dramatically reducing looking to the eyes in the DL: Eyes condition, or to the mouth in the DL: Mouth condition. The inclusion of the DL instructions themselves had a detrimental effect on recognition performance, as overall recognition was worse for DL faces than FV faces. Importantly, participants were worse at discriminating faces that were originally viewed under DL: Eyes instructions than those viewed under DL: Mouth instructions (as compared to new faces). Being informed of the recognition test in advance did not influence looking during encoding, or subsequent recognition performance. These results are in line with previous findings that demonstrate the functional utility of the eyes in memory performance (Henderson et al., 2005; McKelvie, 1976; M. F. Peterson & Eckstein, 2012; Sekiguchi, 2011). Critically, however, these findings also show that limiting overt attention to the eyes, without necessarily restricting covert attention, is sufficient to compromise face learning.
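For concreteness, the log d index used throughout these recognition analyses can be sketched in code. This follows Brown and White's (2005) definition with the .5 correction described above; the function and variable names are ours, not the thesis's.

```python
import math

def log_d(hits, misses, correct_rejections, false_alarms, correction=0.5):
    """Log d discriminability index (Brown & White, 2005).

    `correction` is added to every cell so that error-free
    performance (zero misses or false alarms) still yields a
    finite score.
    """
    h = hits + correction
    m = misses + correction
    cr = correct_rejections + correction
    fa = false_alarms + correction
    return 0.5 * math.log10((h * cr) / (m * fa))

# Invented counts for illustration:
score = log_d(hits=20, misses=5, correct_rejections=18, false_alarms=7)
perfect = log_d(hits=25, misses=0, correct_rejections=25, false_alarms=0)
```

Higher scores indicate better discrimination of old from new faces; the correction keeps the perfect-performance case finite rather than infinite, which is why it is preferable to d' with small trial counts.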
Although participants in Experiment 1 were free to direct their covert attention across the whole of the face, they may nevertheless have avoided attending to the DL feature. Support for this view comes from the common finding that attentional and oculomotor behaviour are often tightly linked. Further, in Experiment 1, participant performance was worst for faces seen in the DL: Eyes block, presumably because attention in general was diverted away from the eye region, which contained valuable information that is normally used during face learning. If the DL instructions unintentionally prompted participants not only to avoid looking at the eyes, but also to avoid attending to them altogether, then Experiment 1's findings might actually underestimate the effectiveness of the eye region during face learning. If covert attention is directed to the eyes, even when overt looking is restricted, it is possible that this will afford some encoding advantage compared to when attention is directed elsewhere on the face. Experiment 2 directly tested this possibility.

6.3 Experiment 2: Isolated covert attention to facial features and its influence on recognition
Experiment 2 aimed to determine whether covertly attending to the eyes would be sufficient to generate a memory advantage, even when looks to the feature are restricted. Whereas in Experiment 1 participants were told to avoid looking at a particular feature while otherwise moving about the face normally, in Experiment 2 participants were asked to do the reverse. Specifically, fixation was maintained at a central point equidistant from the eyes and mouth, and only covert attention was to be shifted to either the eyes or the mouth. To maximize power, the encoding viewing instructions were manipulated within-subjects: participants were told to maintain fixation while covertly attending to either the eyes or, in a counter-balanced block, the mouth.
If directing covert attention to the eyes facilitates face learning, then participants should perform better at discriminating new faces from old faces originally encoded while covertly attending to the eyes than while covertly attending to the mouth. Participants were once again either informed or not informed of the upcoming recognition test prior to the encoding block.

6.3.1 Methods
6.3.1.1 Participants
Results from 39 participants (Mage = 21.51 years, SD = 4.56; 28 females) are reported, all of whom completed the experiment in exchange for course credit or remuneration. All participants gave informed consent.

6.3.1.2 Stimuli and procedure
Stimuli and procedure were the same as those used in Experiment 1, with the following exceptions. Rather than completing a FV block and one DL block during encoding, in Experiment 2 each participant completed two encoding blocks, during which they were told to keep their eyes fixated on a central red dot (.32˚ x .32˚) that was equidistant from the center of the eyes and mouth, and to shift their covert attention to a particular feature. In the Attend: Eyes block, participants were told to keep their eyes fixated at the central position, but to move their covert attention to focus on the eyes of the faces. Covert attention was described as using peripheral vision to attend or focus while still keeping one's eyes at a different location. In the Attend: Mouth block, the instructions were adjusted so that participants were asked to covertly attend to the mouth. If fixation was not maintained within a 4.2˚ x 4.4˚ ellipse around the red fixation point, the face was replaced with a black-and-white noise mask on which the central fixation point remained visible. Once fixation returned to the central position, the face was once again presented. If fixation was not maintained for a total of 10 seconds, the trial was terminated and excluded from analysis. Faces were shown for five seconds each.
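The gaze-contingent display logic just described can be sketched as follows. Coordinates are in degrees of visual angle; treating the reported 4.2˚ x 4.4˚ values as the ellipse's full width and height (so semi-axes are half of those) is our assumption, as are the function names.

```python
# Sketch of the gaze-contingent check: the face is shown only while
# gaze stays inside an ellipse centered on the fixation dot.
# Assumption: 4.2 x 4.4 degrees are the ellipse's full width/height.

def gaze_inside(gx, gy, cx, cy, width_deg=4.2, height_deg=4.4):
    """True if gaze (gx, gy) falls inside the fixation ellipse."""
    rx, ry = width_deg / 2, height_deg / 2
    return ((gx - cx) / rx) ** 2 + ((gy - cy) / ry) ** 2 <= 1.0

def masked_too_long(samples, cx, cy, sample_ms, limit_ms=10_000):
    """True if cumulative time with gaze outside the ellipse
    (i.e. mask on screen) reaches the 10-second termination limit."""
    masked = sum(sample_ms for gx, gy in samples
                 if not gaze_inside(gx, gy, cx, cy))
    return masked >= limit_ms
```

In the running experiment, `gaze_inside` would be evaluated on each eye-tracker sample to toggle between the face and the noise mask, and `masked_too_long` would decide whether to terminate and discard the trial.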
The two encoding blocks were counterbalanced across participants and separated by a self-paced break.

To motivate participants to shift and maintain their covert attention on the indicated feature, they were told that they would be completing a probe detection task and that occasionally, a small black dot (.32˚ x .32˚) would appear in the feature they were told to covertly attend to. Upon detection, participants were to press the space bar as quickly as possible. For a given face, the black dot was positioned in either the left or right eye, or the left or right corner of the mouth, depending on which feature participants were told to attend to. Probes were always equidistant from the central fixation point and occurred pseudo-randomly on 6 faces per encoding block (i.e. 24% of the time). On a probe trial, the probe appeared at a pseudo-random time between 1000 ms and 4000 ms after face presentation. If a response was not made within 1000 ms, the probe was removed from the face and the trial continued.

The recognition portion of the experiment was the same as that used in Experiment 1. Participants were told that there were no viewing restrictions for the recognition phase.

6.3.2 Results
6.3.2.1 Data handling
In addition to using ROIs traced directly around the eyes and mouth to examine looking behaviour to those features, generalized rectangular regions around the eyes (including eyebrows and eyelids) and mouth (including the cheeks on either side of the mouth) were drawn for Experiment 2 (visible in Figure 6.1). These were used in the 'error' analysis of the encoding phases: whether participants moved their eyes towards a region they were told to covertly attend to when they should have been maintaining fixation on a central point. The general region around the eyes was significantly larger than the general region around the mouth, t(99) = 20.64, p < .001.
Thus, as in Experiment 1, data were area normalized following the same procedure. Participants were excluded if they failed to respond at all to the probe detection task (n = 4). Faces were excluded from both encoding and recognition analyses if, during encoding, the trial was terminated because participants did not maintain fixation for a prolonged duration (i.e. the noise mask was presented for a total of 10 seconds; 3 trials). Trials in which responses were made within 100 ms (anticipations) were excluded from encoding and recognition analyses (4 trials). The first and last fixations were trimmed to the time during which the face was presented (e.g. a fixation was clipped if it began prior to or finished after the presentation of the face).

As with Experiment 1, the manipulation of whether participants were informed of the upcoming recognition test did not significantly influence the reported results from Experiment 2, all ps > .05. To streamline the reported results, findings are therefore collapsed across this condition. Fixation dwell times are reported; results for fixation number mirror those for dwell time.

6.3.2.2 Encoding phase
6.3.2.2.1 Did participants covertly attend to the instructed feature?
The number of probes caught and the reaction times for those responses were calculated for the encoding blocks. On average, participants detected the probe on 76.75% of probe trials (SD = 19.98%), with a mean response time of 508.75 ms (SD = 99.16 ms). Overall, probe detection was high, with participants detecting an average of 9.13 of 12 probes. Based on paired-samples t-tests, neither the percentage of probes caught nor the response times differed significantly across attend instructions (eyes, mouth), ps > .05.

Fixation errors to the eyes and the surrounding regions (see Figure 6.1) were also examined as an additional measure of covert attention.
As covert attention precedes an overt shift of the eyes, and the two processes are often tightly linked, it follows that participants would make more erroneous saccades to a feature (and its surrounding area; errors may not be precise) when it was being attended than when it was not. It is important to note that these errors do not constitute instances in which the feature itself was actually fixated, as a mask replaced the face any time fixation left the central zone; they merely represent oculomotor inhibitory errors. The area-normalized dwell times to the generalized eyes and mouth ROIs were submitted to a within-subjects ANOVA with face feature area (eyes, mouth) and attend instructions (eyes, mouth) as factors. There was a main effect of attend instructions, F(1,38) = 6.00, p = .02, a main effect of face feature area, F(1,38) = 12.30, p = .001, and a significant interaction between these factors, F(1,38) = 21.63, p < .001. Paired-samples t-tests showed that participants looked more towards the eyes in the Attend: Eyes block than in the Attend: Mouth block, t(39) = 4.21, p < .001, while they looked more towards the mouth in the Attend: Mouth block than in the Attend: Eyes block, t(38) = 3.01, p = .005. Overall then, there is evidence that participants followed the instruction to attend to a particular feature while maintaining central fixation.

6.3.2.3 Recognition phase
6.3.2.3.1 Did participants remember faces differently based on encoding instructions?
Overall, participants averaged 63.38% correct responses (SD = 6.74%) in the recognition test (i.e. labeling previously unseen faces as new and faces seen during encoding as old), which was significantly greater than chance, t(38) = 12.40, p < .001.
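The comparison against chance reported here is a one-sample t-test of accuracy scores against 50%. A minimal sketch, with invented accuracy values rather than the actual data, is:

```python
import math

def one_sample_t(scores, mu):
    """One-sample t statistic for `scores` against hypothesized mean `mu`."""
    n = len(scores)
    mean = sum(scores) / n
    # Sample variance with n - 1 degrees of freedom
    var = sum((x - mean) ** 2 for x in scores) / (n - 1)
    return (mean - mu) / math.sqrt(var / n)

# Invented per-participant accuracy percentages, tested against
# 50% chance:
t_stat = one_sample_t([60, 65, 70, 62, 68], mu=50)
```

A large positive t (evaluated against the t distribution with n - 1 degrees of freedom) indicates that accuracy reliably exceeds chance, which is the pattern reported for both experiments.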
To test whether attending to the eyes or mouth of a face during encoding influenced later recognition, two log d scores were calculated for each participant: one for faces originally seen under Attend: Eyes instructions, and one for faces originally seen under Attend: Mouth instructions, using responses to new faces as the false alarm and correct rejection values. Values were compared across levels of encoding attend instructions (seen previously during Attend: Eyes or Attend: Mouth) via a paired-samples t-test, which showed no significant difference, t(38) = .27, p > .05 (Figure 6.4).

Figure 6.4 Discrimination performance at determining whether faces presented in the recognition phase were previously seen during encoding or were new, for Experiment 2. Performance is shown for faces originally seen under Attend: Eyes or Attend: Mouth viewing instructions. There were no differences in recognition performance.

6.3.2.3.2 Did participants look at faces differently during recognition based on encoding instructions?
Overall, participants dwelled on the faces for an average of 1373.09 ms (SD = 605.60 ms) and made 6.17 fixations (SD = 2.17). A within-subjects ANOVA with face category (seen previously during Attend: Eyes, Attend: Mouth, or New) did not reveal any significant differences in total dwell time, F(2,76) = .71, p > .05: how participants attended during encoding did not reliably influence overall looking during recognition.

To test whether looking behaviour changed as a function of initial viewing instructions, normalized fixation count and dwell times to the eyes or mouth were compared across face category (Attend: Eyes, Attend: Mouth, New) and face feature (eyes, mouth) using a within-subjects ANOVA. There was a main effect of face feature, F(1,38) = 32.92, p < .001, and an interaction between face feature and face category, F(2,76) = 3.91, p = .02.
The main effect of face category was not significant, p > .05. The interaction reflected that, compared to new faces, participants spent slightly longer looking at the eye ROI for faces previously seen in the Attend: Mouth, t(38) = 2.31, p = .03, or Attend: Eyes conditions, t(38) = 2.25, p = .03. Again, there were always more looks to the eyes than to the mouth for all face types, all ps < .001. These analyses demonstrate that overall, there was little change in looks directed to the eye and mouth ROIs depending on what instructions were followed during encoding.

6.3.3 Discussion
The results of Experiment 2 demonstrate that focusing covert attention on the eyes did not improve recognition performance as compared to when attention was covertly directed to another facial feature, the mouth. During the encoding phase, participants successfully followed the covert attention directions: not only were probes frequently caught at the to-be-attended feature, there were also more saccade errors directed to a feature when participants were asked to shift their covert attention to it. This latter point is especially revealing because covert attention is known to precede overt shifts of the eyes (Deubel & Schneider, 2003; Hoffman & Subramaniam, 1995), and the two often move together naturally. It follows that if covert attention was already directed to a particular location, it might occasionally serve to 'pull' the oculomotor system in the same direction. Though the mouth is frequently fixated, it does not appear to be as informative a facial feature for neutral, static face learning (Tanaka & Farah, 1993; Walker-Smith, 1978). As such, if covert attention was sufficient to generate an encoding advantage, it should have been evident when compared against the Attend: Mouth condition.
Despite the clear evidence that participants did shift their attention to the specified feature during encoding, subsequent discrimination performance in the recognition phase did not differ based on which feature they attended to. This strongly suggests that directing only covert attention to the eyes is insufficient to provide a facial recognition advantage, as compared to when covert attention is directed elsewhere on the face.

6.4 General discussion
The current studies assessed whether it is necessary to fixate and attend to the eyes of a face in order to confer a facial encoding advantage, as measured in a subsequent recognition task, or whether covertly attending to the feature would instead generate a benefit. To test this, two studies examined the utility of overtly versus covertly attending to the eyes during face learning. In Experiment 1, participants were asked to look at a series of faces either without specific instructions, or while avoiding looking at either the eyes or the mouth. Immediately following this task, participants performed a recognition task. Participants were worse at discriminating faces seen under restricted viewing conditions than faces that had been freely viewed. Further and critically, avoiding looking at the eyes resulted in a greater disadvantage than avoiding looking at the mouth: thus, looking to the eyes during learning was important for effective facial recognition. Advance knowledge of the recognition test did not interact with this eye processing advantage, suggesting that the benefit of processing eye information is not strongly modulated by encoding strategy. To determine if there is any advantage to covertly attending to the eyes, in Experiment 2 participants were asked to maintain fixation centrally while directing their covert attention to either the eyes or the mouth.
Though participants shifted covert attention in accordance with directions, there was no recognition advantage for attending to the eyes rather than to the mouth, nor did advance knowledge of the recognition test improve performance. Taken together, these results demonstrate that it is not sufficient to simply direct covert attention to the eyes; instead, directing overt attention to the eyes of another person benefits face learning whether or not one knows that there will be a subsequent recognition memory test. Though it has been reported several times that the eyes convey important information used during face learning (Henderson et al., 2005; McKelvie, 1976; Sadr, Jarudi, & Sinha, 2003; Schmalzl et al., 2008; Schyns et al., 2002; Tanaka & Sengco, 1997), until now it has not been clear whether this feature necessarily requires fixation for optimal face encoding. Certainly, there are specific details present in the eyes that may be processed better when fixated. For instance, small changes in eye and eyebrow shape, size, and colouring may all be unique and therefore diagnostic of identity (e.g. Sadr et al., 2003). In contrast, however, a large body of literature has demonstrated that facial recognition is in part accomplished holistically, such that changing the configuration of the features in a face can affect recognition (Tanaka & Farah, 1993; Tanaka & Sengco, 1997), and that holistic processing even predicts recognition performance (Richler et al., 2011). Within these tasks, changing the configuration of the eyes has a strong impact on subsequent recognition, suggesting that the relative placement of this feature is critical to face learning (Tanaka & Sengco, 1997). From this perspective, it would not be strictly necessary for the eyes themselves to be fixated, only that they are available to be seen peripherally and remain in a relatively consistent configuration.
Considering these two standpoints, the results of this study help clarify the utility of the eyes: their presence is not only important for effective face learning, but performance is also improved when fixations are centered on them. This is not to say that holistic processing does not play a role in face learning, as it has previously been shown that it does (Richler et al., 2011). In addition to the eyes providing specific details that are better processed when fixated, it has also been posited that saccadic transitions between the eyes may be important in encoding face configuration, perhaps in learning second-order relations (Henderson et al., 2005). This view was recently supported by the finding that participants with better facial recognition scores also transitioned between the eyes more frequently (Sekiguchi, 2011). Thus, centering fixations about the eyes may provide valuable configural information as well. Indeed, Laidlaw and colleagues (2012) reported evidence of a non-volitional bias to fixate the eyes of upright, but not inverted, faces, and suggested that initial holistic processing may have been partially responsible for driving attention to the eyes. In this way, the commonly observed bias to look to the eyes might not only represent a need to fixate this feature to process the information it contains, but also a means of optimally placing one's gaze in order to maximally process the face as a whole.

If one considers fixating the eyes to be an optimization strategy, then it is not strictly required that this strategy be universally adopted across either cultures or varying task demands. To the first point, recent findings suggest that while Western Caucasian participants tend to center fixations about the eyes, East Asian (EA) participants fixate more centrally (Blais et al., 2008).
This central preference may be due to an increased reliance on parafoveal information for face processing in EA participants (Caldara et al., 2010). When forced to use foveal information for recognition, EA participants often look to the eyes, suggesting that this information is what is being processed parafoveally under normal circumstances. Is the same also true for face learning? If so, covertly attending to, rather than fixating, the eyes may be of greater use for EA participants than our present results demonstrate. As our participants were not recruited on the basis of their ethnicity or cultural experiences, this must remain a question for future investigation.

Given the perspective that fixating the eyes may represent a strategy for maximal face processing, it is also worth considering whether the same bias to look at the eyes would be observed in other circumstances. While for static images the eyes may represent the most informative feature, this may not be true when faces convey certain emotions (Blais, Roy, Fiset, Arguin, & Gosselin, 2012; Eisenbarth & Alpers, 2011) or when motion is introduced. For instance, Lansing and McConkie (2003) demonstrated that while participants fixate the eyes of static video stills, fixations move towards the mouth when the video is played. Võ, Smith, Mital, and Henderson (2012) also showed that including audio (i.e. speech) in a video reduced fixations to the eyes. A topic for future investigation is whether, under other circumstances, fixating the eyes becomes less relevant, or even detrimental, for effective face recognition. Finally, it is prudent to return to the role of covert attention during face viewing. Although the present data imply that covert attention to the eyes does not improve face learning, they do not rule out the possibility that covert attention is used in other ways during face learning or other social experiences.
For instance, in real life, staring at another's eyes can be interpreted in a number of ways (e.g. to convey intimacy or exert control; see Kleinke, 1986, for review), not all of which may be appropriate in a given situation (Wu et al., 2014). As such, fixations to another's eyes, or to another person in general, may be reduced in some live situations. Though not related to face learning, examples of more general real-life social attentional behaviours are detailed in the results from Chapters 2 and 3. Further, the results of Chapter 3 demonstrate that while overt attention may be limited in certain real-life social situations involving unfamiliar others, observers still covertly attend to the actions of nearby others. Interestingly, the different gaze behaviours of the confederate in Chapter 3 did not have a significant impact on pedestrian looking behaviour, which suggests that covert attention may not be sufficient to pick up on relatively small changes in other people's actions in real life, such as when another person shifts their gaze.

In summary, the present studies demonstrate that restricting overt attention away from the eyes impairs face learning, and that covert attention does not compensate for the inability to fixate this feature, suggesting that overt attention to the eyes is a key factor in face learning.

Chapter 7: General discussion
The studies in the present document explore how people selectively attend to social stimuli across a range of different experimental paradigms. In so doing, I have aimed to explore both the generalizability and the flexibility of human social attentional behaviour. Until recently, social attention had primarily been studied using a relatively limited toolbox, leading researchers to learn a great deal about how people look to representations of other people.
However, this represents one small slice of everyday experience, and researchers are now acknowledging that understanding social attention will require investigating it across a broader range of conditions than has previously been explored (Kingstone et al., 2008; Kingstone, 2009; Pfeiffer, Vogeley, & Schilbach, 2013; Przyrembel, Smallwood, Pauen, & Singer, 2012; Risko et al., 2012; Schilbach et al., 2013). In addition to asking new questions using highly controlled lab experiments, the preceding chapters also document how social attention operates in new environments, mainly everyday, real-life scenarios. The purpose of this chapter is to consider how the findings presented in the current document expand the understanding of social attention. I began this thesis with four questions; I now return to these questions, and follow with a discussion of additional implications and future directions.

7.1 How is overt and covert attention directed to social stimuli in real life?
It is a commonly held belief that people are interested in other people, and further, that humans spend a great deal of time attending to others around them. There is no doubt that there is some truth to this belief: in intimate settings, people do pay close attention to others (Freeth, Foulsham, & Kingstone, 2013; Vertegaal et al., 2001; Wu et al., 2014), and this focusing of one's attention on other people helps to facilitate fluid interactions (Kitazawa & Fujiyama, 2008; Saxe, 2006; Senju, Johnson, & Csibra, 2006). But can it be said that these intimate interactions accurately represent how people attend to others more generally? Consider walking down a street: it would be exceedingly inefficient, not to mention potentially invasive, to try to engage in direct eye contact with everyone who passes.
In his essays about human behaviour in public spaces, Erving Goffman (1963) argued that in 'informal' interactions with others – that is, instances in which people do not initiate conversation or otherwise intimately engage with one another – other people are actually infrequently the focus of one's attention. He termed this practice 'civil inattention', suggesting that strangers more often avoid looking at others nearby, and limit their overt behaviour to a brief glance so as to simultaneously acknowledge the other's presence while maintaining their own sense of privacy. Implicit in this behaviour is the understanding that looking serves a purpose beyond information acquisition: looking in the presence of others is a fundamentally social behaviour, a point I will come back to later. Goffman argued that civil inattention was central to maintaining public order, though experimental evidence supporting the idea is thin. For instance, Zuckerman and colleagues (1983) reported evidence consistent with civil inattention in elevators, though across four studies, Cary (1978a) failed to find support for the practice when filming pedestrians on a college campus.

Years later, vision scientists picked up where sociologists left off, except with a stronger focus on rigid experimental control, relying more heavily on computer-based paradigms. Interestingly, the findings from these laboratory studies painted a very different picture than what had been observed in (some of) the older field experiments: observers often disproportionately directed their attention to images of others, and did so in an overt manner (Birmingham, Bischof, & Kingstone, 2007; Birmingham et al., 2008a, 2008b; Foulsham et al., 2010; Zwickel & Võ, 2010). Considering this, the study of whether social attention is directed to strangers in real life fulfills a greater need than simply supporting Goffman's propositions.
In addition, its exploration is central to beginning to understand the generalizability and flexibility of social attention. To this point, the results from Chapters 2 and 3 show that people do attend to strangers in everyday settings. Further, the findings provide insight into why this conclusion has eluded researchers for decades, as the manner in which attention is directed in real life does not at all mimic what had been observed within the lab.

Chapter 2 represents the first direct comparison of overt social attention towards a real person as compared to their representation. Considered on its own, one might be tempted to conclude from the results of Chapter 2 that people do not frequently attend to real strangers and that instead, the social attentional bias might be limited to viewing images or videos of people. In the study, participant eye movements were recorded as they sat in a waiting room prior to beginning an unrelated task. Half of the participants waited while a confederate quietly completed a questionnaire nearby, whereas the other participants saw a video of the same confederate playing at a nearby workstation. Although participants frequently looked to the videotaped confederate, the live confederate was rarely fixated. In fact, participants were less likely to look to the live confederate than they were to look at a baseline, non-social object. Participants were also much more willing to turn their head towards the video than they were to turn towards the live confederate.

The results from Chapter 2 mirror Goffman's original theories and also support newer findings of a general avoidance of looking at others. For instance, Gallup and colleagues (2012) found that people rarely followed the gaze of approaching pedestrians when those pedestrians looked towards a visually salient stimulus. Similarly, in another study, only roughly one in five pedestrians passing in front of another person(s) followed their gaze (Gallup, Hale, et al., 2012).
Though these studies focused on orienting away from another in response to their gaze behaviour, attending to the other person is a necessary precursor to this behaviour, and thus one could argue that gaze following rates were so low because the people were not attended to in the first place.
However, it is important to avoid equating not looking with not attending. Indeed, to avoid looking at another person may require that the observer initially (and even intermittently) attend - albeit discreetly - to the person they are not looking to. Rather than avoiding others entirely, do people instead deploy attention covertly in real-life social environments? The results of Chapter 3 overwhelmingly suggest that this is the case. Pedestrians were filmed as they walked past a confederate who performed no action, waved, or answered his phone. All actions were initiated when the pedestrians were not looking, and yet pedestrians were much more likely to look in response to the confederate waving than in either of the other two conditions. As the wave and phone actions were closely matched, this difference in response behaviour across confederate actions could only have occurred had pedestrians initially been covertly attending to the confederate. The results of this study demonstrate that attention is directed to nearby others in real life, albeit covertly.

7.2 How is volitional and non-volitional covert and overt visual attention deployed to representations of faces, and socially relevant facial features (e.g. eyes)?

Whereas Chapters 2, 3 and 6 are concerned with the manner in which attention is deployed, overtly or covertly, Chapters 4 and 5 focus on whether social attentional selection is volitionally or non-volitionally driven. Investigating the control of social attention is relevant to a larger debate about the uniqueness of social stimuli, in particular the eyes of other people.
Some have argued that eyes (and more generally, people) constitute a unique attentional trigger, such that they serve to guide attention in a manner not explained by other mechanisms (such as their salience; Birmingham et al., 2009b). In contrast, it could be argued that social stimuli are not special and that their attentional draw can be explained either by other known attentional triggers or by a participant's own motivation to attend to other people, especially to their eyes (Bindemann, 2010; Ricciardelli et al., 2000). This debate has largely centered on whether the eyes can reflexively orient attention away from their location (i.e. cue directionality via gaze; Friesen & Kingstone, 1998; Hietanen, Nummenmaa, Nyman, Parkkola, & Hämäläinen, 2006; Tipples, 2002), but an equally relevant question is whether social stimuli initially 'grab' attention non-volitionally. Without this initial draw to orient towards other people and their features, the functional utility of gaze cuing is admittedly limited. Further to the idea of generalizability, if one can demonstrate that attention is non-volitionally attracted to social stimuli in one context (e.g. using images), it may be more reasonable to presume that orienting is at least partially non-volitional in other contexts (e.g. in naturalistic environments), such that although social influences might suppress orienting behaviour, they will not completely eliminate it.
Chapters 4 and 5 differ in what level of attentional selection was explored (more on this below), and also in what type of orienting was deployed, either overt or covert. In Chapter 4, participants were asked to make saccades to non-social target objects. In some trials, a distractor appeared, which for some participants was an image of a face and for other participants a meaningless oval (made by scrambling the face images).
Faces caused greater interference in correctly executed saccades: despite saccades landing at the target, they deviated more in response to a face distractor than to a non-face distractor. As both the salience of (van Zoest & Donk, 2005) and attention to (Sheliga, Riggio, Craighero, et al., 1995) a distractor increase its interference in saccade planning and result in greater saccade deviation, one can conclude that faces were covertly attended to more than the non-face distractor. Further, it can be argued that because attending to the distractor was irrelevant to performing the task, and dividing attention in this way could be detrimental to target processing, this deviation difference also reflects a non-volitional drive to attend to the faces. It is worth noting that this may not constitute a strictly 'reflexive' orienting bias to social over non-social stimuli, as there was no significant difference at early RTs, as would be anticipated with 'reflexive' orienting (Müller & Rabbitt, 1989). The time course of the influence that presenting a meaningful (social) stimulus has on oculomotor behaviour may have been relatively slower because the face was always presented as a distracting stimulus rather than as a possible target of action. Despite this, Chapter 4 demonstrates that social stimuli (or, at the very least, meaningful stimuli) are afforded a boost in representation within the oculomotor system's priority map, presumably because they attract attention in a non-volitional manner.
The bias to attend to social information is not simply limited to selection of the stimulus in general, as was revealed in Chapter 4. In Chapter 5, participants were asked to look at faces, but to avoid looking at either the eyes or the mouth. That is, I asked participants to override the biases to attend to these features observed under normal viewing conditions, in an effort to determine whether these biases could be volitionally controlled.
The results show that despite explicit instruction to avoid looking at the eyes, participants were unable to completely avoid overtly attending to the feature. To be clear, participants were able to drastically reduce looks to the eyes when instructed to, but critically, were unable to avoid looking entirely. This persistent bias was not observed for fixations directed to the mouth. In addition, Chapter 5 demonstrates that this bias is not simply due to low-level features, as participants were able to avoid looking at the eyes of the same faces when they were inverted. Nor was the bias a result of initial 'capturing' of attention by the eyes: even in the last four seconds of viewing, the non-volitional bias to attend to the eyes persisted. These results expand on previous reports of an eye bias when attending to faces by demonstrating that it is at least partially outside of the observer's control.

7.3 What purpose might a social attentional bias serve?

While the present dissertation mainly focuses on whether and how attention is directed to social stimuli, each chapter alludes to why this social bias is functional; these functions are briefly discussed below.
In the naturalistic environments tested in Chapters 2 and 3, attentiveness toward the confederate – even if it is covert and discreet – appears to facilitate communication. Covert deployment of attention to others, as observed in Chapter 3, allows for the quiet assessment of another's actions and intentions, thereby enabling the observer to strategically reserve overt attentional displays for situations where this may be appropriate (e.g. in an attempt to elicit further interaction) or necessary (e.g. to further process information not easily discernible using peripheral vision alone).
The infrequency of looks to the live confederate in Chapter 2 may therefore not represent a lack of attentiveness to another person, but rather a communicative signal to that person that the observer is unwilling to engage in interaction.
In Chapter 4, I argue that faces, and perhaps other social stimuli, are prioritized within the oculomotor system, presumably due to their social relevance. This is noteworthy because these faces were never task-relevant: they drew attention because attending to social stimuli is beneficial in the broader context of living within social environments. If also observed in real life, this biasing of attention to social stimuli may be beneficial as it would serve to direct attention quickly and efficiently towards behaviourally relevant stimuli, even when not immediately relevant to a person's task, for example when avoiding collisions with others in shared spaces. Further, Chapter 5's finding that attention is directed to the eyes non-volitionally, and that this may be associated with improved holistic face processing (as the bias was not observed with inverted faces), suggests a functional utility to directing overt attention to particular landmarks within a social stimulus.
However, the most direct test of the functionality of a social bias comes from Chapter 6. Participants were asked to either avoid looking at (Experiment 1) or covertly attend to (Experiment 2) the eyes or mouths of faces, and then were asked to discriminate faces as old (i.e. previously viewed) or new. Previous work has suggested that the eyes are important for face learning (Henderson et al., 2005; McKelvie, 1976) and recognition (M. F. Peterson & Eckstein, 2012; Williams & Henderson, 2007), but has confounded the roles of covert and overt attention to the eyes. The results of Experiment 1 show that without the ability to overtly attend to the eyes of the faces, participants perform much worse at a subsequent recognition task.
The second experiment expands on this finding to show that covertly directing attention to the eyes is not enough.
The work in Chapter 6 aligns with other research suggesting additional advantages to attending to the eyes, be it to infer intentionality, understand certain emotional expressions, or even detect a person within a complex scene (Emery, 2000; Lewis & Edmonds, 2003). It is unclear whether these functions necessitate overt attention or would also be possible using covert mechanisms, but the utility of attending to the feature is nevertheless clear. The effective and reciprocal focusing of overt attention to another also serves to foster positive social interactions (Cook, 1977) and facilitate turn-taking (Kendon, 1967; Rutter & Durkin, 1987). Finally, though not the focus of the present document, without attention directed to the eyes it would be impossible to elicit non-volitional gaze following in others, and one's ability to engage in triadic interactions with another person and the environment would be limited, as both are contingent on first looking to the eyes of another. Though the purpose of Chapter 6 in this document was to examine the utility of overt versus covert deployment of attention as it relates to face learning, when framed within the larger literature it serves to further demonstrate the value of attending to the eyes of faces in particular.

7.4 What influences how social attention is directed across contexts?

When images of faces or people are used as social stimuli, researchers consistently report that people will rapidly and preferentially orient their overt attention towards those stimuli (Crouzet et al., 2010; Fletcher-Watson et al., 2008). This bias cannot be explained by low-level visual salience (Birmingham et al., 2009b), and persists even when the stimuli are not task-relevant, or when fixating them is detrimental to performance (Carmel, Fairnie, & Lavie, 2012; Devue et al., 2012; Langton et al., 2008).
The eyes appear to be the most important social feature, and typically developing people will look to them more than to any other facial feature (Henderson et al., 2005; Janik et al., 1978; Langton et al., 2000; Schyns et al., 2002). Avoidance of the eyes has been linked to social impairments in ASD (Senju & Johnson, 2009a; Tanaka & Sung, 2013), and when the feature is masked, participants are worse at detecting people in scenes (Lewis & Edmonds, 2003), later recognizing the faces (McKelvie, 1976), and accurately judging emotionality (Baron-Cohen et al., 1997; Vuilleumier & Driver, 2007). Laboratory-based tasks have revealed important contextual and cultural modifiers of this general bias to attend to people and the eyes. For instance, East Asian participants appear to pay much less overt attention to the eyes than do Western participants (Blais et al., 2008), and male infants appear to orient to faces or make eye contact less than female infants (Connellan, Baron-Cohen, Batki, & Ahluwalia, 2000; Lutchmaya, Baron-Cohen, & Raggatt, 2002). Generally speaking, however, the conclusion from studies employing images of people is that there is a strong and overt bias to attend to social stimuli. The results from Chapter 5 support this view.
However, these findings stand in direct contrast to the first two experimental chapters in this thesis, as well as to more recent naturalistic observations. For instance, people are less likely to follow gaze when facing the looker than when the looker's back is to the observer (Gallup, Chong, et al., 2012), or when pedestrians walk behind rather than in front of the lookers (Gallup, Hale, et al., 2012). In real life, looks to an approaching pedestrian decline as they come closer; while this drop in looks also occurs when watching a video of the same event, it is less substantial than that observed in real life (Foulsham et al., 2011).
When looking at an image of a high-status person, participants look at the eyes less when they think the person can see them than when they think no one is observing their visual behaviour (Gobel et al., 2015). Certainly, there will be exceptions to these findings, for instance that people frequently look at their partner when engaging in conversation over a meal (Wu, Bischof, & Kingstone, 2013). However, the general finding from studying encounters with strangers in real life is that people do not readily overtly signal their attentional focus.
What does the exploration of the paradigms and results of the current document contribute to unraveling this apparent discrepancy between real-life and lab-based social attention results? At one extreme, it might be argued that what is being studied in the lab is simply no substitute for real life: there is something special about live others that fundamentally changes how attention is deployed, and there is no use trying to devise a cohesive understanding of attention to people from results using both real and represented others. In contrast, I prefer to think of computer-based tasks as representing one (of many) contexts within which social attention can be studied. In other words, images of people still function as social stimuli, but the circumstances under which they are encountered change the way in which an observer engages with them. In examining the results from Chapters 2-6, I would propose two factors, discussed below, that appear to have a strong influence in determining how attention is deployed to social stimuli. Under this flexible viewpoint, it is unreasonable to suggest that only results from real life can accurately reflect true social attentional behaviour. Instead, it is more accurate to hypothesize that as circumstances change, so too does one's expression of social attention.
Critically, however, the fundamental draw to pay attention to other people persists across paradigms, levels of realism, and the scale at which attentional selection is examined.

7.4.1 The influence of interactivity

One of the most conspicuous differences between real and represented others is the (potential) effect one's own looking behaviour has on the behaviour of the other person. In the case of an image or video, the effect is nil: the viewer can look without the expectation that the person in the image will react. In other words, overt looking behaviour cannot be interpreted by a one-way display: one can act upon an image, but it cannot act back.
This stands in stark contrast to real-life social situations. Kobayashi and Kohshima (2001a) argued that the very evolution of the human eye has been dramatically influenced by the human need to communicate with others using gaze. People are especially adept at determining another's gaze direction (N. C. Anderson, Risko, & Kingstone, 2011; Gamer & Hecht, 2007), and the direct gaze of another not only pops out in a crowd (Conty, Tijus, Hugueville, Coelho, & George, 2006; von Grünau & Anston, 1995), but elicits specific attributions about the other person and their intentions (Argyle, Lefebvre, & Cook, 1974; Emery, 2000; Thayer & Schiff, 1977). As such, in many encounters with nearby others, even if the other person is not especially engaged, they may nevertheless be capable of receiving and interpreting the signal generated by one's own eye movements. In this way, the potential for social interaction often differs between laboratory and real-life stimuli, which in turn changes the way in which gaze is used: either solely as a means of processing visual information, or additionally as a social communicative signal.
In Chapter 2, I proposed that this potential for social interaction with another is a major reason why real-life attentional effects seem superficially different from what has previously been observed using one-way computer-based tasks. Whereas in Chapters 2 and 3 the behaviour of the participant could initiate a dynamic interplay of responses and counter-responses between themselves and the confederate, in Chapters 4-6 participants knew that their actions would not elicit reactions from the images they were viewing.
Another factor that must be considered beyond the interactivity of a paradigm is the willingness of the observer to interact. This willingness might be mediated by the social norms that govern everyday social spaces, whereas different expectations about appropriate looking behaviour exist within the lab. Whereas in laboratory tasks participants are invited to attend to the faces of other people, in the naturalistic environments examined in the present document, consent to attend to the other person was not explicitly provided. Without the desire to further engage with another (Cary, 1978b), or the approval (either explicit through instruction or implicit based on social norms) to look towards another person, overtly directing attention to someone generates the risk of miscommunicating one's intentions (Bolmont, Cacioppo, & Cacioppo, 2014; Kleinke, 1986). Thus, when interaction is undesired, as it would have been in the waiting room in Chapter 2, avoidance of overt attention to a live other is a reasonable behaviour. Similarly, in Chapter 3, pedestrians were much less likely to look to the confederate when he answered his phone or did nothing than when he waved.
In the latter scenario, the confederate served to draw the pedestrian into an interaction; pedestrians signaled their awareness of the confederate by looking at him (and simultaneously signaled their unwillingness to engage further by limiting their look to a brief glance). Otherwise, pedestrians avoided gaze so as to also avoid further interaction. This represents a clear example of strategic gaze use (or withholding) in instances where overt attention could be interpreted by others. Recall that in Chapter 1, a distinction was made between covert attention as it is traditionally defined in visual cognitive research (a shift of attention without using the eyes), and a more commonplace understanding of the term covert (a stealthy or hidden action). Critically, it would appear that whereas covert attention need not be used for discretion (see Chapters 4 and 6 for examples), real-life social scenarios may present a perfect environment within which covert attention is deployed as a means of maintaining a degree of camouflage over one's attentional focus.
Though looking at pictures does not often elicit a sense of interaction, this should not be taken to mean that people will only ever deploy attention overtly within computer-based tasks. Chapter 4 demonstrates that people can generally avoid looking to faces when instructed. Likewise, despite evidence of a non-volitional overt component of orienting to the eyes, Chapter 5 also reveals that participants are adept at dramatically reducing looks to the eyes when required. Rather, overt attention may simply be the preferred method of deployment when social consequences are eliminated.
Consistent with this argument, new research using computer-based tasks has reported that people will change their attentional behaviour if they believe that their eye movements are signaling information to the person they are looking at (Gobel et al., 2015) or if they believe that the person they are attending to can look back (Montague et al., 2002; Redcay et al., 2010; Redcay, Rice, & Saxe, 2013; Teufel, Fletcher, & Davis, 2010). In sum, to divide attentional results based on whether they are derived from paradigms using a real or a represented person may be somewhat misleading; it may be more accurate to distinguish between interactive (or potentially interactive) and non-interactive contexts.

7.4.2 Differences in the stage of social attention studied

I would argue that, beyond differences in interactivity, computer-based and naturalistic studies may elicit different social attentional responses because they often focus on different stages of attentional selection. Consider a scene which contains a single person. Initial selection of the social stimulus would involve orienting attention towards the person at the expense of attending to the rest of the scene. This is akin to what was explored in Chapters 2 and 3: participants were free to attend to anything within their visual environment, which happened to include other people. Within the lab, however, researchers can control the type, location, and duration of each social stimulus, and in so doing likely influence the initial stage of social attentional selection. This is certainly true in tasks that use simplistic, isolated images and require engagement with the image in order to complete the task, as in Chapters 5 and 6: the first stage of social selection is virtually eliminated.
Even when researchers use more complex stimuli, presenting the stimuli via computer serves to pre-select relevant information for the observer, thereby eliminating the need for them to direct their attention outside the frame of the screen. Further, stimulus position can interact with known biases in how overt attention is deployed. For instance, center-of-gravity effects – in which attention is directed to the center of images – have been demonstrated to influence both person detection within scenes (Bindemann, Scheepers, Ferguson, & Burton, 2010) and initial fixations on faces (Bindemann et al., 2009).
However, provided that one acknowledges that a lack of interactivity could influence attentional deployment, laboratory tasks nevertheless seem well suited to studying subsequent stages of selection. For example, once a person is selected within a scene (or is pre-selected in a computer task by limiting the alternate objects available to attend to), attention is further biased to particular locations or features within that person, for instance to their face or, as was the focus in Chapters 5 and 6, to their eyes. This better reflects the stages studied in the last two experimental chapters, and is commonly the focus of many laboratory tasks. To date, the study of attention following initial selection using a real social stimulus has been limited, and may suffer from failing to present people in realistic environments.

7.4.3 An example of the roles of covert and overt (non-)volitional social attention

Based on the results of the present studies, and keeping these two important influences in mind – that is, that attentional behaviour changes based on both the potential to interact as well as on the level of attentional selection being examined – it is possible to generate a template of how social attention is deployed.
Overall, the present findings suggest that regardless of task, other people or their features appear to attract attention, such that it can be concluded that a bias to attend to nearby or relevant others transcends context and levels of interactivity. As already discussed earlier in this chapter, what changes across contexts and scale is not the presence or absence of a bias, but rather how a social attentional bias is manifested behaviourally. This may take the form of overt avoidance (which, it would be argued, requires initial attention to the person in order to avoid them), covert monitoring, or explicitly looking to another person or their features; I would argue that all of these behaviours stem from an underlying drive to selectively attend in some manner to social stimuli.
If one considers social attentional selection as a hierarchical, multi-staged process which acts at different levels of social interaction, how attentional behaviour changes can be examined using a straightforward example. Imagine that a pedestrian turns onto a relatively empty and narrow path into a park. Close by, a woman sits quietly on a bench. When in close quarters with another real person (possibly within a typical interactive spatial zone, e.g. Hall, 1968), covert attention will initially play the stronger role: the pedestrian will take note of the person on the bench but may not immediately signal that this other person has become the focus of their attention. If the person on the bench performs an action that makes her more relevant to the pedestrian, for example by raising her head and smiling, then the pedestrian, having detected this change peripherally, might shift their overt attention towards her and respond in turn. Without any further cue to interact, and with no motivation on the pedestrian's part to engage, however, the pedestrian will quickly divert their eyes away from the woman on the bench and continue walking.
Soon after, if the pedestrian encounters an advertisement containing an image of another person, their covert attention will likely be biased towards it based merely on its presence as a socially relevant stimulus. Unlike with a real person, however, it is likely that the pedestrian will follow up that covert orienting with an eye movement. After all, there is little risk in signaling one's own focus to an image, and no chance of initiating a social interaction with the picture. Given that the pedestrian can freely explore the image, they may be drawn non-volitionally to fixate the eyes of the image more often than other features of the face. This non-volitional drive is, on the whole, relatively weak, however, and the pedestrian will be able to refocus their attention on the path and other items in the park as they pass the ad.
Just as the present document contributes several findings that support the attentional behaviours described in the above scenario, there are of course pieces of this scene that the current work cannot address. For example, it is unclear whether any of the (small number of) fixations directed to a real person would be non-volitionally biased towards their eyes in the same way as was observed with images (Chapters 5, 6). Similarly, while Chapter 3's findings would suggest that the motion signals provided in real life (e.g. the woman looking up and smiling) are insufficient by themselves to non-volitionally capture overt attention, they may serve to initially capture covert attention non-volitionally (perhaps in a way that is more substantive or consistent than what is observed with non-social motion signals; Downing, Bray, Rogers, & Childs, 2004).
Broadly speaking, however, the present findings suggest that 1) the level of interaction available within a given context will alter an observer's willingness to engage overtly with another person, and 2) the decision to direct attention overtly versus covertly may depend on the specificity of selection (e.g., to the person, the face, or the features). Whereas covert attention may be sufficient to detect the presence of another and allow the observer to distinguish some types of actions (Chapter 3), overt attention likely plays an important role when attention is directed to a person, as foveation would allow for greater processing of detail that may be critical for other social cognitive tasks, such as face recognition and learning (Chapter 6). Given that the eyes are rich sources of detailed information, it may be reasonable that overt orienting to that feature would be partially beyond the observer's control. In other words, whereas the transition from covert to overt orienting may be more easily controlled (Chapters 2, 3), once a person is selected, the observer may lose a degree of control over where their fixations are directed (Chapters 5, 6).

7.5 Further implications

Several implications have already been discussed in the above examination of the four questions set forth in Chapter 1. For example, the results of the present document carry implications for how social attentional deployment might be influenced by the potential for interaction, and how different stages of selection may rely on different modes of orienting. The following focuses on an additional important consideration to come out of the presented studies.

7.5.1 The role of covert attention and the value of inhibition

One of the most novel and important findings to come out of this document is the proposition that covert attention serves a social function.
Traditionally, covert attention has been of little interest to vision scientists except as it relates to oculomotor behaviour. It is not a stretch to say that covert attention is often considered an artifact of the paradigms used within the lab: a tool by which oculomotor and attentional effects can be dissociated. It is known that covert attention precedes overt shifts of attention (Hoffman & Subramaniam, 1995; M. S. Peterson et al., 2004), and it has been demonstrated in reading tasks to facilitate fluid information processing (McConkie & Rayner, 2013), but there has been little consideration of why humans may have retained the ability to dissociate their eye position from their attentional focus. The argument presented herein thus represents a major reconsideration of the utility of a poorly understood visual phenomenon. Rather than merely a by-product of oculomotor action, as was once presumed (Rizzolatti et al., 1987, 1994), I suggest that covert attention is used in service of successful social interaction by allowing people to discreetly attend to others nearby in order to quietly assess intentionality, and thus to adapt their own behaviour to best respond or align with the actions of others.
The push to effectively communicate with others shaped the morphology of human eyes, and sacrificed one's ability to camouflage looking behaviour. However, a fascinating consequence of developing communicative gaze is that people appear to have come to rely on an alternate method of attentional camouflage: covert attention. Chapter 3 demonstrates that the use of overt attention is constrained by its innate ability to communicate information to the person being attended to. In situations where communication is unwanted, people instead rely on covert deployment.
It is perhaps misleading to suggest that only looking behaviour serves to signal information to others, however.
A complementary idea is that not looking may be as important to social attention as its counterpart. To covertly attend to another and then withhold a look may signal several things to someone nearby. For example, in Chapter 2 participants may have avoided looking at the confederate either as a way of affording the confederate privacy, or as a way of signaling that they did not want to engage further. In a similar way, the avoidance of looks to the confederate in Chapter 3 could represent both an unwillingness to interact (as in the ‘baseline’ condition) and a desire to provide privacy (in the ‘phone’ condition). In other contexts or for certain people, avoiding looking at others can signal anxiety, fear, or social rank (Foulsham et al., 2010; Moukheiber et al., 2010). Except for cases in which not looking truly reflects a lack of attention (consider, for example, people too far away to influence one's own behaviour, or crowds in which attending to everyone at once would be too resource-intensive), one's ability to not look is tied up with the ability to discreetly attend in the first place.

7.6 Future directions

This thesis should be considered an exploration of how people orient their social attention and a demonstration of how both context and scale can influence orienting behaviour. Through this, I have provided support for the proposal that in order to best understand how social attention operates, it must be studied across many levels. If there is one major limitation of the presented document, it is that the work presented herein only scratches the surface of a much larger topic waiting to be explored. For instance, in Chapter 1, I laid out a social attention space consisting of four quadrants within which I could study how the style of orienting behaviour changes (Figure 1.1).
Though I have detailed investigations involving all quadrants, one or two studies cannot begin to capture the complexities involved in how social attention operates within each action space. As an example, the demonstration in Chapter 4 that covert attention is shifted to cropped faces does not preclude the possibility that covert attention may operate differently when presented with another social representation, for instance, a video of a realistic and dynamic social scene. There are many questions left unanswered, and the purpose of this thesis was not to try to address them all but to highlight an important issue in the study of social attention and demonstrate the flexibility with which people commit attention. Based loosely on themes that emerged from the experimental chapters presented, I focus now on a selection of future areas of interest whose study could provide insight into social attentional orienting.

Though this thesis focuses primarily on attentional deployment, how this deployment might reveal underlying differences in social cognitive processes is also very relevant. For instance, the variation of attention across different social contexts might be viewed as an important variable rather than as a source of 'noise' to be removed from responses of interest. As an example, in their paper arguing for a distinction between how the brain reacts to instances of social observation versus how it responds to true social interaction, Tylén, Allen, Hunter, and Roepstorff (2012) reported that even when fMRI results were corrected for eye movements during testing, differences in brain responses across conditions were still observed. However, the results after this correction are admittedly much weaker than beforehand, and the eye-tracking findings separately show changes across conditions that may be related to attentional effort (i.e., in pupil dilation; Smallwood et al., 2011; Wierda, van Rijn, Taatgen, & Martens, 2012).
Rather than being a source of contamination, changes in eye movements, and attention more generally, may allow researchers to disambiguate underlying differences in social cognitive processes, based, for instance, on variations in social engagement or task realism.

The results from Chapter 2 are particularly relevant to this claim, but cannot conclusively demonstrate that attention can be used as a measure of differences in how people conceptualize interactive (or potentially interactive) and observational social tasks. On the one hand, the potential to interact could instigate the shift from what Tylén and colleagues (2012) call an observational to an interactive stance, thereby eliciting both changes in social cognitive neural processing and social attentional behaviour (see also Chatel-Goldman, Schwartz, Jutten, & Congedo, 2013, and Schilbach et al., 2013, for a similar discussion). However, this is not strictly necessary to explain the results from Chapter 2, considering that participants did not formally interact with the confederate. In these in-between instances in which interaction is possible but participants are not strictly engaged, can changes in social attentional orienting be considered evidence of a shift in the way participants are thinking about the other person? An alternative view is that social attentional behaviour flexibly adapts to the needs of the observer, even without engaging additional processes involved in thinking about others.

Put simply, the relationship between social attentional orienting and other social cognitive processes involved in understanding other people is as yet poorly understood, and ripe for future study.
The effects of putative couplings between attentional orienting and socio-cognitive processes may be of particular interest in studies of clinical populations, such as those with high-functioning ASD, in which social attention differs markedly from that of individuals without ASD (e.g., Dawson, Webb, & McPartland, 2005; Jones & Klin, 2013; Klin et al., 2002). Schilbach (2013) describes individuals with ASD as reporting that they are comfortable in an observational role within social environments, but feel overwhelmed once engaged in interactions with others who do not share their perspective. If divergences in social attentional behaviour link to differences in one mode of social processing but not another, it might not only explain these subjective experiences, but also help in defining the underlying characteristics of social deficiencies in clinical populations.

While different processes may underlie orienting in different contexts, it is also pertinent to examine whether (or, more likely, how) cultural or individual differences might also impact the use of social attention across lab and real-life settings. Though these differences were not thoroughly examined in the present thesis, they are relevant to the question of the generalizability of social attentional behaviours. Previously, it has been shown that individuals from different cultures will focus on different facial features when examining a face (Blais et al., 2008; Jack, Blais, Scheepers, Fiset, & Caldara, 2007), and that natural looking behaviour to pedestrians differs based on whether it is observed in Western or East Asian locations (Patterson et al., 2007). Some would argue that cultural differences cause lasting changes in the brain's function (e.g., Park & Huang, 2010), but it is unclear whether this might fundamentally change the draw of social stimuli to the individual (i.e.
make people more or less attentionally attractive) or rather would simply change the rules individuals follow when it comes to displaying the focus of attention to social stimuli (Caldara et al., 2010).

Unlike culture's influence, individual variation in social cognitive abilities may have strong modulating effects on orienting behaviour that are not reliant on one's understanding of social norms. For instance, if one's awareness of gaze as a signal is critical to modifying the way in which people orient their attention, then this behaviour may also be dependent on one's strengths in mentalizing (Frith, Morton, & Leslie, 1991; also sometimes referred to as Theory of Mind, Premack & Woodruff, 1978). Even if one were aware that looking at a stranger was considered rude, a reduced ability to infer the mental states of another could limit one's ability to spontaneously adopt covert orienting behaviours in order to adhere to these social norms. Similar to the idea that different processes underlie attentional allocation in different contexts, the study of individual differences may reveal that attention is also influenced by personal variations in activating these processes.

A final point related to the findings presented in this thesis concerns why people show strong preferences to attend to others, even when the person is not real or cannot interact. This is an issue that deserves more than a cursory paragraph of exploration, but briefly, it seems fruitful to consider whether social stimuli have, over time (both developmentally and evolutionarily), been ascribed a higher value than other commonly encountered object categories, possibly due to their historical association with reward (e.g., B. A. Anderson et al., 2011b; B. A. Anderson, Laurent, & Yantis, 2012; B. A. Anderson & Yantis, 2013).
The idea that the ‘value’ of an object prioritizes attention has been demonstrated using non-social manipulations, and its effects have been shown to persist over longer time periods (B. A. Anderson & Yantis, 2013). Arguing against a purely reward-based valuation of social stimuli, however, are findings demonstrating that negative or threatening social stimuli are also preferentially attended to (Schmidt et al., 2012), even more so by those with social anxiety (Staugaard, 2010). To complement a reward-based explanation, it may be fruitful to broaden the definition of ‘value’ to include associations with personal motivation, which may vary depending on the individual’s current goals and needs, as well as with an individual’s internal affective state (e.g., Madan, 2013). Future work would benefit from considering whether this associated value, be it reward-, motivation-, or affect-based (or, more likely, an interaction of these), has served to 'attune' the attentional system toward social stimuli. In this way, a social attentional bias might reflect an evolutionarily adaptive means of rapidly 'filtering' the cluttered visual environment, by pre-selecting what has historically been most important: other people (see Todd et al., 2012, for a similar proposal concerning affect-based attention).

References

Abrams, R. A., & Christ, S. E. (2003). Motion onset captures attention. Psychological Science, 14(5), 427–432. http://doi.org/10.1111/1467-9280.01458
Al-Aidroos, N., & Pratt, J. (2010). Top-down control in time and space: Evidence from saccadic latencies and trajectories. Visual Cognition, 18(1), 26–49. http://doi.org/10.1080/13506280802456939
American Psychiatric Association. (2000). Diagnostic and statistical manual of mental disorders (4th ed.).
Anderson, B. A. (2013). A value-driven mechanism of attentional selection. Journal of Vision, 13(3), 1–16. http://doi.org/10.1167/13.3.7
Anderson, B.
A., Laurent, P. A., & Yantis, S. (2013). Reward predictions bias attentional selection. Frontiers in Human Neuroscience, 7. http://doi.org/10.3389/fnhum.2013.00262
Anderson, B. A., Laurent, P. A., & Yantis, S. (2011a). Learned value magnifies salience-based attentional capture. PLoS ONE, 6(11), e27926. http://doi.org/10.1371/journal.pone.0027926
Anderson, B. A., Laurent, P. A., & Yantis, S. (2011b). Value-driven attentional capture. Proceedings of the National Academy of Sciences of the U.S.A., 108(25), 10367–10371. http://doi.org/10.1073/pnas.1104047108
Anderson, B. A., Laurent, P. A., & Yantis, S. (2012). Generalization of value-based attentional priority. Visual Cognition, 20(6), 647–658. http://doi.org/10.1080/13506285.2012.679711
Anderson, B. A., & Yantis, S. (2013). Persistence of value-driven attentional capture. Journal of Experimental Psychology: Human Perception and Performance, 39(1), 6–9. http://doi.org/10.1037/a0030860
Anderson, N. C., Risko, E. F., & Kingstone, A. (2011). Exploiting human sensitivity to gaze for tracking the eyes. Behavior Research Methods, 43(3), 843–852. http://doi.org/10.3758/s13428-011-0078-8
Ando, S. (2002). Luminance-induced shift in the apparent direction of gaze. Perception, 31(6), 657–674.
Argyle, M., & Cook, M. (1976). Gaze and mutual gaze. Cambridge, England: Cambridge University Press.
Argyle, M., Lefebvre, L., & Cook, M. (1974). The meaning of five patterns of gaze. European Journal of Social Psychology, 4(2), 125–136. http://doi.org/10.1002/ejsp.2420040202
Bailenson, J. N., Blascovich, J., Beall, A. C., & Loomis, J. M. (2001). Equilibrium Theory revisited: Mutual gaze and personal space in virtual environments. Presence: Teleoperators and Virtual Environments. http://doi.org/10.1162/105474601753272844
Bargh, J. A. (1994). The four horsemen of automaticity: Awareness, efficiency, intention, and control in social cognition. In R. S. J. Wyer & T. K.
Srull (Eds.), Handbook of social cognition (2nd ed., Vol. 1, pp. 1–40). Hillsdale, NJ: Erlbaum.
Baron-Cohen, S. (1995). Mindblindness: An essay on autism and theory of mind. Cambridge, MA: MIT Press.
Baron-Cohen, S., Wheelwright, S., & Jolliffe, T. (1997). Is there a “Language of the Eyes”? Evidence from normal adults, and adults with autism or Asperger syndrome. Visual Cognition, 4, 311–331. http://doi.org/10.1080/713756761
Baron-Cohen, S., Wheelwright, S., Skinner, R., Martin, J., & Clubley, E. (2001). The autism-spectrum quotient (AQ): Evidence from Asperger syndrome/high-functioning autism, males and females, scientists and mathematicians. Journal of Autism and Developmental Disorders, 31(1), 5–17. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/11439754
Bateson, M., Callow, L., Holmes, J. R., Redmond Roche, M. L., & Nettle, D. (2013). Do images of “watching eyes” induce behaviour that is more pro-social or more normative? A field experiment on littering. PLoS ONE, 8(12), e82055. http://doi.org/10.1371/journal.pone.0082055
Bell, A. H., Meredith, M. A., Van Opstal, A. J., & Munoz, D. P. (2006). Stimulus intensity modifies saccadic reaction time and visual response latency in the superior colliculus. Experimental Brain Research, 174, 53–59. http://doi.org/10.1007/s00221-006-0420-z
Belopolsky, A. V., & Theeuwes, J. (2012). Updating the premotor theory: The allocation of attention is not always accompanied by saccade preparation. Journal of Experimental Psychology: Human Perception and Performance, 38(4), 902–914. http://doi.org/10.1037/a0028662
Belopolsky, A. V., & Van der Stigchel, S. (2013). Saccades curve away from previously inhibited locations: Evidence for the role of priming in oculomotor competition. Journal of Neurophysiology, 110(10), 2370–2377. http://doi.org/10.1152/jn.00293.2013
Bindemann, M. (2010). Scene and screen center bias early eye movements in scene viewing. Vision Research, 50(23), 2577–2587.
http://doi.org/10.1016/j.visres.2010.08.016
Bindemann, M., & Burton, A. M. (2008). Attention to upside-down faces: An exception to the inversion effect. Vision Research, 48(25), 2555–2561. http://doi.org/10.1016/j.visres.2008.09.001
Bindemann, M., Burton, A. M., Hooge, I. T. C., Jenkins, R., & de Haan, E. H. F. (2005). Faces retain attention. Psychonomic Bulletin & Review, 12(6), 1048–1053. http://doi.org/10.3758/BF03206442
Bindemann, M., Scheepers, C., & Burton, A. M. (2009). Viewpoint and center of gravity affect eye movements to human faces. Journal of Vision, 9(2), 7. http://doi.org/10.1167/9.2.7
Bindemann, M., Scheepers, C., Ferguson, H. J., & Burton, A. M. (2010). Face, body, and center of gravity mediate person detection in natural scenes. Journal of Experimental Psychology: Human Perception and Performance, 36(6), 1477–1485. http://doi.org/10.1037/a0019057
Birmingham, E., Bischof, W. F., & Kingstone, A. (2007). Why do we look at people’s eyes? Journal of Eye Movement Research, 1(1), 1–6.
Birmingham, E., Bischof, W. F., & Kingstone, A. (2008a). Gaze selection in complex social scenes. Visual Cognition, 16(2), 341–355. http://doi.org/10.1080/13506280701434532
Birmingham, E., Bischof, W. F., & Kingstone, A. (2008b). Social attention and real-world scenes: The roles of action, competition and social content. The Quarterly Journal of Experimental Psychology, 61(7), 986–998. http://doi.org/10.1080/17470210701410375
Birmingham, E., Bischof, W. F., & Kingstone, A. (2009a). Get real! Resolving the debate about equivalent social stimuli. Visual Cognition, 17(6), 904–924. http://doi.org/10.1080/13506280902758044
Birmingham, E., Bischof, W. F., & Kingstone, A. (2009b). Saliency does not account for fixations to eyes within social scenes. Vision Research, 49(24), 2992–3000. http://doi.org/10.1016/j.visres.2009.09.014
Blais, C., Jack, R. E., Scheepers, C., Fiset, D., & Caldara, R. (2008). Culture shapes how we look at faces. PLoS ONE, 3(8), e3022.
http://doi.org/10.1371/journal.pone.0003022
Blais, C., Roy, C., Fiset, D., Arguin, M., & Gosselin, F. (2012). The eyes are not the window to basic emotions. Neuropsychologia, 50(12), 2830–2838. http://doi.org/10.1016/j.neuropsychologia.2012.08.010
Bolmont, M., Cacioppo, J. T., & Cacioppo, S. (2014). Love is in the gaze: An eye-tracking study of love and sexual desire. Psychological Science. http://doi.org/10.1177/0956797614539706
Braeutigam, S., Bailey, A. J., & Swithenby, S. J. (2001). Task-dependent early latency (30–60 ms) visual processing of human faces and other objects. Neuroreport, 12(7), 1531–1536. http://doi.org/10.1097/00001756-200105250-00046
Brassen, S., Gamer, M., Rose, M., & Büchel, C. (2010). The influence of directed covert attention on emotional face processing. NeuroImage, 50(2), 545–551. http://doi.org/10.1016/j.neuroimage.2009.12.073
Brown, G. S., & White, K. G. (2005). The optimal correction for estimating extreme discriminability. Behavior Research Methods, 37(3), 436–449. http://doi.org/10.3758/BF03192712
Buswell, G. T. (1935). How people look at pictures: A study of the psychology of perception in art. Chicago: University of Chicago Press.
Cabel, D. W. J., Armstrong, I. T., Reingold, E., & Munoz, D. P. (2000). Control of saccade initiation in a countermanding task using visual and auditory stop signals. Experimental Brain Research, 133(4), 431–441. http://doi.org/10.1007/s002210000440
Caldara, R., Zhou, X., & Miellet, S. (2010). Putting culture under the “spotlight” reveals universal information use for face recognition. PLoS ONE, 5(3), e9708. http://doi.org/10.1371/journal.pone.0009708
Carmel, D., Fairnie, J., & Lavie, N. (2012). Weight and see: Loading working memory improves incidental identification of irrelevant faces. Frontiers in Psychology, 3, 286. http://doi.org/10.3389/fpsyg.2012.00286
Cary, M. (1978a). Does civil inattention exist in pedestrian passing?
Journal of Personality and Social Psychology, 36(11), 1185–1193. http://doi.org/10.1037/0022-3514.36.11.1185
Cary, M. (1978b). The role of gaze in the initiation of conversation. Social Psychology, 41(3), 269–271. http://doi.org/10.2307/3033565
Casarotti, M., Lisi, M., Umiltà, C., & Zorzi, M. (2012). Paying attention through eye movements: A computational investigation of the premotor theory of spatial attention. Journal of Cognitive Neuroscience, 24(7), 1519–1531. http://doi.org/10.1162/jocn_a_00231
Charman, T., Swettenham, J., Baron-Cohen, S., Cox, A., Baird, G., & Drew, A. (1997). Infants with autism: An investigation of empathy, pretend play, joint attention, and imitation. Developmental Psychology, 33(5), 781–789. http://doi.org/10.1037/0012-1649.33.5.781
Chatel-Goldman, J., Schwartz, J.-L., Jutten, C., & Congedo, M. (2013). Non-local mind from the perspective of social cognition. Frontiers in Human Neuroscience, 7, 107. http://doi.org/10.3389/fnhum.2013.00107
Connellan, J., Baron-Cohen, S., Batki, A., & Ahluwalia, J. (2000). Sex differences in human neonatal social perception. Infant Behavior and Development, 23(1), 113–118. http://doi.org/10.1016/S0163-6383(00)00032-1
Conty, L., Tijus, C., Hugueville, L., Coelho, E., & George, N. (2006). Searching for asymmetries in the detection of gaze contact versus averted gaze under different head views: A behavioural study. Spatial Vision, 19(6), 529–545. http://doi.org/10.1163/156856806779194026
Cook, M. (1977). Gaze and mutual gaze in social encounters: How long—and when—we look others “in the eye” is one of the main signals in nonverbal communication. American Scientist, 65(3), 328–333. http://doi.org/10.2307/27847843
Crouzet, S. M., Kirchner, H., & Thorpe, S. J. (2010). Fast saccades toward faces: Face detection in just 100 ms. Journal of Vision, 10(4), 16. http://doi.org/10.1167/10.4.16
Crundall, D., & Underwood, G. (2008).
Some practical constraints on Cognitive Ethology: Striking the balance between a theoretical approach and a practical methodology. British Journal of Psychology, 99(3), 341–345. http://doi.org/10.1348/000712608X283788
Dahl, C. D., Logothetis, N. K., Bülthoff, H. H., & Wallraven, C. (2010). The Thatcher illusion in humans and monkeys. Proceedings of the Royal Society of London B: Biological Sciences, 277(1696), 2973–2981. http://doi.org/10.1098/rspb.2010.0438
Dalton, K. M., Nacewicz, B. M., Johnstone, T., Schaefer, H. S., Gernsbacher, M. A., Goldsmith, H. H., … Davidson, R. J. (2005). Gaze fixation and the neural circuitry of face processing in autism. Nature Neuroscience, 8(4), 519–526. http://doi.org/10.1038/nn1421
Dawson, G., Meltzoff, A. N., Osterling, J., Rinaldi, J., & Brown, E. (1998). Children with autism fail to orient to naturally occurring social stimuli. Journal of Autism and Developmental Disorders, 28(6), 479–485. http://doi.org/10.1023/A:1026043926488
Dawson, G., Webb, S. J., & McPartland, J. (2005). Understanding the nature of face processing impairment in autism: Insights from behavioral and electrophysiological studies. Developmental Neuropsychology, 27(3), 403–424. http://doi.org/10.1207/s15326942dn2703_6
De Marco, A., Sanna, A., Cozzolino, R., & Thierry, B. (2014). The function of greetings in male Tonkean macaques. American Journal of Primatology, 76(10), 989–998. http://doi.org/10.1002/ajp.22288
Desimone, R., & Duncan, J. (1995). Neural mechanisms of selective visual attention. Annual Review of Neuroscience, 18(1), 193–222. http://doi.org/10.1146/annurev.ne.18.030195.001205
Deubel, H., & Schneider, W. X. (2003). Delayed saccades, but not delayed manual aiming movements, require visual attention shifts. Annals of the New York Academy of Sciences, 1004(1), 289–296. http://doi.org/10.1196/annals.1303.026
Devue, C., Belopolsky, A. V., & Theeuwes, J. (2012). Oculomotor guidance and capture by irrelevant faces. PLoS ONE, 7(4), e34598.
http://doi.org/10.1371/journal.pone.0034598
Devue, C., Laloyaux, C., Feyers, D., Theeuwes, J., & Brédart, S. (2009). Do pictures of faces, and which ones, capture attention in the inattentional-blindness paradigm? Perception, 38, 552–568. http://doi.org/10.1068/p6049
Downing, P. E., Bray, D., Rogers, J., & Childs, C. (2004). Bodies capture attention when nothing is expected. Cognition, 93(1). http://doi.org/10.1016/j.cognition.2003.10.010
Doyle, M. C., & Walker, R. (2001). Curved saccade trajectories: Voluntary and reflexive saccades curve away from irrelevant distractors. Experimental Brain Research, 139(3), 333–344. http://doi.org/10.1007/s002210100742
Driver, J., Davis, G., Ricciardelli, P., Kidd, P., Maxwell, E., & Baron-Cohen, S. (1999). Gaze perception triggers reflexive visuospatial orienting. Visual Cognition, 6(5), 509–540. http://doi.org/10.1080/135062899394920
Duranti, A. (1997). Universal and culture-specific properties of greetings. Journal of Linguistic Anthropology, 7(1), 63–97. http://doi.org/10.1525/jlin.1997.7.1.63
Eisenbarth, H., & Alpers, G. W. (2011). Happy mouth and sad eyes: Scanning emotional facial expressions. Emotion, 11(4), 860–865. http://doi.org/10.1037/a0022758
Ellsworth, P. C., Carlsmith, J. M., Henson, A., Austin, J., Mower, J., Payne, J., … Haven, N. (1972). The stare as a stimulus to flight in human subjects: A series of field experiments. Journal of Personality and Social Psychology, 21(3), 302–311. http://doi.org/10.1037/h0032323
Emery, N. J. (2000). The eyes have it: The neuroethology, function and evolution of social gaze. Neuroscience and Biobehavioral Reviews, 24(6), 581–604. http://doi.org/10.1016/S0149-7634(00)00025-7
Engbert, R., & Kliegl, R. (2003). Microsaccades uncover the orientation of covert attention. Vision Research, 43(9), 1035–1045. http://doi.org/10.1016/S0042-6989(03)00084-1
Eriksen, C. W., & Yeh, Y. Y. (1985). Allocation of attention in the visual field.
Journal of Experimental Psychology: Human Perception & Performance, 11(5), 583–597.
Everling, S., & Fischer, B. (1998). The antisaccade: A review of basic research and clinical studies. Neuropsychologia, 36(9), 885–899.
Farah, M. J., Tanaka, J. W., & Drain, H. M. (1995). What causes the face inversion effect? Journal of Experimental Psychology: Human Perception and Performance, 21(3), 628–634. http://doi.org/10.1037/0096-1523.21.3.628
Farroni, T., Csibra, G., Simion, F., & Johnson, M. H. (2002). Eye contact detection in humans from birth. Proceedings of the National Academy of Sciences of the United States of America, 99(14), 9602–9605. http://doi.org/10.1073/pnas.152159999
Farroni, T., Johnson, M. H., & Csibra, G. (2004). Mechanisms of eye gaze perception during infancy. Journal of Cognitive Neuroscience, 16(8), 1320–1326. http://doi.org/10.1162/0898929042304787
Farroni, T., Johnson, M. H., Menon, E., Zulian, L., Faraguna, D., & Csibra, G. (2005). Newborns’ preference for face-relevant stimuli: Effects of contrast polarity. Proceedings of the National Academy of Sciences of the United States of America, 102(47), 17245–17250. http://doi.org/10.1073/pnas.0502205102
Farroni, T., Massaccesi, S., Menon, E., & Johnson, M. H. (2007). Direct gaze modulates face recognition in young infants. Cognition, 102(3), 396–404. http://doi.org/10.1016/j.cognition.2006.01.007
Fecteau, J. H., & Munoz, D. P. (2006). Salience, relevance, and firing: A priority map for target selection. Trends in Cognitive Sciences, 10(8), 382–390. http://doi.org/10.1016/j.tics.2006.06.011
Field, A. (2013). Discovering statistics using IBM SPSS statistics (pp. 297–321). London: Sage.
Findlay, J. M. (2004). Eye scanning and visual search. In J. M. Henderson & F. Ferreira (Eds.), The interface of language, vision, and action: Eye movements and the visual world (pp. 135–159). Psychology Press.
Fletcher-Watson, S., Findlay, J. M., Leekam, S. R., & Benson, V. (2008). Rapid detection of person information in a naturalistic scene. Perception, 37(4), 571–583. http://doi.org/10.1068/p5705
Fletcher-Watson, S., Leekam, S. R., Benson, V., Frank, M. C., & Findlay, J. M. (2009). Eye-movements reveal attention to social information in autism spectrum disorder. Neuropsychologia, 47(1), 248–257. http://doi.org/10.1016/j.neuropsychologia.2008.07.016
Folk, C. L., Remington, R. W., & Johnston, J. C. (1992). Involuntary covert orienting is contingent on attentional control settings. Journal of Experimental Psychology: Human Perception and Performance, 18(4), 1030–1044. http://doi.org/10.1037/0096-1523.18.4.1030
Fotios, S., Yang, B., & Uttley, J. (2015). Observing other pedestrians: Investigating the typical distance and duration of fixation. Lighting Research and Technology, 47(5), 548–564. http://doi.org/10.1177/1477153514529299
Foulsham, T., Cheng, J. T., Tracy, J. L., Henrich, J., & Kingstone, A. (2010). Gaze allocation in a dynamic situation: Effects of social status and speaking. Cognition, 117(3), 319–331. http://doi.org/10.1016/j.cognition.2010.09.003
Foulsham, T., & Underwood, G. (2009). Does conspicuity enhance distraction? Saliency and eye landing position when searching for objects. Quarterly Journal of Experimental Psychology, 62(6), 1088–1098. http://doi.org/10.1080/17470210802602433
Foulsham, T., Walker, E., & Kingstone, A. (2011). The where, what and when of gaze allocation in the lab and the natural environment. Vision Research, 51(17), 1920–1931. http://doi.org/10.1016/j.visres.2011.07.002
Franconeri, S. L., & Simons, D. J. (2003). Moving and looming stimuli capture attention. Perception & Psychophysics, 65(7), 999–1010. http://doi.org/10.3758/BF03194829
Freeth, M., Foulsham, T., & Kingstone, A. (2013). What affects social attention? Social presence, eye contact and autistic traits. PLoS ONE, 8(1), e53286.
http://doi.org/10.1371/journal.pone.0053286
Freire, A., Lee, K., & Symons, L. A. (2000). The face-inversion effect as a deficit in the encoding of configural information: Direct evidence. Perception, 29(2), 159–170. http://doi.org/10.1068/p3012
Friesen, C. K., & Kingstone, A. (1998). The eyes have it! Reflexive orienting is triggered by nonpredictive gaze. Psychonomic Bulletin & Review, 5(3), 490–495. http://doi.org/10.3758/BF03208827
Frischen, A., Bayliss, A. P., & Tipper, S. P. (2007). Gaze cueing of attention: Visual attention, social cognition, and individual differences. Psychological Bulletin, 133(4), 694–724. http://doi.org/10.1037/0033-2909.133.4.694
Frith, U., Morton, J., & Leslie, A. M. (1991). The cognitive basis of a biological disorder: Autism. Trends in Neurosciences, 14(10), 433–438. http://doi.org/10.1016/0166-2236(91)90041-R
Gallup, A. C., Chong, A., & Couzin, I. D. (2012). The directional flow of visual information transfer between pedestrians. Biology Letters, 8(4), 520–522. http://doi.org/10.1098/rsbl.2012.0160
Gallup, A. C., Hale, J. J., Sumpter, D. J. T., Garnier, S., Kacelnik, A., Krebs, J. R., & Couzin, I. D. (2012). Visual attention and the acquisition of information in human crowds. Proceedings of the National Academy of Sciences of the United States of America, 109(19), 7245–7250. http://doi.org/10.1073/pnas.1116141109
Gamer, M., & Hecht, H. (2007). Are you looking at me? Measuring the cone of gaze. Journal of Experimental Psychology: Human Perception and Performance, 33(3), 705–715. http://doi.org/10.1037/0096-1523.33.3.705
Gauthier, I., & Tarr, M. J. (1997). Becoming a “Greeble” expert: Exploring mechanisms for face recognition. Vision Research, 37(12), 1673–1682. http://doi.org/10.1016/S0042-6989(96)00286-6
Gazzaley, A., Cooney, J. W., McEvoy, K., Knight, R. T., & D’Esposito, M. (2005). Top-down enhancement and suppression of the magnitude and speed of neural activity. Journal of Cognitive Neuroscience, 17(3), 507–517.
http://doi.org/10.1162/0898929053279522
Gillath, O., McCall, C., Shaver, P. R., & Blascovich, J. (2008). What can virtual reality teach us about prosocial tendencies in real and virtual environments? Media Psychology, 11(2), 259–282. http://doi.org/10.1080/15213260801906489
Girden, E. R. (1992). ANOVA: Repeated measures. Newbury Park, CA: Sage Publications.
Gobel, M. S., Kim, H. S., & Richardson, D. C. (2015). The dual function of social gaze. Cognition, 136, 359–364. http://doi.org/10.1016/j.cognition.2014.11.040
Godijn, R., & Theeuwes, J. (2002). Programming of endogenous and exogenous saccades: Evidence for a competitive integration model. Journal of Experimental Psychology: Human Perception & Performance, 28(5), 1039–1054. http://doi.org/10.1037/0096-1523.28.5.1039
Goffman, E. (1963). Behavior in public places: Notes on the social organization of gatherings. New York: The Free Press.
Goffman, E. (1971). Relations in public: Microstudies of the public order. New York: Basic Books.
Goren, C. C., Sarty, M., & Wu, P. Y. (1975). Visual following and pattern discrimination of face-like stimuli by newborn infants. Pediatrics, 56(4), 544–549.
Gosselin, F., & Schyns, P. G. (2001). Bubbles: A technique to reveal the use of information in recognition tasks. Vision Research, 41(17), 2261–2271. http://doi.org/10.1016/S0042-6989(01)00097-9
Guerin, B. (1986). Mere presence effects in humans: A review. Journal of Experimental Social Psychology, 22(1), 38–77. http://doi.org/10.1016/0022-1031(86)90040-5
Guo, K., Robertson, R. G., Mahmoodi, S., Tadmor, Y., & Young, M. P. (2003). How do monkeys view faces? A study of eye movements. Experimental Brain Research, 150(3), 363–374. http://doi.org/10.1007/s00221-003-1429-1
Hafed, Z. M., & Clark, J. J. (2002). Microsaccades as an overt measure of covert attention shifts. Vision Research, 42(22), 2533–2545. http://doi.org/10.1016/S0042-6989(02)00263-8
Hall, E. T. (1968). Proxemics. Current Anthropology, 9(2/3), 83–108.
http://doi.org/10.1086/200975
Henderson, J. M. (2003). Human gaze control during real-world scene perception. Trends in Cognitive Sciences, 7(11), 498–504. http://doi.org/10.1016/j.tics.2003.09.006
Henderson, J. M., Falk, R. J., Minut, S., Dyer, F. C., & Mahadevan, S. (2001). Gaze control for face learning and recognition in humans and machines. In T. Shipley & P. Kellman (Eds.), From fragments to objects: Segmentation processes in vision (pp. 463–481). New York: Elsevier.
Henderson, J. M., Williams, C. C., & Falk, R. J. (2005). Eye movements are functional during face learning. Memory & Cognition, 33(1), 98–106. http://doi.org/10.3758/BF03195300
Hermens, F., & Walker, R. (2010). Gaze and arrow distractors influence saccade trajectories similarly. Quarterly Journal of Experimental Psychology, 63(11), 2120–2140. http://doi.org/10.1080/17470211003718721
Hietanen, J. K., Nummenmaa, L., Nyman, M. J., Parkkola, R., & Hämäläinen, H. (2006). Automatic attention orienting by social and symbolic cues activates different neural networks: An fMRI study. NeuroImage, 33(1), 406–413. http://doi.org/10.1016/j.neuroimage.2006.06.048
Hoffman, J. E., & Subramaniam, B. (1995). The role of visual attention in saccadic eye movements. Perception & Psychophysics, 57, 787–795. http://doi.org/10.3758/BF03206794
Honda, H. (2005). The remote distractor effect of saccade latencies in fixation-offset and overlap conditions. Vision Research, 45(21), 2773–2779. http://doi.org/10.1016/j.visres.2004.06.026
Hood, B. M., Macrae, C. N., Cole-Davies, V., & Dias, M. (2003). Eye remember you: The effects of gaze direction on face recognition in children and adults. Developmental Science, 6(1), 67–71. http://doi.org/10.1111/1467-7687.00256
Hsiao, J. H. W., & Cottrell, G. (2008). Two fixations suffice in face recognition. Psychological Science, 19(10), 998–1006. http://doi.org/10.1111/j.1467-9280.2008.02191.x
Hunt, A. R., & Kingstone, A. (2003a).
Covert and overt voluntary attention: Linked or independent? Cognitive Brain Research, 18(1), 102–105. http://doi.org/10.1016/j.cogbrainres.2003.08.006 Hunt, A. R., & Kingstone, A. (2003b). Inhibition of return: Dissociating attentional and oculomotor components. Journal of Experimental Psychology: Human Perception and Performance, 29(5), 1068–1074. http://doi.org/10.1037/0096-1523.29.5.1068 Itier, R. J., Alain, C., Sedore, K., & McIntosh, A. R. (2007). Early face processing specificity: It’s in the eyes! Journal of Cognitive Neuroscience, 19(11), 1815–1826. http://doi.org/10.1162/jocn.2007.19.11.1815 Itier, R. J., Villate, C., & Ryan, J. D. (2007). Eyes always attract attention but gaze orienting is task-dependent: Evidence from eye movement monitoring. Neuropsychologia, 45(5), 1019–1028. http://doi.org/10.1016/j.neuropsychologia.2006.09.004 Itti, L., & Koch, C. (2000). A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research, 40(10), 1489–1506. http://doi.org/10.1016/S0042-6989(99)00163-7 Jack, R. E., Blais, C., Scheepers, C., Fiset, D., & Caldara, R. (2007). Culture shapes eye movements during face identification. Journal of Vision, 7(9), 573. http://doi.org/10.1167/7.9.573 Jacoby, L. L. (1991). A process dissociation framework: Separating automatic from intentional uses of memory. Journal of Memory and Language, 30(5), 513–541. http://doi.org/10.1016/0749-596X(91)90025-F Janik, S. W., Wellens, A. R., Goldberg, M. L., & Dell’Osso, L. F. (1978). Eyes as the center of focus in the visual examination of human faces. Perceptual and Motor Skills, 47(3), 857–858. http://doi.org/10.2466/pms.1978.47.3.857 Jasso, H., & Triesch, J. (2008). Attention in cognitive systems: Theories and systems from an interdisciplinary viewpoint. In Attention in Cognitive Systems (Vol. 4840, pp. 106–122). http://doi.org/10.1007/978-3-540-77343-6 Johnson, M. H. (2005).
Subcortical face processing. Nature Reviews Neuroscience, 6(10), 766–774. http://doi.org/10.1038/nrn1766 Johnson, M. H., Dziurawiec, S., Ellis, H., & Morton, J. (1991). Newborns’ preferential tracking of face-like stimuli and its subsequent decline. Cognition, 40(1), 1–19. http://doi.org/10.1016/0010-0277(91)90045-6 Jones, W., & Klin, A. (2013). Attention to eyes is present but in decline in 2–6-month-old infants later diagnosed with autism. Nature, 504(7480), 427–431. http://doi.org/10.1038/nature12715 Juan, C.-H., Shorter-Jacobi, S. M., & Schall, J. D. (2004). Dissociation of spatial attention and saccade preparation. Proceedings of the National Academy of Sciences of the United States of America, 101(43), 15541–15544. http://doi.org/10.1073/pnas.0403507101 Kanwisher, N., McDermott, J., & Chun, M. M. (1997). The fusiform face area: A module in human extrastriate cortex specialized for face perception. The Journal of Neuroscience, 17(11), 4302–4311. Kanwisher, N., Tong, F., & Nakayama, K. (1998). The effect of face inversion on the human fusiform face area. Cognition, 68(1), B1–B11. http://doi.org/10.1016/S0010-0277(98)00035-3 Kendon, A. (1967). Some functions of gaze-direction in social interaction. Acta Psychologica, 26(1), 22–63. http://doi.org/10.1016/0001-6918(67)90005-4 Kingstone, A. (2009). Taking a real look at social attention. Current Opinion in Neurobiology, 19(1), 52–56. http://doi.org/10.1016/j.conb.2009.05.004 Kingstone, A., Smilek, D., & Eastwood, J. D. (2008). Cognitive ethology: A new approach for studying human cognition. British Journal of Psychology, 99(3), 317–340. http://doi.org/10.1348/000712607X251243 Kingstone, A., Smilek, D., Ristic, J., Friesen, C. K., & Eastwood, J. D. (2003). Attention, researchers! It is time to take a look at the real world. Current Directions in Psychological Science, 12(5), 176–180. http://doi.org/10.1111/1467-8721.01255 Kirchner, H., & Thorpe, S. J. (2006).
Ultra-rapid object detection with saccadic eye movements: Visual processing speed revisited. Vision Research, 46(11), 1762–1776. http://doi.org/10.1016/j.visres.2005.10.002 Kitazawa, K., & Fujiyama, T. (2008). Pedestrian vision and collision avoidance behavior: Investigation of the information process space of pedestrians using an eye tracker. In W. W. F. Klingsch, C. Rogsch, A. Schadschneider, & M. Schreckenberg (Eds.), Pedestrian and Evacuation Dynamics (pp. 95–108). Springer Berlin Heidelberg. Klein, J. T., Shepherd, S. V., & Platt, M. L. (2009). Social attention and the brain. Current Biology, 19(20), R958–R962. http://doi.org/10.1016/j.cub.2009.08.010 Kleinhans, N. M., Richards, T., Johnson, L. C., Weaver, K. E., Greenson, J., Dawson, G., & Aylward, E. (2011). fMRI evidence of neural abnormalities in the subcortical face processing system in ASD. NeuroImage, 54(1), 697–704. http://doi.org/10.1016/j.neuroimage.2010.07.037 Kleinke, C. L. (1986). Gaze and eye contact: A research review. Psychological Bulletin, 100(1), 78–100. http://doi.org/10.1037/0033-2909.100.1.78 Kleinke, C. L., Staneski, R. A., & Berger, D. E. (1975). Evaluation of an interviewer as a function of interviewer gaze, reinforcement of subject gaze, and interviewer attractiveness. Journal of Personality and Social Psychology, 31(1), 115–122. http://doi.org/10.1037/h0076244 Klin, A., Jones, W., Schultz, R. T., Volkmar, F. R., & Cohen, D. (2002). Visual fixation patterns during viewing of naturalistic social situations as predictors of social competence in individuals with autism. Archives of General Psychiatry, 59(9), 809–816. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/12215080 Knapp, M. L., Hall, J. A., & Horgan, T. (2009). Nonverbal Communication in Human Interaction (8th ed., Vol. 5). Boston: Wadsworth, Cengage Learning. Retrieved from http://books.google.com/books?id=j5HIIfRUPm0C Kobayashi, H., & Kohshima, S. (1997). Unique morphology of the human eye. Nature, 387(6635), 767–768.
http://doi.org/10.1038/42842 Kobayashi, H., & Kohshima, S. (2001a). Evolution of the human eye as a device for communication. In T. Matsuzawa (Ed.), Primate Origins of Human Cognition and Behavior (pp. 383–401). Japan: Springer. Kobayashi, H., & Kohshima, S. (2001b). Unique morphology of the human eye and its adaptive meaning: Comparative studies on external morphology of the primate eye. Journal of Human Evolution, 40(5), 419–435. http://doi.org/10.1006/jhev.2001.0468 Kuhn, G., & Kingstone, A. (2009). Look away! Eyes and arrows engage oculomotor responses automatically. Attention, Perception & Psychophysics, 71(2), 314–327. http://doi.org/10.3758/APP.71.2.314 Kuhn, G., Kourkoulou, A., & Leekam, S. R. (2010). How magic changes our expectations about autism. Psychological Science, 21(10), 1487–1493. http://doi.org/10.1177/0956797610383435 Kuhn, G., Tatler, B. W., & Cole, G. G. (2009). You look where I look! Effect of gaze cues on overt and covert attention in misdirection. Visual Cognition, 17(6), 925–944. http://doi.org/10.1080/13506280902826775 Kundel, H. L., Nodine, C. F., Krupinski, E. A., & Mello-Thoms, C. (2008). Using gaze-tracking data and mixture distribution analysis to support a holistic model for the detection of cancers on mammograms. Academic Radiology, 15(7), 881–886. http://doi.org/10.1016/j.acra.2008.01.023 Kustov, A. A., & Robinson, D. L. (1996). Shared neural control of attentional shifts and eye movements. Nature, 384, 74–77. http://doi.org/10.1038/384074a0 Laidlaw, K. E. W., Badiudeen, T. A., Zhu, M. J. H., & Kingstone, A. (2015). A fresh look at saccadic trajectories and task irrelevant stimuli: Social relevance matters. Vision Research, 111(Part A), 82–90. http://doi.org/10.1016/j.visres.2015.03.024 Laidlaw, K. E. W., Foulsham, T., Kuhn, G., & Kingstone, A. (2011). Potential social interactions are important to social attention. Proceedings of the National Academy of Sciences of the United States of America, 108(14), 5548–5553.
http://doi.org/10.1073/pnas.1017022108 Laidlaw, K. E. W., & Kingstone, A. (2010). The time course of vertical, horizontal and oblique saccade trajectories: Evidence for greater distractor interference during vertical saccades. Vision Research, 50(9), 829–837. http://doi.org/10.1016/j.visres.2010.02.009 Laidlaw, K. E. W., Risko, E. F., & Kingstone, A. (2012). A new look at social attention: Orienting to the eyes is not (entirely) under volitional control. Journal of Experimental Psychology: Human Perception and Performance, 38(5), 1132–1143. http://doi.org/10.1037/a0027075 Langton, S. R. H., & Bruce, V. (1999). Reflexive visual orienting in response to the social attention of others. Visual Cognition, 6(5), 541–567. http://doi.org/10.1080/135062899394939 Langton, S. R. H., Law, A. S., Burton, A. M., & Schweinberger, S. R. (2008). Attention capture by faces. Cognition, 107(1), 330–342. http://doi.org/10.1016/j.cognition.2007.07.012 Langton, S. R. H., Watt, R. J., & Bruce, V. (2000). Do the eyes have it? Cues to the direction of social attention. Trends in Cognitive Sciences, 4(2), 50–59. http://doi.org/10.1016/S1364-6613(99)01436-9 Lansing, C. R., & McConkie, G. W. (2003). Word identification and eye fixation locations in visual and visual-plus-auditory presentations of spoken sentences. Perception & Psychophysics, 65(4), 536–552. http://doi.org/10.3758/BF03194581 LeDoux, J. E. (1996). The Emotional Brain. New York: Simon and Schuster. Levy, J., Foulsham, T., & Kingstone, A. (2012). Monsters are people too. Biology Letters, 9(1). http://doi.org/10.1098/rsbl.2012.0850 Lewis, M. B., & Edmonds, A. J. (2003). Face detection: Mapping human performance. Perception, 32(8), 903–920. http://doi.org/10.1068/p5007 Lobmaier, J. S., Fischer, M. H., & Schwaninger, A. (2006). Objects capture perceived gaze direction. Experimental Psychology, 53(2), 117–122. http://doi.org/10.1027/1618-3169.53.2.117 Loomis, J. M., Kelly, J. W., Pusch, M., Bailenson, J. N., & Beall, A. C. (2008). 
Psychophysics of perceiving eye-gaze and head direction with peripheral vision: Implications for the dynamics of eye-gaze behavior. Perception, 37(9), 1443–1457. http://doi.org/10.1068/p5896 Ludwig, C. J. H., & Gilchrist, I. D. (2002). Measuring saccade curvature: A curve-fitting approach. Behavior Research Methods, Instruments, & Computers, 34(4), 618–624. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/12564565 Ludwig, C. J. H., & Gilchrist, I. D. (2003). Target similarity affects saccade curvature away from irrelevant onsets. Experimental Brain Research, 152(1), 60–69. http://doi.org/10.1007/s00221-003-1520-7 Lutchmaya, S., Baron-Cohen, S., & Raggatt, P. (2002). Foetal testosterone and eye contact in 12-month-old human infants. Infant Behavior and Development, 25(3), 327–335. http://doi.org/10.1016/S0163-6383(02)00094-2 Mack, A., Pappas, Z., Silverman, M., & Gay, R. (2002). What we see: Inattention and the capture of attention by meaning. Consciousness and Cognition, 11(4), 488–506. http://doi.org/10.1016/S1053-8100(02)00028-4 Madan, C. R. (2013). Toward a common theory for learning from reward, affect, and motivation: The SIMON framework. Frontiers in Systems Neuroscience, 7, 59. http://doi.org/10.3389/fnsys.2013.00059 Mason, M. F., Tatkow, E. P., & Macrae, C. N. (2005). The look of love: Gaze shifts and person perception. Psychological Science, 16(3), 236–239. http://doi.org/10.1111/j.0956-7976.2005.00809.x Maurer, D., Le Grand, R., & Mondloch, C. J. (2002). The many faces of configural processing. Trends in Cognitive Sciences, 6(6), 255–260. http://doi.org/10.1016/S1364-6613(02)01903-4 McCarthy, A., Lee, K., Itakura, S., & Muir, D. W. (2006). Cultural display rules drive eye gaze during thinking. Journal of Cross-Cultural Psychology, 37(6), 717–722. http://doi.org/10.1177/0022022106292079 McCarthy, A., Lee, K., Itakura, S., & Muir, D. W. (2008). Gaze display when thinking depends on culture and context.
Journal of Cross-Cultural Psychology, 39(6), 716–729. http://doi.org/10.1177/0022022108323807 McCarthy, G., Puce, A., Gore, J. C., & Allison, T. (1997). Face-specific processing in the human fusiform gyrus. Journal of Cognitive Neuroscience, 9(5), 605–610. http://doi.org/10.1162/jocn.1997.9.5.605 McConkie, G. W., & Rayner, K. (2013). Asymmetry of the perceptual span in reading. Bulletin of the Psychonomic Society, 8(5), 365–368. http://doi.org/10.3758/BF03335168 McKelvie, S. J. (1976). The role of eyes and mouth in the memory of a face. The American Journal of Psychology, 89(2), 311–323. http://doi.org/10.2307/1421414 McPeek, R. M., Han, J. H., & Keller, E. L. (2003). Competition between saccade goals in the superior colliculus produces saccade curvature. Journal of Neurophysiology, 89(5), 2577–2590. http://doi.org/10.1152/jn.00657.2002 McSorley, E., Cruickshank, A. G., & Inman, L. A. (2009). The development of the spatial extent of oculomotor inhibition. Brain Research, 1298, 92–98. http://doi.org/10.1016/j.brainres.2009.08.081 McSorley, E., Haggard, P., & Walker, R. (2004). Distractor modulation of saccade trajectories: Spatial separation and symmetry effects. Experimental Brain Research, 155(3), 320–333. http://doi.org/10.1007/s00221-003-1729-5 McSorley, E., Haggard, P., & Walker, R. (2006). Time course of oculomotor inhibition revealed by saccade trajectory modulation. Journal of Neurophysiology, 96(3), 1420–1424. http://doi.org/10.1152/jn.00315.2006 McSorley, E., Haggard, P., & Walker, R. (2009). The spatial and temporal shape of oculomotor inhibition. Vision Research, 49(6), 608–614. http://doi.org/10.1016/j.visres.2009.01.015 Mesibov, G. B. (1984). Social skills training with verbal autistic adolescents and adults: A program model. Journal of Autism and Developmental Disorders, 14, 395–404. http://doi.org/10.1007/BF02409830 Minear, M., & Park, D. C. (2004). A lifespan database of adult facial stimuli.
Behavior Research Methods, Instruments, & Computers, 36(4), 630–633. http://doi.org/10.3758/BF03206543 Mondloch, C. J., Lewis, T. L., Budreau, D. R., Maurer, D., Dannemiller, J. L., Stephens, B. R., & Kleiner-Gathercoal, K. A. (1999). Face perception during early infancy. Psychological Science, 10(5), 419–422. http://doi.org/10.1111/1467-9280.00179 Montague, P. R., Berns, G. S., Cohen, J. D., McClure, S. M., Pagnoni, G., Dhamala, M., … Fisher, R. E. (2002). Hyperscanning: Simultaneous fMRI during linked social interactions. NeuroImage, 16(4), 1159–1164. http://doi.org/10.1006/nimg.2002.1150 Moore, T., Armstrong, K. M., & Fallah, M. (2003). Visuomotor origins of covert spatial attention. Neuron, 40(4), 671–683. http://doi.org/10.1016/S0896-6273(03)00716-5 Moran, J., & Desimone, R. (1985). Selective attention gates visual processing in the extrastriate cortex. Science, 229(4715), 782–784. http://doi.org/10.1126/science.4023713 Morton, J., & Johnson, M. H. (1991). CONSPEC and CONLERN: A two-process theory of infant face recognition. Psychological Review, 98(2), 164–181. http://doi.org/10.1037/0033-295X.98.2.164 Moukheiber, A., Rautureau, G., Perez-Diaz, F., Soussignan, R., Dubal, S., Jouvent, R., & Pelissolo, A. (2010). Gaze avoidance in social phobia: Objective measure and correlates. Behaviour Research and Therapy, 48(2), 147–151. http://doi.org/10.1016/j.brat.2009.09.012 Mulckhuyse, M., Van der Stigchel, S., & Theeuwes, J. (2009). Early and late modulation of saccade deviations by target distractor similarity. Journal of Neurophysiology, 102(3), 1451–1458. http://doi.org/10.1152/jn.00068.2009 Müller, H. J., & Rabbitt, P. M. (1989). Reflexive and voluntary orienting of visual attention: Time course of activation and resistance to interruption. Journal of Experimental Psychology: Human Perception and Performance, 15(2), 315–330. http://doi.org/10.1037/0096-1523.15.2.315 Mundy, P., & Newell, L. (2007).
Attention, joint attention, and social cognition. Current Directions in Psychological Science, 16(5), 269–274. http://doi.org/10.1111/j.1467-8721.2007.00518.x Munoz, D. P., & Everling, S. (2004). Look away: The anti-saccade task and the voluntary control of eye movement. Nature Reviews Neuroscience, 5(3), 218–228. http://doi.org/10.1038/nrn1345 Nation, K., & Penny, S. (2008). Sensitivity to eye gaze in autism: Is it normal? Is it automatic? Is it social? Development and Psychopathology, 20(1), 79–97. http://doi.org/10.1017/S0954579408000047 Nummenmaa, L., & Calder, A. J. (2009). Neural mechanisms of social attention. Trends in Cognitive Sciences, 13(3), 135–143. http://doi.org/10.1016/j.tics.2008.12.006 Nummenmaa, L., & Hietanen, J. K. (2006). Gaze distractors influence saccadic curvature: Evidence for the role of the oculomotor system in gaze-cued orienting. Vision Research, 46(21), 3674–3680. http://doi.org/10.1016/j.visres.2006.06.004 Nummenmaa, L., Hyönä, J., & Calvo, M. G. (2009). Emotional scene content drives the saccade generation system reflexively. Journal of Experimental Psychology: Human Perception and Performance, 35(2), 305–323. http://doi.org/10.1037/a0013626 O’Donnell, C., & Bruce, V. (2001). Familiarisation with faces selectively enhances sensitivity to changes made to the eyes. Perception, 30(6), 755–764. http://doi.org/10.1068/p3027 Olk, B., & Garay-Vado, A. M. (2011). Attention to faces: Effects of face inversion. Vision Research, 51(14), 1659–1666. http://doi.org/10.1016/j.visres.2011.05.007 Olk, B., & Kingstone, A. (2003). Why are antisaccades slower than prosaccades? A novel finding using a new paradigm. Neuroreport, 14(1), 151–155. http://doi.org/10.1097/00001756-200301200-00028 Ozonoff, S., & Miller, J. N. (1995). Teaching theory of mind: A new approach to social skills training for individuals with autism. Journal of Autism and Developmental Disorders, 25(4), 415–433. http://doi.org/10.1007/BF02179376 Park, D. C., & Huang, C.-M.
(2010). Culture wires the brain: A cognitive neuroscience perspective. Perspectives on Psychological Science. http://doi.org/10.1177/1745691610374591 Patterson, M. L., Iizuka, Y., Tubbs, M. E., Ansel, J., Tsutsumi, M., & Anson, J. (2007). Passing encounters East and West: Comparing Japanese and American pedestrian interactions. Journal of Nonverbal Behavior, 31(3), 155–166. http://doi.org/10.1007/s10919-007-0028-4 Patterson, M. L., Webb, A., & Schwartz, W. (2002). Passing encounters: Patterns of recognition and avoidance in pedestrians. Basic and Applied Social Psychology, 24(1), 57–66. http://doi.org/10.1207/S15324834BASP2401_5 Pelphrey, K. A., Sasson, N. J., Reznick, J. S., Paul, G., Goldman, B. D., & Piven, J. (2002). Visual scanning of faces in autism. Journal of Autism and Developmental Disorders, 32(4), 249–261. http://doi.org/10.1023/A:1016374617369 Perrett, D. I., & Emery, N. J. (1994). Understanding the intentions of others from visual signals: Neurophysiological evidence. Current Psychology of Cognition, 13(5), 683–694. Retrieved from http://doi.apa.org/psycinfo/1995-24608-001 Perrett, D. I., & Mistlin, A. J. (1990). Perception of facial characteristics by monkeys. In W. C. Stebbins & M. A. Berkley (Eds.), Comparative perception, Vol. 2: Complex signals (pp. 187–215). Oxford: John Wiley & Sons. Pessoa, L., & Adolphs, R. (2010). Emotion processing and the amygdala: From a “low road” to “many roads” of evaluating biological significance. Nature Reviews Neuroscience, 11(11), 773–783. http://doi.org/10.1038/nrn2920 Peterson, M. F., & Eckstein, M. P. (2012). Looking just below the eyes is optimal across face recognition tasks. Proceedings of the National Academy of Sciences of the United States of America, 109(48), E3314–E3323. http://doi.org/10.1073/pnas.1214269109 Peterson, M. S., Kramer, A. F., & Irwin, D. E. (2004). Covert shifts of attention precede involuntary eye movements. Perception & Psychophysics, 66(3), 398–405.
http://doi.org/10.3758/BF03194888 Pfeiffer, U., Vogeley, K., & Schilbach, L. (2013). From gaze cueing to dual eye-tracking: Novel approaches to investigate the neural correlates of gaze in social interaction. Neuroscience and Biobehavioral Reviews, 37(10), 2516–2528. http://doi.org/10.1016/j.neubiorev.2013.07.017 Polk, T. A., Drake, R. M., Jonides, J., Smith, M. R., & Smith, E. E. (2008). Attention enhances the neural processing of relevant features and suppresses the processing of irrelevant features in humans: A functional magnetic resonance imaging study of the Stroop task. The Journal of Neuroscience, 28(51), 13786–13792. http://doi.org/10.1523/JNEUROSCI.1026-08.2008 Posner, M. I. (1980). Orienting of attention. The Quarterly Journal of Experimental Psychology, 32(1), 3–25. http://doi.org/10.1080/00335558008248231 Posner, M. I., & Petersen, S. E. (1990). The attention system of the human brain. Annual Review of Neuroscience, 13, 25–42. http://doi.org/10.1146/annurev.ne.13.030190.000325 Posner, M. I., Petersen, S. E., Fox, P. T., & Raichle, M. E. (1988). Localization of cognitive operations in the human brain. Science, 240(4859), 1627–1631. http://doi.org/10.1126/science.3289116 Premack, D., & Woodruff, G. (1978). Does the chimpanzee have a theory of mind? Behavioral and Brain Sciences, 1(4), 515–526. http://doi.org/10.1017/S0140525X00076512 Przyrembel, M., Smallwood, J., Pauen, M., & Singer, T. (2012). Illuminating the dark matter of social neuroscience: Considering the problem of social interaction from philosophical, psychological, and neuroscientific perspectives. Frontiers in Human Neuroscience, 6, 190. http://doi.org/10.3389/fnhum.2012.00190 Qian, H., Gao, X., & Wang, Z. (2015). Faces distort eye movement trajectories, but the distortion is not stronger for your own face. Experimental Brain Research, 233(7), 2155–2166. http://doi.org/10.1007/s00221-015-4286-9 Redcay, E., Dodell-Feder, D., Pearrow, M. J., Mavros, P. L., Kleiner, M., Gabrieli, J. D.
E., & Saxe, R. (2010). Live face-to-face interaction during fMRI: A new tool for social cognitive neuroscience. NeuroImage, 50(4), 1639–1647. http://doi.org/10.1016/j.neuroimage.2010.01.052 Redcay, E., Rice, K., & Saxe, R. (2013). Interaction versus observation: A finer look at this distinction and its importance to autism. The Behavioral and Brain Sciences, 36(4), 435. http://doi.org/10.1017/S0140525X12002026 Rhodes, G., Byatt, G., Michie, P. T., & Puce, A. (2004). Is the fusiform face area specialized for faces, individuation, or expert individuation? Journal of Cognitive Neuroscience, 16(2), 189–203. http://doi.org/10.1162/089892904322984508 Ricciardelli, P., Baylis, G., & Driver, J. (2000). The positive and negative of human expertise in gaze perception. Cognition, 77(1), B1–B14. http://doi.org/10.1016/S0010-0277(00)00092-5 Richler, J. J., Cheung, O. S., & Gauthier, I. (2011). Holistic processing predicts face recognition. Psychological Science, 22(4), 464–471. http://doi.org/10.1177/0956797611401753 Risko, E. F., Laidlaw, K. E. W., Freeth, M., Foulsham, T., & Kingstone, A. (2012). Social attention with real versus reel stimuli: Toward an empirical approach to concerns about ecological validity. Frontiers in Human Neuroscience, 6, 143. http://doi.org/10.3389/fnhum.2012.00143 Rizzolatti, G., Riggio, L., Dascola, I., & Umiltà, C. (1987). Reorienting attention across the horizontal and vertical meridians: Evidence in favor of a premotor theory of attention. Neuropsychologia, 25(1), 31–40. http://doi.org/10.1016/0028-3932(87)90041-8 Rizzolatti, G., Riggio, L., & Sheliga, B. M. (1994). Space and selective attention. In C. Umiltà & M. Moscovitch (Eds.), Attention and performance XV: Conscious and Nonconscious Information Processing (Vol. 3 Suppl, pp. 232–265). Boston: MIT Press. Ro, T., Russell, C., & Lavie, N. (2001). Changing faces: A detection advantage in the flicker paradigm. Psychological Science, 12(1), 94–99.
http://doi.org/10.1111/1467-9280.00317 Ross, L. E., & Ross, S. M. (1980). Saccade latency and warning signals: Stimulus onset, offset, and change as warning events. Perception & Psychophysics, 27(3), 251–257. Rossano, F., Brown, P., & Levinson, S. C. (2009). Gaze, questioning, and culture. In J. Sidnell (Ed.), Conversation Analysis: Comparative Perspectives (pp. 187–249). New York: Cambridge University Press. Rutter, D. R., & Durkin, K. (1987). Turn-taking in mother–infant interaction: An examination of vocalizations and gaze. Developmental Psychology, 23(1), 54–61. http://doi.org/10.1037/0012-1649.23.1.54 Sadr, J., Jarudi, I., & Sinha, P. (2003). The role of eyebrows in face recognition. Perception, 32(3), 285–293. http://doi.org/10.1068/p5027 Sæther, L., Van Belle, W., Laeng, B., Brennen, T., & Øvervoll, M. (2009). Anchoring gaze when categorizing faces’ sex: Evidence from eye-tracking data. Vision Research, 49(23), 2870–2880. http://doi.org/10.1016/j.visres.2009.09.001 Saslow, M. G. (1967). Effects of components of displacement-step stimuli upon latency for saccadic eye movement. Journal of the Optical Society of America, 57(8), 1024–1029. Saxe, R. (2006). Uniquely human social cognition. Current Opinion in Neurobiology, 16(2), 235–239. http://doi.org/10.1016/j.conb.2006.03.001 Schilbach, L., Timmermans, B., Reddy, V., Costall, A., Bente, G., Schlicht, T., & Vogeley, K. (2013). Toward a second-person neuroscience. Behavioral and Brain Sciences, 36(4), 393–414. Schmalzl, L., Palermo, R., Green, M., Brunsdon, R., & Coltheart, M. (2008). Training of familiar face recognition and visual scan paths for faces in a child with congenital prosopagnosia. Cognitive Neuropsychology, 25(5), 704–729. Schmidt, L. J., Belopolsky, A. V., & Theeuwes, J. (2012). The presence of threat affects saccade trajectories. Visual Cognition, 20(3), 284–299. http://doi.org/10.1080/13506285.2012.658885 Schneider, W. X., & Deubel, H. (2002).
Selection-for-perception and selection-for-spatial-motor-action are coupled by visual attention: A review of recent findings and new evidence from stimulus-driven saccade control. Attention and Performance XIX: Common Mechanisms in Perception and Action, 19, 609–627. Retrieved from http://www.paed.uni-muenchen.de/~deubel/A_P_2000.PDF Schütz, A. C., Trommershäuser, J., & Gegenfurtner, K. R. (2012). Dynamic integration of information about salience and value for saccadic eye movements. Proceedings of the National Academy of Sciences, 109(19), 7547–7552. http://doi.org/10.1073/pnas.1115638109 Schyns, P. G., Bonnar, L., & Gosselin, F. (2002). Show me the features! Understanding recognition from the use of visual information. Psychological Science, 13(5), 402–409. http://doi.org/10.1111/1467-9280.00472 Sebanz, N., Knoblich, G., & Humphreys, G. W. (2008). Cognitive ethology for humans: Inconvenient truth or attentional deficit? British Journal of Psychology, 99(3), 347–350. http://doi.org/10.1348/000712608X297080 Sekiguchi, T. (2011). Individual differences in face memory and eye fixation patterns during face learning. Acta Psychologica, 137(1), 1–9. http://doi.org/10.1016/j.actpsy.2011.01.014 Senju, A., & Hasegawa, T. (2005). Direct gaze captures visuospatial attention. Visual Cognition, 12(1), 127–144. http://doi.org/10.1080/13506280444000157 Senju, A., Hasegawa, T., & Tojo, Y. (2005). Does perceived direct gaze boost detection in adults and children with and without autism? The stare-in-the-crowd effect revisited. Visual Cognition, 12(8), 1474–1496. http://doi.org/10.1080/13506280444000797 Senju, A., & Johnson, M. H. (2009a). Atypical eye contact in autism: Models, mechanisms and development. Neuroscience and Biobehavioral Reviews, 33(8), 1204–1214. http://doi.org/10.1016/j.neubiorev.2009.06.001 Senju, A., & Johnson, M. H. (2009b). The eye contact effect: Mechanisms and development. Trends in Cognitive Sciences, 13(3), 127–134.
http://doi.org/10.1016/j.tics.2008.11.009 Senju, A., Johnson, M. H., & Csibra, G. (2006). The development and neural basis of referential gaze perception. Social Neuroscience, 1(3–4), 220–234. http://doi.org/10.1080/17470910600989797 Sheliga, B. M., Riggio, L., Craighero, L., & Rizzolatti, G. (1995). Spatial attention-determined modifications in saccade trajectories. Neuroreport, 6(3), 585–588. http://doi.org/10.1097/00001756-199502000-00044 Sheliga, B. M., Riggio, L., & Rizzolatti, G. (1994). Orienting of attention and eye movements. Experimental Brain Research, 98(3), 507–522. Sheliga, B. M., Riggio, L., & Rizzolatti, G. (1995). Spatial attention and eye movements. Experimental Brain Research, 105(2), 261–275. Simion, F., Valenza, E., Cassia, V. M., Turati, C., & Umiltà, C. (2002). Newborns’ preference for up-down asymmetrical configurations. Developmental Science, 5(4), 427–434. http://doi.org/10.1111/1467-7687.00237 Smallwood, J., Brown, K. S., Tipper, C. M., Giesbrecht, B., Franklin, M. S., Mrazek, M. D., … Schooler, J. W. (2011). Pupillometric evidence for the decoupling of attention from perceptual input during offline thought. PLoS ONE, 6(3), e18298. http://doi.org/10.1371/journal.pone.0018298 Smilek, D., Birmingham, E., Cameron, D., Bischof, W. F., & Kingstone, A. (2006). Cognitive ethology and exploring attention in real-world scenes. Brain Research, 1080(1), 101–119. http://doi.org/10.1016/j.brainres.2005.12.090 Smith, A. T., Singh, K. D., & Greenlee, M. W. (2000). Attentional suppression of activity in the human visual cortex. Neuroreport, 11(2), 271–277. http://doi.org/10.1097/00001756-200002070-00010 Smith, D. T., & Schenk, T. (2012). The premotor theory of attention: Time to move on? Neuropsychologia, 50(6), 1104–1114. http://doi.org/10.1016/j.neuropsychologia.2012.01.025 Smith, D. T., Schenk, T., & Rorden, C. (2012). Saccade preparation is required for exogenous attention but not endogenous attention or IOR.
Journal of Experimental Psychology: Human Perception & Performance, 38(6), 1438–1447. http://doi.org/10.1037/a0027794 Somers, D. C., Dale, A. M., Seiffert, A. E., & Tootell, R. B. (1999). Functional MRI reveals spatially specific attentional modulation in human primary visual cortex. Proceedings of the National Academy of Sciences of the United States of America, 96(4), 1663–1668. http://doi.org/10.1073/pnas.96.4.1663 Staugaard, S. R. (2010). Threatening faces and social anxiety: A literature review. Clinical Psychology Review. http://doi.org/10.1016/j.cpr.2010.05.001 Tanaka, J. W., & Farah, M. J. (1993). Parts and wholes in face recognition. The Quarterly Journal of Experimental Psychology, 46(2), 225–245. http://doi.org/10.1080/14640749308401045 Tanaka, J. W., & Sengco, J. A. (1997). Features and their configuration in face recognition. Memory & Cognition, 25(5), 583–592. http://doi.org/10.3758/BF03211301 Tanaka, J. W., & Sung, A. (2013). The “eye avoidance” hypothesis of autism face processing. Journal of Autism and Developmental Disorders, 1–15. http://doi.org/10.1007/s10803-013-1976-7 Tatler, B. W. (2007). The central fixation bias in scene viewing: Selecting an optimal viewing position independently of motor biases and image feature distributions. Journal of Vision, 7(14), 4. http://doi.org/10.1167/7.14.4 Teufel, C., Fletcher, P. C., & Davis, G. (2010). Seeing other minds: Attributed mental states influence perception. Trends in Cognitive Sciences, 14(8), 376–382. http://doi.org/10.1016/j.tics.2010.05.005 Thayer, S., & Schiff, W. (1977). Gazing patterns and attribution of sexual involvement. The Journal of Social Psychology. http://doi.org/10.1080/00224545.1977.9924014 Theeuwes, J. (2010). Top-down and bottom-up control of visual selection. Acta Psychologica, 135(2), 77–99. http://doi.org/10.1016/j.actpsy.2010.02.006 Theeuwes, J., Kramer, A. F., Hahn, S., & Irwin, D. E. (1998).
Our eyes do not always go where we want them to go: Capture of the eyes by new objects. Psychological Science, 9(5), 379–385. http://doi.org/10.1111/1467-9280.00071 Theeuwes, J., & Van der Stigchel, S. (2006). Faces capture attention: Evidence from inhibition of return. Visual Cognition, 13(6), 657–665. http://doi.org/10.1080/13506280500410949 Theeuwes, J., & Van der Stigchel, S. (2009). Saccade trajectory deviations and inhibition-of-return: Measuring the amount of attentional processing. Vision Research, 49(10), 1307–1315. http://doi.org/10.1016/j.visres.2008.07.021 Theeuwes, J., Van der Stigchel, S., & Olivers, C. N. L. (2006). Spatial working memory and inhibition of return. Psychonomic Bulletin & Review, 13(4), 608–613. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/17201359 Tipples, J. (2002). Eye gaze is not unique: Automatic orienting in response to uninformative arrows. Psychonomic Bulletin & Review, 9(2), 314–318. http://doi.org/10.3758/BF03196287 Todd, R. M., Cunningham, W. A., Anderson, A. K., & Thompson, E. (2012). Affect-biased attention as emotion regulation. Trends in Cognitive Sciences, 16(7), 365–372. http://doi.org/10.1016/j.tics.2012.06.003 Tomasello, M., Carpenter, M., Call, J., Behne, T., & Moll, H. (2005). Understanding and sharing intentions: The origins of cultural cognition. The Behavioral and Brain Sciences, 28(5), 675–691. http://doi.org/10.1017/S0140525X05000129 Tylén, K., Allen, M., Hunter, B. K., & Roepstorff, A. (2012). Interaction vs. observation: Distinctive modes of social cognition in human brain and behavior? A combined fMRI and eye-tracking study. Frontiers in Human Neuroscience, 6, 331. http://doi.org/10.3389/fnhum.2012.00331 Valentine, T. (1988). Upside-down faces: A review of the effect of inversion upon face recognition. British Journal of Psychology, 79(4), 471–491. http://doi.org/10.1111/j.2044-8295.1988.tb02747.x Van der Stigchel, S. (2010). Recent advances in the study of saccade trajectory deviations.
Vision Research, 50(17), 1619–1627. http://doi.org/10.1016/j.visres.2010.05.028 Van der Stigchel, S., Meeter, M., & Theeuwes, J. (2006). Eye movement trajectories and what they tell us. Neuroscience and Biobehavioral Reviews, 30(5), 666–679. http://doi.org/10.1016/j.neubiorev.2005.12.001 Van der Stigchel, S., Meeter, M., & Theeuwes, J. (2007). The spatial coding of the inhibition evoked by distractors. Vision Research, 47(2), 210–218. http://doi.org/10.1016/j.visres.2006.11.001 Van der Stigchel, S., Mulckhuyse, M., & Theeuwes, J. (2009). Eye cannot see it: The interference of subliminal distractors on saccade metrics. Vision Research, 49(16), 2104–2109. http://doi.org/10.1016/j.visres.2009.05.018 Van der Stigchel, S., & Theeuwes, J. (2005). Relation between saccade trajectories and spatial distractor locations. Brain Research. Cognitive Brain Research, 25(2), 579–582. http://doi.org/10.1016/j.cogbrainres.2005.08.001 Van Der Stigchel, S., & Theeuwes, J. (2005). The influence of attending to multiple locations on eye movements. Vision Research, 45(15), 1921–1927. http://doi.org/10.1016/j.visres.2005.02.002 Van der Stigchel, S., & Theeuwes, J. (2007). The relationship between covert and overt attention in endogenous cuing. Perception & Psychophysics, 69(5), 719–731. http://doi.org/10.3758/BF03193774 Van Zoest, W., & Donk, M. (2005). The effects of salience on saccadic target selection. Visual Cognition, 12(2), 353–375. http://doi.org/10.1080/13506280444000229 195  Vertegaal, R., Vertegaal, R., Slagter, R., Slagter, R., van Der Veer, G., van Der Veer, G., … Nijholt, A. (2001). Eye gaze patterns in conversations: there is more to conversational agents than meets the eyes. In Proceedings of the SIGCHI conference on Human factors in computing systems (p. 308). http://doi.org/10.1145/365024.365119 Vinette, C., Gosselin, F., & Schyns, P. G. (2004). Spatio-temporal dynamics of face recognition in a flash: It’s in the eyes. Cognitive Science, 28(2), 289–301. 
http://doi.org/10.1016/j.cogsci.2004.01.002 Viviani, P., Berthoz, A., & Tracey, D. (1977). The curvature of oblique saccades. Vision Research, 17(5), 661–664. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/878350 Võ, M. L.-H., Smith, T. J., Mital, P. K., & Henderson, J. M. (2012). Do the eyes really have it? Dynamic allocation of attention when viewing moving faces. Journal of Vision, 12(13), 555–561. http://doi.org/10.1167/12.13.3.Introduction Von Grünau, M., & Anston, C. (1995). The detection of gaze direction: A stare-in-the-crowd effect. Perception, 24(11), 1297–1313. http://doi.org/10.1068/p241297 Vuilleumier, P. (2000). Faces call for attention: Evidence from patients with visual extinction. Neuropsychologia, 38(5), 693–700. http://doi.org/10.1016/S0028-3932(99)00107-4 Vuilleumier, P., & Driver, J. (2007). Modulation of visual processing by attention and emotion: Windows on causal interactions between human brain regions. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 362(1481), 837–855. http://doi.org/10.1098/rstb.2007.2092 Walker, R., Deubel, H., Schneider, W. X., & Findlay, J. M. (1997). Effect of remote distractors on saccade programming: Evidence for an extended fixation zone. Journal of Neurophysiology, 78(2), 1108–1119. Walker, R., McSorley, E., & Haggard, P. (2006). The control of saccade trajectories: Direction of curvature depends on prior knowledge of target location and saccade latency. Perception & Psychophysics, 68(1), 129–138. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/16617837 Walker-Smith, G. J. (1978). The effects of delay and exposure duration in a face recognition task. Perception & Psychophysics. http://doi.org/10.3758/BF03202975 Walker-Smith, G. J., Gale, A. G., & Findlay, J. M. (1977). Eye movement strategies involved in face perception. Perception, 6(1), 313–326. http://doi.org/10.1068/p060313n Wang, Z., Kruijne, W., & Theeuwes, J. (2012). 
Lateral interactions in the superior colliculus produce saccade deviation in a neural field model. Vision Research, 62, 66–74. http://doi.org/10.1016/j.visres.2012.03.024 196  Weaver, M. D., Lauwereyns, J., & Theeuwes, J. (2011). The effect of semantic information on saccade trajectory deviations. Vision Research, 51(10), 1124–1128. http://doi.org/10.1016/j.visres.2011.03.005 Weiss, M. J., & Harris, S. L. (2001). Teaching social skills to people with autism. Behavior Modification, 25(5), 785–802. http://doi.org/10.1177/0145445501255007 West, G. L., Al-Aidroos, N., Susskind, J., & Pratt, J. (2011). Emotion and action: The effect of fear on saccadic performance. Experimental Brain Research, 209(1), 153–158. http://doi.org/10.1007/s00221-010-2508-8 West, G. L., Anderson, A. K., Ferber, S., & Pratt, J. (2011). Electrophysiological evidence for biased competition in V1 for fear expressions. Journal of Cognitive Neuroscience, 23(11), 3410–3418. http://doi.org/10.1162/jocn.2011.21605 White, B. J., Theeuwes, J., & Munoz, D. P. (2012). Interaction between visual- and goal-related neuronal signals on the trajectories of saccadic eye movements. Journal of Cognitive Neuroscience, 24(3), 707–717. http://doi.org/10.1162/jocn_a_00162 Wierda, S. M., van Rijn, H., Taatgen, N. A., & Martens, S. (2012). Pupil dilation deconvolution reveals the dynamics of attention at high temporal resolution. Proceedings of the National Academy of Sciences, 109(22), 8456–8460. http://doi.org/10.1073/pnas.1201858109 Wieser, M. J., McTeague, L. M., & Keil, A. (2011). Sustained preferential processing of social threat cues: bias without competition? Journal of Cognitive Neuroscience, 23(8), 1973–1986. http://doi.org/10.1162/jocn.2010.21566 Williams, C. C., & Henderson, J. M. (2007). The face inversion effect is not a consequence of aberrant eye movements. Memory & Cognition, 35(8), 1977–1985. http://doi.org/10.3758/BF03192930 Wojciulik, E., Kanwisher, N., & Driver, J. (1998). 
Covert visual attention modulates face-specific activity in the human fusiform gyrus: fMRI study. Journal of Neurophysiology, 79(3), 1574–1578. http://doi.org/9497433 Wu, D. W.-L., Bischof, W. F., & Kingstone, A. (2013). Looking while eating: The importance of social context to social attention. Scientific Reports, 3, 2356. http://doi.org/10.1038/srep02356 Wu, D. W.-L., Bischof, W. F., & Kingstone, A. (2014). Natural gaze signaling in a social context. Evolution and Human Behavior, 35(3), 211–218. http://doi.org/10.1016/j.evolhumbehav.2014.01.005 Yantis, S., & Jonides, J. (1984). Abrupt visual onsets and selective attention: Evidence from visual search. Journal of Experimental Psychology: Human Perception and Performance, 10(5), 601–621. http://doi.org/10.1037/0096-1523.10.5.601 197  Yantis, S., & Jonides, J. J. (1990). Abrupt visual onsets and selective attention: Voluntary versus automatic allocation. Journal of Experimental Psychology: Human Perception and Performance, 16(1), 121–134. http://doi.org/10.1037/0096-1523.16.1.121 Yarbus, A. L. (1967). Eye movements and vision. New York: Plenum Press. Yee, N., Bailenson, J. N., Urbanek, M., Chang, F., & Merget, D. (2007). The unbearable likeness of being digital: The persistence of nonverbal social norms in online virtual environments. Cyberpsychology & Behavior : The Impact of the Internet, Multimedia and Virtual Reality on Behavior and Society, 10(1), 115–121. http://doi.org/10.1089/cpb.2006.9984 Yin, R. K. (1969). Looking at upside-down faces. Journal of Experimental Psychology, 81(1), 141–145. http://doi.org/10.1037/h0027474 Yovel, G., & Kanwisher, N. (2005). The neural basis of the behavioral face-inversion effect. Current Biology, 15(24), 2256–2262. http://doi.org/10.1016/j.cub.2005.10.072 Zajonc, R. B. (1965). Social facilitation. Science, 149(3681), 269–274. http://doi.org/10.1126/science.149.3681.269 Zelinsky, G., & Bisley, J. (2015). 
Appendix

Appendix A  Chapter 6: Additional analyses

A.1 Experiment 1 recognition phase reaction time analysis

Reaction times were submitted to a repeated-measures ANOVA with face category (New, or previously viewed during Free Viewing (FV) or Don't Look (DL) encoding) and response accuracy (Correct, Incorrect) as within-subject factors, and DL instruction (DL: Eyes, DL: Mouth) and whether participants were informed of the recognition test (uninformed, informed) as between-subject factors. A response was considered correct if the participant categorized new images as New and images previously seen in the FV or DL encoding blocks as Old. Only participants who made at least one mistake were included in the RT analyses; this excluded 3 participants. Briefly, this analysis primarily shows that responses were faster for correct than for incorrect responses, and that FV images were responded to fastest, followed by DL images and then New images.

In full, there was a main effect of response accuracy, F(1,97) = 36.74, p < .001, with faster RTs for correct than for incorrect responses. Response accuracy interacted significantly with image type, F(1.96,190.19) = 6.16, p = .003, owing to significantly slower correct RTs for faces seen during a DL encoding phase than for those seen during FV, t(103) = 3.35, p = .001, and faster incorrect RTs for DL than for FV faces, t(102) = 2.79, p = .006.
Correct responses were also slower for DL than for New faces, t(103) = 2.13, p = .04, while incorrect responses were faster for DL than for New faces, t(101) = 2.23, p = .03. RTs to New faces did not differ from those to FV faces, both ps > .05. Returning to the main ANOVA, there was a significant interaction between image type and DL instruction, F(1.80,174.25) = 3.53, p = .04, though neither main effect was significant. This interaction reflected a significant effect of face category for the DL: Eyes group, F(1.87,95.23) = 3.57, p = .04, that just failed to reach significance for the DL: Mouth group, F(1.91,97.52) = 2.84, p = .07. For the DL: Eyes group, only New faces were responded to significantly faster than faces originally seen in the DL encoding phase, t(51) = 2.32, p = .03. These nuanced differences in RT do not countermand the logd discriminability results and were therefore of little interest to the primary goal of the paper.

A.2 Experiment 2 recognition phase reaction time analysis

All participants made at least one mistake and thus were included in the RT analyses. Analyses of RTs across face category (Attend: Eyes, Attend: Mouth, New) and accuracy (correct, incorrect) revealed only a main effect of accuracy, F(1,38) = 7.61, p = .009, with faster responses for correct than for incorrect responses.
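For readers unfamiliar with the repeated-measures ANOVA logic used throughout these analyses, the one-factor case can be sketched in a few lines of Python. This is a minimal illustration, not an analysis of the thesis data: the RT values below are hypothetical, standing in for four subjects' mean RTs across three face-category conditions (FV, DL, New).

```python
# Minimal one-way repeated-measures ANOVA.
# Partitions total variability into condition, subject, and residual
# (condition x subject) components, then forms F = MS_condition / MS_error.

def rm_anova_oneway(data):
    """data: one row per subject, one mean RT per condition.
    Returns (F, df_condition, df_error)."""
    n = len(data)      # number of subjects
    k = len(data[0])   # number of conditions
    grand = sum(sum(row) for row in data) / (n * k)
    cond_means = [sum(row[j] for row in data) / n for j in range(k)]
    subj_means = [sum(row) / k for row in data]

    ss_cond = n * sum((m - grand) ** 2 for m in cond_means)
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_error = ss_total - ss_cond - ss_subj  # residual after removing subjects

    df_cond = k - 1
    df_error = (k - 1) * (n - 1)
    f_stat = (ss_cond / df_cond) / (ss_error / df_error)
    return f_stat, df_cond, df_error

# Hypothetical mean RTs (ms) in FV, DL, New conditions for 4 subjects.
rts = [
    [500, 540, 560],   # subject 1
    [480, 520, 555],   # subject 2
    [510, 555, 570],   # subject 3
    [495, 530, 565],   # subject 4
]
F, df1, df2 = rm_anova_oneway(rts)
print(f"F({df1},{df2}) = {F:.2f}")
```

Removing between-subject variability (ss_subj) from the error term is what distinguishes this from a between-subjects ANOVA; the full analyses in A.1 extend the same idea to multiple within- and between-subject factors.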
