ATTENTIONAL ORIENTING IN A MEANINGFUL WORLD: BRAIN RESPONSES TO BEHAVIORAL RELEVANCE

by

Christine Marie Tipper

B.A., University of British Columbia, 2001
M.A., University of British Columbia, 2003

A DISSERTATION SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in The Faculty of Graduate Studies (Psychology)

THE UNIVERSITY OF BRITISH COLUMBIA
December 2007

© Christine Marie Tipper, 2007

Abstract

While it is known that primitive, low-level visual stimuli such as abrupt visual onsets or luminance changes can bias attentional orienting without willful intent on the part of the observer, comparatively little is known about how attention functions in rich, dynamic, meaningful contexts, such as those that comprise our everyday lives. The primary motivating hypothesis of this investigation is that, given our intrinsic needs as evolved social organisms, as well as our capability for behavioral flexibility, the attention system should be sensitive not only to low-level stimulus features, but also to complex stimuli that provide behaviorally relevant information. Three separate lines of research will be presented, each one providing a unique perspective on this issue. The first examined attentional orienting to socially relevant stimuli, finding that eye gaze serves as a particularly potent cue for attentional orienting, driving the cortical orienting network more robustly than non-social stimuli, and resulting in a larger attention-related modulation of the early visual processing of stimuli appearing at attended locations. The second line of inquiry investigated patterns of eye movements while participants viewed naturalistic navigational scenes, revealing a dynamic interplay of orienting to the various behaviorally relevant aspects of the scene. The third set of studies specifically addressed whether, given the relevance of heading information for guiding navigational behavior, there is evidence that attention can be oriented automatically to the heading point in an optic flow field simulating the patterns of visual stimulation that accompany self-motion. Together, the results converge on the conclusion that attention can be oriented automatically in a dynamic, flexible, and continuous manner on the basis of complex visual stimuli that provide behaviorally relevant information.

Table of Contents

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgements
Co-Authorship Statement
Chapter 1: Introduction
    Dissertation Overview
    Early Theories of Attention
    Attentional Capture
    Spatial Cueing
    Automatic Orienting to Meaningful Stimuli
    A New Direction for Attention Research
Chapter 2: Brain responses to biological relevance
    Introduction
    Experiment 1
        Methods
        Results
    Experiment 2
        Methods
        Results
    Experiment 3
        Method
        Results
    Meta-Analysis: Experiments 1-3
        Method
        Results
    General Discussion
        A Cortical Network for Reflexive Attention to Meaningful Stimuli
        Biologically Relevant Cues as Inherently Meaningful Stimuli
        Critical Considerations
    Conclusion
Chapter 3: Attentional Orienting in Dynamic Scenes
    Method
    Results
    Discussion
Chapter 4: Automatic Attentional Orienting to Optic Flow
    Optic Flow in the Control of Navigation
    Attention Modulates Dorsal Stream Processing
    Experiment 1
        Method
        Results
        Discussion
    Experiment 2
        Methods
        Results
        Discussion
    Experiment 3
        Method
        Results
        Discussion
    Summary of Experiments 1-3
Chapter 5: Attention to Optic Flow in a Visually Immersive Motion Simulation
    Experiment 1
        Method
        Results
        Discussion
    Experiment 2
        Method
        Results
        Discussion
    Summary of Experiments 1 and 2
Chapter 6: Is Attentional Orienting to Optic Flow Strongly Reflexive?
    Experiment 1
        Methods
        Results
        Discussion
    Experiment 2
        Methods
        Results
        Discussion
    Summary of Experiments 1 and 2
Chapter 7: General Discussion
    Summary
    Conclusions
    Implications
    Future Directions
References
Appendix A: Neural Processing of Optic Flow

List of Tables

Table 2.1: Mean RT for Experiment 1
Table 2.2: BOLD cluster maxima
Table 2.3: BOLD cluster maxima for eye gaze cue > arrow cue statistical contrast
Table 2.4: Response time data for Experiment 2
Table 2.5: Mean peak P1 ERP voltage amplitudes for Experiment 2
Table 2.6: Mean response times for Experiment 3
Table 4.1: Accuracy data for Experiment 1
Table 4.2: RT data for Experiment 1
Table 4.3: Peak P1 component amplitudes
Table 4.4: Accuracy data for Experiment 2
Table 4.5: Accuracy data for Experiment 3
Table 4.6: Peak P1 component amplitudes
Table 4.7: Peak C1 component amplitudes
Table 5.1: Accuracy data for Experiment 1
Table 5.2: Accuracy data for Experiment 2
Table 5.3: Peak C1 component amplitudes
Table 5.4: Peak P1 component amplitudes
Table 5.5: Peak N1 component amplitudes
Table 5.6: Mean P3 component amplitudes
Table 6.1: Accuracy data for Experiment 1
Table 6.2: Mean EOG amplitudes for Experiment 1
Table 6.4: Accuracy data for Experiment 2
Table 6.5: Predicted and observed ERP component amplitude modulations

List of Figures

Figure 2.1: Ambiguous cue stimulus
Figure 2.2: Attentional orienting network
Figure 2.3: Regions preferentially engaged by the eye gaze cue
Figure 2.4: Grand-averaged ERP waveforms from Experiment 2
Figure 3.1: Example frames from each scene
Figure 3.2: Categorical timing vectors for one individual
Figure 3.3: Categorical timing vectors averaged across participants
Figure 3.4: Inter-subject congruence
Figure 3.5: Normalized looking time
Figure 3.6: Number of samples
Figure 3.7: Average sample duration
Figure 3.8: Scene-related gaze activity
Figure 4.1: Virtual environment for Experiment 1
Figure 4.2: Grand-averaged ERP waveforms for Experiment 1
Figure 4.3: Virtual environment for Experiment 2
Figure 4.4: RT data for Experiment 2
Figure 4.5: RT data for Experiment 3
Figure 4.6: Grand-averaged ERP waveforms for Experiment 3
Figure 5.1: Virtual environment for large-field viewing
Figure 5.2: RT data for Experiment 1
Figure 5.3: RT data for Experiment 2
Figure 5.4: Grand-averaged ERP waveforms for Experiment 2
Figure 6.1: RT data for Experiment 1
Figure 6.2: Signal-averaged HEOG waveforms for Experiment 1
Figure 6.3: RT data for Experiment 2
Figure 6.4: HEOG waveforms for participants rejected from Experiment 2
Figure 6.5: HEOG waveforms for participants included in Experiment 2
Figure 6.6: Grand-averaged ERP waveforms for Experiment 2
Acknowledgements

This research would not have been possible without the guidance and support of both of my supervisors, Dr. Alan Kingstone and Dr. Todd Handy. Their mentorship and friendship over the past six years have meant the world to me. I am tremendously grateful for the financial support provided by the Michael Smith Foundation for Health Research and the Natural Sciences and Engineering Research Council of Canada during my graduate study. These fellowships have enabled my full dedication to this research, and have provided me the opportunity to travel to numerous conferences throughout Canada and the United States.

I would also like to thank all of my lab mates for making the countless hours spent in the lab an absolute joy. I will never forget the inspiring discussions, the laughter, and the collaborative spirit that made my graduate school experience the best time of my life. The experiments presented here would not have been possible without the programming genius of Ryan Lett, my colleague, my friend, and my partner, who did everything from filming videos to programming simulations to standing by me during this past year of hard work. I cannot thank Ryan enough for all the encouragement he has provided.

Finally, I would like to acknowledge my Mom and Dad for the incredible love and support they have given me over the years. Without them, this document would never have been written. Dad, you have always been my inspiration and my strength. You made me want to be a scientist. And Mom, your kindness, understanding, and absolutely unconditional faith in me gave me the confidence to pursue my dream. I thank you both from the bottom of my heart.

Co-Authorship Statement

I am the primary author for all experiments presented in this dissertation. Chapter 2 is drawn from a manuscript currently in press in the Journal of Cognitive Neuroscience, co-authored by Dr. Todd Handy, Dr. Barry Giesbrecht, and Dr. Alan Kingstone. Chapters 3-6 arose out of collaborative discussions with Dr. Walter Bischof, Dr. Alan Kingstone, and Dr. Todd Handy.

Chapter 1: Introduction

As we go about our daily activities, we are unfazed by the tremendous complexity of the neural operations underlying our perception and behavior. Our ability to represent the world around us in a meaningful, coherent way, and to act accordingly to satisfy our goals and manage the demands of the environment, reflects the organizational power of our attentional processes. Even mundane tasks, such as crossing a busy street, looking for a friend in a crowd, strolling along a forest trail, or making a cup of tea, necessitate the ability to continuously take in and interpret sensory information, and to respond flexibly while adapting to changing circumstances in a dynamic world.

As I will review, the study of visual attention to date has focused largely on how visual attention is oriented either to inherently salient visual stimuli such as abrupt visual events in the periphery, to locations or objects having unique primitive stimulus features, or to particular basic elements of the visual scene that are relevant to an ongoing task. This empirical focus on how attention is oriented to primitive, low-level stimulus features and abrupt, salient events has arisen out of the understandable attempt to achieve precise experimental control.
While visual attention can certainly select visual information based on these low-level stimulus features, what has been comparatively neglected is a consideration of whether selection can also be based on higher-level stimulus attributes. As a result, very little is known about how visual attention actually operates in the rich, complex, dynamic conditions that define our everyday lives. In this dissertation, I will explore whether (and how) attention is oriented to more complex visual stimuli that are pertinent to observers not necessarily on the basis of their primitive, low-level visual features, but rather on the basis of the significance of the information they convey to the observer as a social, biological organism that has evolved within and adapted to a particular environmental context.

Our capacity for perceptual and cognitive processing is limited, so our processing resources must be deployed efficiently from moment to moment if we are to negotiate our surroundings, satisfy our behavioral goals, and monitor and adapt to continually changing conditions. It is imperative, then, that there be a neural system in place to select the sensory information most pertinent to a given situation. The visual system, for example, is capable of encoding far more sensory information than we can analyze, interpret, and respond to fully at any given moment. Visual attention is the cognitive function that enables the selection of certain visual information while filtering out other visual information. In order for this selection to be useful for guiding coherent perception and coordinated behavior, the appropriate visual information must be selected. Depending on the particular behaviors in which one is engaged, certain aspects of the visual world will be more pertinent than others. While the distance between two stepping stones may be important information for motor planning systems to have access to while one negotiates a soggy garden path, the colour of the flowers alongside the path is far less so. In this example, the distance information is relevant to coordinating safe and effective locomotive behaviors; the colour of the flowers is not. Alternatively, however, consider a painter searching through her box of paints for a particular cerulean blue. In this situation, colour information becomes essential for completing the task at hand.

Visual attention selects which aspects of visual information are perceived, analyzed, interpreted, and used to guide behavior. It is therefore imperative that the attention system be 1) sensitive to visual information that is particularly pertinent to constructing a coherent perception of one's surroundings and organizing appropriate behaviors, and 2) flexible as to what is treated by the system as pertinent at any given time. The primary motivating hypothesis of this investigation is that, given both our intrinsic needs and abilities as evolved, social, biological organisms, as well as our capacity for vast cognitive and behavioral flexibility, a great deal of the visual information that is of particular relevance to human cognition and behavior is conveyed by complex visual stimuli not fully represented along unique feature channels during early sensory processing. Importantly, then, the attention system should be sensitive to such complex stimuli that are relevant to an observer in a given behavioral context.
Dissertation Overview

Chapter 2 addresses the finding that we attend automatically to locations indicated by the eye gaze of others. It has been suggested that the eyes are a meaningful social cue and, as such, serve as a particularly potent visual stimulus. An active debate in attention research centers on whether there are specialized neural systems in place for orienting attention to locations indicated by eye gaze. The experiments presented in Chapter 2 were designed to assess whether eyes are in fact "special", in the sense that orienting attention to eye gaze stimuli engages neural mechanisms distinct from those engaged while orienting to non-social cues.

Chapter 3 goes on to explore patterns of oculomotor activity under free viewing conditions in realistic, dynamic scenes. Eye gaze activity was recorded and analyzed in order to assess how people view movies depicting locomotion through everyday environments from a first-person perspective. Gaze activity was analyzed categorically in order to determine whether there are systematic patterns in the types of scene elements people look at most, and how these elements are visually sampled. This analysis provided insights into the kinds of visual stimuli that may be of particular interest to observers in naturalistic conditions. This relatively open-ended approach represents an important methodological advance over previous research limiting the investigation of human attention to the artificial, impoverished stimuli normally utilized in laboratory research.

Chapters 4-6 propose the optic flow field generated during self-motion through the environment as a complex stimulus providing meaningful information that is relevant to us as organisms having a stable, physically constrained relationship with the environment. The experiments presented in Chapters 4-6 address whether attention can be influenced by the visual flow of information in dynamic scenes simulating visual input akin to that encountered during self-motion in depth.

The dissertation begins with an introduction that lays out the historical foundations upon which attention research has been grounded. Early theories of visual attention, and the key empirical results from which those theories arose, will be described. Studies investigating the selection of visual information on the basis of primitive stimulus features will be outlined. I will discuss in detail the spatial cueing paradigm, the primary methodology researchers have used to gain an understanding of the mechanisms of attention, and summarize the experimental results that define our current understanding of the electrophysiological correlates of attention. Finally, I will describe the results of more recent investigations suggesting that the attention system is sensitive to complex, meaningful stimuli. These more recent studies provide a starting point for the present investigation, which directly tests 1) whether there are specialized neural systems for orienting in response to meaningful eye gaze stimuli, and 2) whether attention is oriented automatically to other types of socially or behaviorally relevant stimuli.

Early Theories of Attention

Contemporary attention research had its beginnings in information processing theory, which gained tremendous momentum during the communications and computer technology boom of the mid-twentieth century. A central premise of information processing theory was that information was transmitted via individually coded signals.
Therefore, transmitting more information required transmitting more coded signals. Early information processing research established fundamental findings indicating that our cognitive processing systems can deal appropriately with only a certain amount of information at any given time (Crossman, 1953; Hick, 1952; Hyman, 1953; Merkel, 1885). This emerging notion that human information processing was of a limited capacity was exemplified by the shadowing experiments conducted in the 1950s. In these experiments, participants were required to listen to two spoken messages at once and repeat one of them aloud. When the two messages were presented from the same speaker, and spoken in the same voice, this task was very difficult. When the messages were presented to different ears, or spoken in different voices, however, the task became much simpler (Cherry, 1953; Egan, Carterette, & Thwing, 1954). These results suggested that the to-be-repeated message was selected for processing based on some basic stimulus feature, such as the tone of the voice or the spatial location of its presentation. Importantly, the selection of one message resulted in the failure to fully process the other; participants could report very little of the to-be-ignored message. Thus, the concept of selective attention emerged as a means of describing how the human information processing system deals with competing or interfering signals in the face of a given set of perceptual or behavioral needs and a limited capacity for information processing.

The work of Broadbent (1958) followed directly from the shadowing studies of the early 1950s. Broadbent's theory of attention suggested that the solution to our limited processing capacity was a filtering mechanism that passed on only selected information channels for processing at any given time. Prior to the filter, sensory information underwent a rudimentary analysis of basic, low-level stimulus features such as colour, orientation, or spatial location, which were transmitted in parallel within unique information channels. Based on this low-level analysis, certain sensory information would be selected and passed through the filter for further analysis of higher-level properties, such as form and semantic meaning. The important point here is that the attentional filter was seen as operating at the level of basic stimulus information.

Although the earliest attention research focused on dichotic listening and shadowing studies, most attention researchers over the past fifty years have focused on visual attention. Narrowing the focus of attention research to the visual modality has made the study of attention more tractable by enabling comparison across studies, by simplifying the precise control of input available to research participants, by disentangling the notion of attention from perception, awareness, and working memory, and, by virtue of the visual system being the best-understood sensory system, by allowing the examination of the effects of attention on sensory processing itself.

Treisman and Gelade's (1980) influential feature integration theory (FIT) of visual attention divides conscious visual perception into two stages: a pre-attentive stage in which the primitive features that make up the visual scene, such as colour, brightness, edges, and orientation, are analyzed, and an attentive stage in which these primitive features are bound to form meaningful perceptual objects.
Processing in the pre-attentive stage is thought to occur in parallel, with numerous primitive feature dimensions coded neurally and simultaneously across the entire visual field. Visual attention, however, selects a restricted portion of the visual field in order to bind primitive features into more complex representations for further analysis. The classic empirical support for FIT comes from visual search experiments demonstrating drastically different response time functions for locating a target amongst distractors depending on the specific characteristics of the display. A feature, or pop-out, search function occurs when the target differs from distractors along a single primitive feature dimension, such as colour. For example, when searching for a black 'X' amongst an array of yellow 'X's, the black 'X' will "pop out" regardless of how many yellow distractors there are. This results in a flat search function in which response times for target identification remain the same as the number of distractors is increased. A conjunction search function, on the other hand, is characterized by a linear increase in response times for target identification as the number of distractors increases. This occurs when the target of a visual search must be distinguished from the distractors through a conjunction of at least two feature dimensions. For example, the time needed to find a black 'X' in an array of black 'O's and yellow 'X's depends on how many distractors are present, indicating the need to search serially through the display in order to find the target.

The theoretical claim of FIT most pertinent to the present dissertation is that while the processing of primitive feature dimensions can occur pre-attentively, the processing of more complex stimuli defined by a conjunction of these primitive features requires focused attention. Critically, this implies that attentional selection occurs on the basis of primitive visual features. This premise, along with Broadbent's more general two-stage model of information processing, is evident throughout the attention research literature.

Attentional Capture

Visual search experiments became widely used in attempting to determine what kinds of basic stimulus features would pop out of a display. Feature and conjunction search functions provided quantitative evidence relating to whether a given stimulus in a search display was visually attention-grabbing. Yantis and Jonides (1984) provided evidence that an abrupt onset acts as a particularly salient singleton stimulus, having the ability to capture attention. Participants were asked to indicate as quickly as possible whether a target letter was present or absent in an array of similar stimuli. On each trial, participants were presented with a single letter indicating the identity of the target stimulus, followed by a fixation stimulus, and then the onset of a search display consisting of either two or four items. Search display items were masked at the onset of the display, and the masks were gradually removed to reveal the identity of the display items without any abrupt stimulation. At some interval following the gradual removal of the masks, an abrupt onset stimulus occurred that could be either the target or a distractor. The target was detected much more quickly when it was presented as an abrupt onset than when it was presented gradually.
In addition, when the target was an abrupt onset, the search function was flat (i.e., a feature search), meaning the target was detected equally quickly regardless of the display size. In contrast, when the target was presented gradually, the time to detect it increased with increasing display size (i.e., a conjunction search). These results were interpreted as indicating that an abrupt onset singleton is treated by the visual system as a salient primitive stimulus feature by which visual attention is captured obligatorily, without intention on the part of the participant.

Theeuwes (1991, 1992) subsequently provided evidence, using similar experiments, that static feature singletons such as display elements unique in colour, form, or intensity could capture attention. Several authors, however, have suggested that primitive dynamic stimuli, such as an abrupt onset, discrete object motion, or a change in a unique feature dimension, are particularly potent signals for attentional capture (Abrams & Christ, 2003, 2005; Franconeri & Simons, 2003, 2005; Franconeri, Hollingworth, & Simons, 2005; Jonides & Yantis, 1988; Remington, Johnston, & Yantis, 1992; Theeuwes, 1995; Von Muhlenen, Rempel, & Enns, 2005; Yantis, 1993a; Yantis & Hillstrom, 1994).

In their initial study demonstrating attentional capture by abrupt onsets, Yantis and Jonides (1984) discussed how the underlying neurophysiology may yield a special status for abrupt onsets within the visual system, enabling the automatic capture of attention regardless of intentional task goals. In particular, they cite the existence of two classes of ganglion cells in the retina: those that respond maximally to abrupt, transient, or movement stimuli, and those that respond maximally to sustained stimuli (e.g., Cleland, Levick, & Sanderson, 1973; Enroth-Cugell & Robson, 1966). In addition, these cell types are unevenly distributed across the retina, with cells responding to sustained input concentrated at the fovea, and cells responding to transient stimuli more evenly distributed across the retina (Fukada & Stone, 1974). This retinal organization provides a neurophysiological basis for the visual system to process abrupt onsets and luminance changes as unique feature channels (Franconeri, Hollingworth, & Simons, 2005).

Von Muhlenen, Rempel, and Enns (2005) weighed in on the special status of abrupt feature changes for capturing attention, suggesting that any visual feature can capture attention, provided that it undergoes a discriminable change over time. The defining characteristic of a strong, salient visual signal capable of capturing attention, they argued, is a unique change in one or more stimulus features that stands out against an otherwise calm background. They found that colour, motion, and onset singletons captured attention strongly when changes in these features occurred as temporally unique events, but were much less likely to do so when occurring in conjunction with the onset of the whole display. They argued that these results were consistent with a unique event hypothesis of attentional capture, which emphasizes the importance of the visual system's neural sensitivity to novelty across many feature dimensions in determining what stimuli will capture attention. They refer to the tuning of the visual system for the detection of novel or changing stimuli as a "general biological readiness to detect novelty" (p. 985).
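The contrast between flat and linearly increasing search functions that runs through these capture studies can be made concrete with a small numerical illustration. The sketch below is not drawn from any of the experiments reviewed here; the intercept and per-item cost are invented parameters used only to show how search slope is used to diagnose whether focused attention is needed to find the target.

```python
# Illustrative parameters only; these are not estimates from the studies
# reviewed in this chapter.
BASE_RT_MS = 450.0      # hypothetical intercept (perceptual + motor time)
ITEM_COST_MS = 30.0     # hypothetical time to inspect one item serially

def feature_search_rt(set_size):
    """Parallel 'pop-out' search: predicted RT is flat across set sizes."""
    return BASE_RT_MS + 0.0 * set_size

def conjunction_search_rt(set_size):
    """Serial self-terminating search: on average half of the items are
    inspected before the target is found, so RT rises linearly."""
    return BASE_RT_MS + ITEM_COST_MS * set_size / 2.0

if __name__ == "__main__":
    for n in (4, 8, 16, 32):
        print(f"set size {n:2d}: feature {feature_search_rt(n):.0f} ms, "
              f"conjunction {conjunction_search_rt(n):.0f} ms")
```

Plotting these predictions against set size reproduces the diagnostic pattern described above: a near-zero slope when the target is defined by a single feature or unique event, and a positive slope when it must be found by conjoining features.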
Although there is some debate as to which types of primitive stimulus features capture attention, it is clear from these studies that attention can be oriented automatically on the basis of low-level stimulus attributes. Numerous researchers, however, have questioned whether automatic attentional capture is truly impervious to modulation by an observer's intentions, task goals, or strategies. For example, Bacon and Egeth (1994) presented evidence suggesting that the strategy used to complete a search task affected whether or not capture would occur. Their data showed that a color singleton would only capture attention if the task required detection of a singleton target. In addition, Bacon and Egeth (1997) suggested that task instructions allowed participants to restrict visual search to a particular subset of display elements sharing a particular feature, such as color (see also Yantis & Jonides, 1990). This view was consistent with Cave and Wolfe's (1990) guided search theory. Essentially an elaboration of FIT, guided search theory stated that goal-driven selection strategies could influence the early parallel computation of a salience map, thus playing a role in defining what primitive features would capture attention. Folk, Remington, and Johnston (1992) proposed the similar idea of "contingent capture", in which volitional, strategic criteria, referred to as "attentional control settings", play a role in controlling automatic orienting to particular stimulus features; a schematic sketch of this idea appears at the end of this section.

Combined, the attentional capture literature indicates that in the context of a visual search task, display items that are unique in some primitive feature dimension, or that undergo an abrupt feature change, can draw attention automatically to their location. Whether capture occurs in a given situation, however, is subject to the particular task conditions, stimulus displays, and strategies employed. Importantly, the goals of attentional capture research have largely been to establish whether there are certain basic visual features that are given priority by the attentional system, and to investigate the degree of control we have over automatic orienting to simple salient stimuli.

While visual search experiments have provided a means of assessing what kinds of simple visual stimuli capture attention, the act of orienting itself has been investigated largely with the spatial cueing approach made famous among attention researchers by Posner (1980). There is little doubt that the cueing paradigm is a powerful methodological tool for studying the spatial and temporal dynamics of attentional orienting. As I will review, however, this line of research has also focused on very simple, low-level visual stimuli as the basis for attentional selection.
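Before turning to the cueing paradigm, the contingent-capture idea mentioned above can be illustrated schematically. The sketch below is a toy formalization for illustration only, not an implementation of any model from Folk, Remington, and Johnston (1992) or Bacon and Egeth (1994); the item properties and the matching rule are assumptions chosen purely to show how an attentional control setting can gate which singletons capture attention.

```python
from dataclasses import dataclass

@dataclass
class DisplayItem:
    color: str          # e.g., "red", "gray"
    abrupt_onset: bool  # whether the item appeared abruptly

def captures_attention(item: DisplayItem, control_setting: dict) -> bool:
    """Toy contingent-capture rule: an item draws attention only when it
    matches a property listed in the current attentional control setting."""
    if item.color == control_setting.get("target_color"):
        return True
    if control_setting.get("target_is_onset") and item.abrupt_onset:
        return True
    return False

# When observers search for a red target, a red distractor captures attention
# but an abrupt-onset gray distractor does not; the reverse holds when the
# task is to find an abrupt onset.
color_setting = {"target_color": "red"}
onset_setting = {"target_is_onset": True}
red_distractor = DisplayItem(color="red", abrupt_onset=False)
onset_distractor = DisplayItem(color="gray", abrupt_onset=True)

print(captures_attention(red_distractor, color_setting))    # True
print(captures_attention(onset_distractor, color_setting))  # False
print(captures_attention(onset_distractor, onset_setting))  # True
```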
Spatial Cueing

Posner (1978) proposed that the cognitive processing of stimuli in the brain occurs via functionally isolable systems. Under this view, a stimulus is initially represented through the automatic, parallel activation of numerous independent codes. Physical codes represent the physical features of a stimulus, such as shape, size, colour, orientation, or spatial location. Semantic codes represent information such as whether a text stimulus is a letter or a number, a consonant or a vowel, or a word or a non-word. Attention serves to select one activated code, or pathway, for further analysis, facilitating the processing of stimuli that activate the same codes and inhibiting processing in unselected pathways, owing to the limited capacity and selective nature of attentional resources (Posner & Snyder, 1975).

Spatial cueing experiments followed from this initial work investigating the costs and benefits of attentional allocation. Posner indicated a need to simplify the tasks used to study the mechanisms of attention as much as possible in order to eliminate the influence of learned associations or the possibility for strategic variation. Such factors were a concern for Posner (1978) because the linguistic stimuli he initially employed were complex in the sense that they were represented by multiple stimulus codes. Given his aim to precisely control the processes involved in selective attention in isolation from the influence of past experience, Posner developed the spatial cueing paradigm. This procedure made use of stimuli stripped to their simplest possible form.

The typical attentional cueing experiment involves having participants fixate a central point in a display before a spatial cue is presented. At some variable interval following the spatial cue, a target stimulus appears, at which point the participant must make a detection or discrimination response. The purpose of the spatial cue is to allow the attentional selection of the cued spatial location. The primary objectives of these experiments were to determine whether or not responses to simple targets would be faster when presented at a cued location than at an uncued location, and to chart the time course of this attentional selection for cued locations. In one of the first experiments of this sort (Posner et al., 1978), participants were presented with an arrow cue predicting the likely location of the target (80% validity) or a neutral plus sign at fixation one second before a target flash occurred in either the left or right periphery. The results indicated that when participants knew the likely location of the target, they were about 25 ms faster to respond to the target than when the target was preceded by a neutral plus sign. Subsequent experiments utilizing signal detection analyses confirmed that the RT benefit was the result of increased perceptual sensitivity at the cued location, not just a strategic change in the response criterion for cued targets (Bashinski & Bacharach, 1980; Bonnell, Possamai, & Schmitt, 1987; Downing, 1988).
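The logic of those signal detection analyses can be illustrated with a short example. The detection rates below are invented for illustration and are not taken from the cited studies; the computation itself is the standard signal detection decomposition, in which sensitivity (d') and response criterion (c) are estimated from hit and false-alarm rates.

```python
from statistics import NormalDist

def sdt_indices(hit_rate, false_alarm_rate):
    """Standard signal detection indices: sensitivity d' = z(H) - z(FA) and
    response criterion c = -0.5 * (z(H) + z(FA))."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(false_alarm_rate)
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))
    return d_prime, criterion

# Hypothetical detection rates (invented for illustration): targets at the
# cued location are detected more often at the same false-alarm rate.
for label, hits, false_alarms in [("cued", 0.88, 0.10), ("uncued", 0.75, 0.10)]:
    d_prime, criterion = sdt_indices(hits, false_alarms)
    print(f"{label:>6}: d' = {d_prime:.2f}, c = {criterion:.2f}")
```

A larger d' at cued than at uncued locations, with an essentially unchanged criterion, is the pattern taken to indicate a genuine perceptual-sensitivity benefit rather than a mere shift in willingness to respond.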
This logic has formed the basis for a large body of spatial cueing research; flashing a salient, spatially non-predictive peripheral cue has become the standard means of eliciting an automatic shift in spatial attention. Although RT is facilitated for targets presented to attended locations regardless of whether attention is oriented to those locations volitionally or automatically, the two types of orienting give rise to different response time profiles, suggesting differences in how and when attentional movements are made (Jonides, 1981; Muller & Rabbit, 1989; Klein, Kingstone, & Pontefract, 1993; Rafal & Henik, 1994; Klein, 1994). The temporal profile of the cueing effect has been assessed by comparing RTs to cued and uncued targets presented at a range of cue-target intervals. A RT benefit for a cued relative to an uncued target at a given point in time is indicative of attention having been oriented to the cued location at that time. The timing of the emergence and decline of RT facilitation for cued targets is different depending on whether attention has been oriented volitionally via predictive central cues or automatically via non-predictive peripheral cues. Orienting elicited automatically by spatially non-predictive peripheral cues is understood to occur more quickly than orienting controlled volitionally based on predictive cues presented centrally (Remington, 1978; Muller & Rabbit, 1989; Cheal &  15 Lyon, 1991). The effects of volitional orienting have been found to reach their optimal level approximately 300 ms following the onset of the predictive cue and persist for a second or more (Muller & Findlay, 1988). Cueing effects reflecting automatic orienting, on the other hand, were found in these early studies to arise within the first 100 ms following a peripheral cue, and fade out after about 300 ms following the onset of the cue (Yantis & Jonides, 1984). Given that cueing effects reflecting volitional orienting to predictive central cues arise more slowly than cueing effects in response to salient nonpredictive peripheral cues, and that spatially non-predictive cues provide no explicit reason to attend to any one location, the rapid onset of a cueing effect (i.e. 100 ms or less) in response to a spatially non-predictive cue is considered to be indicative of an automatic shift in attention to the cued location. This pattern of automatic orienting being manifested more rapidly following the onset of a cue is paralleled in electrophysiological event-related potential (ERP) studies. These experiments demonstrate that the time-course of attentional enhancements in the early perceptual processing of target stimuli presented at attended locations depends on whether attention is oriented automatically or volitionally. An attention-related enhancement of early visual processing in extrastriate cortical regions indexed by the P1 ERP component (Clark & Hillyard, 1996; Fu, Greenwod, & Parasuraman, 2005) is routinely found at short cue-target intervals following non-predictive peripheral cues (Hopfinger & Mangun, 1998, 2001; Fu et al., 2005; Martinez et al., 2001), and longer cue-target intervals following predictive central arrows (Anllo-Vento & Hillyard, 1996; Clark & Hillyard, 1996; Eason, 1981; Fu, Greenwod, & Parasuraman, 2005; Gomez et al., 1994; Handy & Mangun, 2000; Handy et al., 2001; Handy & Khoe, 2005; Heinze et  16 al., 1994; Hillyard & Anllo-Vento, 1998; Hopfinger & West, 2006; Luck et al., 1994; Mangun, Hillyard, & Luck, 1993; van Voorhis & Hillyard, 1977). 
The time course for attention-related modulations of the visual-evoked P3 waveform, which indexes cognitive information processing beyond the occipital cortex (Duncan-Johnson & Donchin, 1982), is also dependent on whether attention is oriented via peripheral or central cues, following the same pattern as the P1 modulation (Hoffman et al., 1985; Hopfinger & Mangun, 1998, 2001; Mangun & Hillyard, 1991b). The distinct time courses for both behavioral and ERP measures of cueing effects brought about by central predictive and peripheral non-predictive cues have been widely accepted as indicating distinct automatic and volitional modes of orienting (Muller & Rabbitt, 1989). Although there is ongoing debate (see Hopfinger & West, 2006), the general consensus is that automatic and volitional orienting are executed by separate, interacting attentional systems (Mayer et al., 2004). This conclusion has been based largely upon neuroimaging results suggesting that different neural circuits may be involved in orienting to central predictive and peripheral non-predictive cues (Klein, 2004).

In summary, spatial cueing studies have yielded fundamental insights regarding the functioning of visual attention. However, they have exclusively utilized cues and targets consisting of the most basic visual stimuli, including boxes flashing in the periphery or centrally presented predictive arrows as cues, and small dots or asterisks appearing briefly as targets. While these basic stimuli have certainly benefited the study of attentional orienting by removing the potential confound of complex stimuli activating stored associations, the trade-off has been that attention researchers know comparatively little about how the attention system responds to more complex stimuli. Only recently have a few studies begun to indicate that attention may be oriented automatically not only on the basis of abrupt changes in low-level visual features, but also on the basis of complex visual stimuli represented at a higher level in the visual system.

Automatic Orienting to Meaningful Stimuli

The first two decades of research using the spatial cueing paradigm led to the conclusion that abrupt peripheral onsets or luminance cues could induce automatic orienting, but symbolic cues presented at fixation could not (e.g., Jonides, 1981). More recent research, however, has demonstrated that visual attention can be driven automatically to peripheral locations by a cue presented centrally when that cue is strongly associated with a particular spatial representation. For example, arrow cues presented at fixation induce a rapid (i.e., within 100 ms of the onset of the cue) attentional shift to the visual field to which they point, even when participants are informed that the arrows do not reliably predict the location of the target (Tipples, 2002). In addition, Fischer et al. (2003) showed that digits presented centrally induce attentional orienting to spatial locations corresponding to the locations those digits would occupy on a "mental number line": lower digits produced an attentional shift to the left, and higher digits produced an attentional shift to the right.
Although this cueing effect was not observable until 300 ms following the onset of the number cue, the fact that it occurred even though the number in no way predicted the location of the target, and that it occurred without any explicit instruction to attend to one location or another, suggested that attention was oriented automatically in response to the spatial (number-line) representation evoked by the digit cue. Combined, these results suggest that spatially meaningful stimuli, that is, stimuli evoking well-learned spatial representations, may drive attentional orienting without intention on the part of the viewer, even when presented at a central location.

Such results highlight a growing concern in the field of attention research that the distinction between volitional and automatic attentional orienting, as it has been framed traditionally, may be artificial. Typically, it has been assumed that volitional orienting is triggered endogenously, while automatic orienting is triggered exogenously. Strictly speaking, endogenous means "arising from within", and exogenous means "arising from outside". This distinction implies, then, that any given shift of attention occurs as a result of one of these triggers or the other, that is, either as a result of an observer's intention to direct attention to a particular location, or as a result of external stimulation by a physically salient stimulus. Recent results suggesting that attention can be oriented automatically on the basis of learned associations evoked by central cues, however, call into question the validity of this dichotomy. The findings that arrows (Tipples, 2002) and digits (Fischer et al., 2003) presented centrally caused participants to orient attention automatically to the periphery provide compelling demonstrations that internal contributions such as over-learned spatial associations can in fact induce automatic, unintentional shifts of attention.

There is also a growing body of evidence in support of the idea that attention may be oriented automatically based on the significance that a stimulus holds for us as evolved biological organisms. It is now well known, for example, that a symbolic averted eye gaze cue presented centrally elicits an automatic shift of attention to the gazed-at location (Friesen & Kingstone, 1998; Ristic, Friesen, & Kingstone, 2002). This finding has been established using a modified version of Posner's spatial cueing task, in which a pair of eyes gazing to the left or right was presented centrally and followed by a response target that appeared in the left or the right periphery. Importantly, participants were correctly informed that the direction of gaze did not reliably predict the location of the impending target. Response times to the target were faster when it was presented at the gazed-at location, relative to the opposite location, indicating an automatic attentional cueing effect. This cueing effect began very quickly following the presentation of the eye gaze cue (within 100 ms) and was relatively short-lived (about one second), two characteristics that had previously been associated with automatic orienting (Kingstone et al., 2000; Langdon & Smith, 2005). Cueing effects for gazed-at locations occur even when participants are informed that the target is far more likely to appear at the location opposite the gazed-at location (Driver et al., 1999; Friesen, Ristic, & Kingstone, 2004). This provides strong evidence for the automaticity of orienting to a gaze cue.
The finding that an eye gaze cue could induce automatic attentional orienting when presented at fixation suggested that eye gaze cues may have unique properties for guiding attention spatially, owing to their status as biologically relevant stimuli (Friesen & Kingstone, 1998). The idea that eye gaze cues trigger attentional orienting as a result of their potential significance for us as social biological organisms is given some indirect empirical support by data indicating a more robust response for mutual gaze stimuli (making direct eye contact) than for averted gaze stimuli (looking away) in regions of the superior temporal sulcus (STS; Pelphrey et al., 2004). The authors hypothesized that the more robust response was the result of mutual gaze being a powerful route to communication, and thus a more socially meaningful, and therefore biologically relevant, stimulus than averted gaze.

Since the initial finding by Friesen and Kingstone (1998) that symbolic eye gaze cues could drive spatial shifts in attention, there has been a plethora of research addressing whether eye gaze cues hold a special status as an attentional cue (e.g., Hietanen & Leppänen, 2003; Langton & Bruce, 1999). Ristic, Friesen, and Kingstone (2002) directly tested this question by comparing the behavioral response profiles for orienting to centrally presented, spatially non-predictive eye gaze and arrow cues. They found that both types of cues produce equivalent automatic attentional orienting effects in healthy adults and preschoolers. A subsequent experiment conducted with a split-brain individual, however, demonstrated that both hemispheres in the split-brain patient oriented to arrow cues. This finding, coupled with an earlier split-brain study demonstrating that only the face-processing hemisphere oriented to eye gaze cues (Kingstone et al., 2000), suggested to the authors that while the behavioral signatures of attentional orienting to eye gaze and arrow cues are indistinguishable, the two processes seem to be mediated by different neural systems.

Additional evidence that orienting to eye gaze may represent a unique form of attentional orienting was provided by the finding that the temporal response profile associated with attentional orienting to eye gaze cues is somewhat different from that observed for orienting to abrupt peripheral cues. Specifically, Kingstone et al. (2000) found that the ability of an eye gaze cue to facilitate target detection RT does not fade out as quickly following the presentation of the cue as it would following an abrupt peripheral stimulus. Since a rapid decline of facilitation has traditionally been considered a hallmark of automatic attentional orienting to abrupt peripheral cues (Friesen & Kingstone, 2003), its absence despite the rapid onset of a cueing effect suggests there may be unique neural systems underlying automatic attentional orienting to gaze cues.

While there is little doubt that eye gaze can serve as a powerful cue for attention, there is growing evidence that it is not unique in this ability. Head turning and finger pointing are examples of biologically relevant social cues that also bring about automatic shifts in attention (Langton & Bruce, 1999; Langton, Watt, & Bruce, 2000).
In addition, emotionally-negative stimuli such as angry facial expressions among happy or neutral expressions are also detected more quickly than happy expressions among angry or neutral expressions (Eastwood, Smilek, & Merikle, 2001; Hansen & Hansen, 1988; Ohman, Flykt, & Esteves, 2001). This effect has been qualified by Hunt and colleagues (Hunt, Cooper, Hungr, & Kingstone, 2007), who demonstrated that while emotionally-valent stimuli in general (both angry and happy faces) could capture attention, they would only do so if an emotional expression was the target of visual search. There was no evidence in this study of an attentional preference for threatening as opposed to positive stimuli. There is also some evidence to suggest that fear-provoking stimuli may bring about attentional orienting automatically. Threatening stimuli commonly associated with phobias, such as snakes and spiders, are more rapidly detected in a search display than are non-threatening stimuli such as mushrooms and flowers (Ohman, Flykt, & Esteves, 2001). Such stimuli are not social in nature, but nevertheless carry biological relevance for humans as evolved organisms. Together, these studies provide evidence that eye gaze is not the only complex, biologically relevant stimulus that can drive automatic attentional orienting. Most of the research pointing to the idea that attention can be oriented automatically to complex, relevant stimuli has focused on orienting to biologically relevant stimuli. One might imagine, however, that there are other types of meaningful stimuli that could serve as potent signals for attentional orienting. For instance, recent work suggests that an implicit recognition of an object's motor affordances can elicit an automatic attentional shift to its location. Handy and colleagues (Handy et al., 2003) have shown that inherently graspable objects, such as a doorknob or a screwdriver, can capture attention. On each trial in their experiment, participants were presented with a central fixation point, flanked on each side by a simple line drawing of an everyday object, while they performed a target discrimination task. Importantly, the flanker objects could be graspable, such as a hammer or a coffee cup, or un-graspable, such as a house or a car tire. A target stimulus, either a horizontal or vertical grating, would appear over one of the two objects. Critically, the location of the target was not predictable based on the object display. In the critical condition, one of the objects was inherently graspable, while the other was not. ERP measures indicated that the amplitude of the P1 component associated with the target stimulus was larger when the target appeared over a graspable object than when it appeared over a non-graspable object, but only for graspable objects in the right visual field. The authors interpreted this visual field asymmetry in attentional capture by a graspable object to be the result of the left hemisphere's jurisdiction over object-specific ideomotor representations automatically activated by graspable stimuli.

A New Direction for Attention Research

As we have seen, there are many types of primitive stimuli that can grab or drive attentional orienting, including new objects, abrupt changes, and stimuli inducing high-intensity changes along particular feature channels in the visual system, such as luminance, motion, or colour.
In addition to these simple salient events, however, there seems to be growing evidence to suggest that more complex stimuli, represented at a higher level of perceptual processing in the visual system, might also have the ability to drive attentional orienting automatically. Indeed, it seems that attention can be oriented automatically to meaningful (e.g., arrows), biologically relevant (e.g., eye gaze), or behaviourally relevant (e.g., graspable objects) stimuli. Such results lead one to suspect that there may be other types of complex stimuli that provide useful, behaviourally relevant visual information and serve as potent cues for the automatic orienting of attention. Formal theories of attention, however, have largely overlooked this possibility in favour of rigorous experimental control. Following three decades of research investigating attentional orienting to rudimentary visual stimuli, it is now important that studies of human attention take into account the significance of various complex stimuli that may be meaningful for human beings as social organisms engaged actively and interactively with their environments (e.g. Kingstone et al., 2003). This dissertation represents a step in this direction.

Chapter 2: Brain responses to biological relevance

In Chapter 2, I present a series of experiments that directly tests whether automatic orienting in response to biologically relevant eye gaze stimuli is underlain by a neural system distinct from that involved in automatically orienting in response to arrow cues. As reviewed in the introduction to this dissertation, the finding that attention could be oriented automatically in response to eye gaze cues provided one of the first demonstrations that a spatially non-predictive cue presented centrally could elicit automatic orienting (Friesen & Kingstone, 1998). This result suggested that the ability of an eye gaze cue to drive attentional orienting from a central location may owe to the special status of eye gaze as a socially meaningful, biologically relevant stimulus. Subsequent studies, which will be reviewed in the introduction to Chapter 2, suggested that this seemingly unique form of attentional orienting might be underlain by a specialized neural system. The more recent finding that automatic orienting could also be elicited by spatially non-predictive arrow cues presented centrally (Tipples, 2002), however, called into question the special status of eye gaze as an attentional cue. The studies presented in Chapter 2 provide a direct test of this question. Combined, the results of Chapter 2 suggest that while eye gaze cues do serve as a particularly potent cue for attentional orienting, the same neural system underlies automatic orienting to biologically relevant eye gaze cues and automatic orienting to another type of meaningful stimulus, namely arrows. This chapter is drawn from a manuscript that has been accepted for publication in the Journal of Cognitive Neuroscience (Tipper, Handy, Giesbrecht, & Kingstone, in press).

Introduction

Most of us have had the experience of trying to carry on a conversation with someone who looks away distractedly. When this happens, it is often difficult to continue the conversation because your attention is diverted to whatever your conversation partner is looking at.
Far from being anecdotal, this phenomenon – that one's visual attention can be directed reflexively to locations indicated by another's eye gaze – is well documented (Friesen & Kingstone, 1998; Langton & Bruce, 1999; Frischen & Tipper, 2004). These studies have given rise to the theoretical claim that eye gaze is a particularly powerful, "special" cue for visuospatial attention. While there is evidence that contextual information such as head orientation and body movements modulates attention to gaze direction (Langton, Watt & Bruce, 2000), the importance of the eyes themselves as a social cue is hard to deny. The special status of eye gaze as a cue for spatial attention may owe, at least in part, to specialized neural systems for processing eye gaze information. The superior temporal sulcus (STS) has been implicated in numerous studies as a region specialized for processing eye gaze (Allison, Puce, & McCarthy, 2000; Hoffman & Haxby, 2000; Kingstone, Tipper, Ristic, & Ngan, 2004; Perrett et al., 1985). More recent neuroimaging work suggests that specific regions within the STS may be specialized not only for the processing of eye gaze information, but also for the processing of several forms of biological motion, including mouth, eye, and hand movements (Pelphrey et al., 2005; Pelphrey & Morris, 2006). Although there is ongoing debate regarding the specificity of STS functionality, there is growing consensus that the STS plays an integral role in the perception of social cues in particular, rather than simply any directional stimuli (Hooker et al., 2003). Nevertheless, while the STS is an important player in the perceptual processing of eye gaze, the question remains open whether there are neural systems specialized for orienting attention to this biologically-based social cue. The use of biologically-based cues in shifting visual attention, what we will call social attention, provides information regarding one's surroundings even in the absence of direct visual perception. The social extension of one's own attentional and perceptual reach would have constituted a beneficial cognitive adaptation in an ancestral environment rife with inter-group conflicts and predators with far greater strength and speed. This evolutionary argument for specialized mechanisms mediating social attention is supported by the finding that monkeys and humans may share a homologous neural mechanism for social attention (Deaner & Platt, 2003). The present study examines whether specialized neural mechanisms facilitate the orienting of attention to social cues in humans. Specifically, we asked whether visuospatial attentional orienting to directional biological cues (eyes) engages neural mechanisms distinct from those engaged by orienting to directional nonbiological cues (arrows). We will refer to the former as social cues, and the latter as nonsocial cues. One possibility is that orienting to social cues does utilize specialized neural modules. In patients with visuospatial neglect, for example, gaze direction cues can induce shifts in attention to regions of space to which these patients cannot otherwise attend (Vuilleumier, 2002). In addition, while both hemispheres in a split-brain patient were recruited in orienting attention to arrow stimuli, only the predominant face-processing hemisphere was engaged while orienting attention to eye gaze stimuli (Kingstone et al., 2000).
Consistent with these patient studies, one recent fMRI study  27 reported that while arrow-cues engaged areas of frontal and parietal cortex typically involved in volitional orienting, gaze cues did not (Hietanen et al., 2006). These results suggest that attentional orienting to eye gaze may utilize neural mechanisms distinct from those needed for orienting in response to non-social stimuli, such as arrows. A second possibility, however, is that the same neural mechanisms subserve attentional orienting to any meaningful or symbolic stimulus. This possibility is consistent with behavioral studies demonstrating equivalent orienting to centrally-presented eyes and arrows (Ristic, Friesen & Kingstone, 2002; Tipples, 2002; Quadflieg et al., 2004). To test between these competing hypotheses, we asked participants to view a perceptually ambiguous object that could be interpreted either as an eye in profile (Figure 2.1, looking to the right), or an arrowhead (Figure 2.1, pointing to the left). By instructing participants to alternate between viewing this object as an eye or an arrow while they underwent fMRI scanning, we were able to compare the neural mechanisms of attentional orienting to social and non-social cues while holding the physical cue stimulus constant. If there are specialized neural modules for orienting to social cues such as eye gaze, then we would expect to find greater blood oxygen level-dependent (BOLD) activity in these regions when viewing the ambiguous object as an eye relative to when viewing that same stimulus as an arrow. If, on the other hand, both eye gaze and arrow cues utilize the same orienting network, we would expect both percepts to equally engage a fronto-parietal orienting network (Corebetta & Shulman, 2002).  28  Figure 2.1: Ambiguous cue stimulus The stimulus was designed such that it could be viewed either as an arrowhead, in this case pointing to the left, or as an eye in profile, in this case gazing to the right. How the participants viewed this ambiguous shape was alternated via instructions.  Experiment 1 While undergoing fMRI scanning participants viewed the centrally-presented object as either an eye or an arrow. Periodically, we instructed participants to switch how they perceived the object. The results of a pilot study ensured that participants were able to maintain each percept with equal ease, and switch percepts effectively when instructed to do so (and see Experiment 3 for direct behavioral evidence supporting these claims). Participants responded with a button press as quickly as possible when they saw an asterisk appear in the left or right periphery either 100 ms or 600 ms following the onset of the cue. On half of the trials, the target appeared at the location to which the eye gazed or the arrow pointed (cued). On the other half of trials, the target appeared at the opposite location (uncued). Because participants could not predict the location of the target based on the central object, there was no explicit reason for participants to attend to one location or the other in response to the central cue stimulus. Shorter response latencies to targets appearing at the cued relative to the uncued location could therefore be interpreted  29 as indicating a reflexive, or automatic, shift in visuospatial attention in the direction cued by the central object (Posner, 1980).  
Methods Participants Eight neurologically healthy, right handed participants (mean age 23.75 years, 3 females) from the University of British Columbia took part in the study with written consent. All participants had normal, or corrected-to-normal vision. Experimental procedures were approved by the University of British Columbia Clinical Research Ethics Board. Participants were remunerated with structural images of their brains on CD.  Stimuli and Task Visual stimuli were presented to participants via rear projection through the scanner bore onto a mirror, which reflected the image to the participant. Experimental stimuli consisted of a central fixation point, centrally-presented cues and peripherallypresented response targets. The cue was always presented at fixation. The response target was simply an asterisk presented to the left or the right of center. All stimuli were black shapes presented on a white background. The task was to fixate centrally on a small point, and to actively perceive the cue stimulus according to instruction (either as an eye or as an arrow). In addition, participants were told to press a response button as quickly and as accurately as possible when they saw the target appear. Importantly, the direction of the reflexive attentional  30 shift induced by the cue should vary depending on whether it is being viewed as an eye or as an arrow.  Procedure At the beginning of the testing session, participants were instructed to see the cue stimulus as either an eye or as an arrow. They were not given any indication at the outset that the cue stimulus could be viewed any other way. Halfway through the testing session, however, participants were instructed to switch their perceptual set in order to see the cue as the alternative object; successful perceptual switching could be confirmed via the pattern of response times to the targets as a function of their visual field and the orienting direction of the percept (see Experiment 3). The testing session was divided into four functional scanning runs, including two consecutive runs viewing the cue as an eye, and two consecutive runs viewing the cue as an arrow. Whether the cue was first viewed as an eye or as an arrow was counterbalanced across participants. In all cases participants were informed that cue direction did not reliably predict the target location. Each scanning run consisted of 93 trials, which included 64 cue-target trials (in which both a cue and a target occurred), 20 catch trials (in which a cue was not followed by a target), and 9 fixation only trials (in which neither a cue nor a target occurred) that lasted a duration of either one, two or three TRs. Each cue-target and catch trial began with a small fixation point appearing for 750 ms, at which time it would be replaced by the cue stimulus. For cue-target trials, either 100 ms or 600 ms following the onset of the cue, the target would appear on the left or the right of the cue. The target stayed on the screen for the remainder of the trial, which  31 lasted for 2250 ms, irrespective of when the response was made. The screen then blanked for a 750 ms inter-trial interval (ITI). Half of all cue-target trials were cued trials, in which the target would appear at the gazed-at or pointed-at location, and the other half of trials were uncued trials, in which the target would appear at the opposite location. An equal number of left gazing/pointing and right gazing/pointing cues were presented randomly in each experimental condition.  
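To make the trial structure just described concrete, the following sketch assembles one run's worth of trials (64 cue-target, 20 catch, and 9 fixation-only trials). It is not the actual Presentation script used in the experiment; the field names, the shuffling scheme, and the way fixation-trial durations are assigned are illustrative assumptions.

```python
import random

def build_run(seed=0):
    """Sketch of one run: 64 cue-target, 20 catch, and 9 fixation-only trials."""
    random.seed(seed)
    trials = []
    # 64 cue-target trials: 2 cue directions x 2 SOAs x cued/uncued, 8 trials each
    for direction in ("left", "right"):
        for soa in (100, 600):
            for cued in (True, False):
                for _ in range(8):
                    target = direction if cued else ("right" if direction == "left" else "left")
                    trials.append({"type": "cue-target", "cue_dir": direction,
                                   "soa_ms": soa, "target_side": target})
    # 20 catch trials: a cue is presented but no target follows
    for _ in range(20):
        trials.append({"type": "catch", "cue_dir": random.choice(("left", "right"))})
    # 9 fixation-only trials lasting one, two, or three TRs (TR = 2000 ms)
    for n_tr in (1, 2, 3) * 3:
        trials.append({"type": "fixation", "duration_ms": n_tr * 2000})
    random.shuffle(trials)  # pseudo-random intermixing for the event-related design
    return trials

run = build_run()
print(len(run), "trials;", sum(t["type"] == "cue-target" for t in run), "cue-target")
```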
Functional MRI Acquisition and Image Processing Functional MRI data were collected on a research-dedicated Phillips 3-T system fitted with a SENSE head coil. Conventional spin-echo T1-weighted sagittal localizers were used to view the positioning of the participant’s head and to set the functional image volumes to be acquired in line with the AC-PC plane. Functional image volumes were collected with an EPI gradient echo pulse sequence (TR = 2000 ms, TE = 30 ms, 90º flip angle, FOV 240 x 240 mm, 128 x 128 matrix, 62.5 kHz bandwidth, 1.86 x 1.88 mm online-reconstructed in-plane resolution, down-sampled to a 3.00 x 3.00 mm final measured in-plane resolution, 3.00 mm slice thickness, 1.00 mm slice gap, 36 interleaved transverse slices), which is sensitive to BOLD contrast. A TTL pulse from the scanner started the scanning run such that the scanner and the visual presentation of stimuli were synched for timing. The run consisted of four initial dummy scans (to allow time to achieve steady-state magnetization) and 166 functional scans. After participants completed four functional runs, a high resolution T1weighted structural image was acquired with a 3D gradient echo pulse sequence (TR =  32 shortest, TE = shortest, 8º flip angle, FOV 256 x 256 mm, 256 x 256 matrix, 1.00 x 1.00 mm in-plane resolution, 1.00 mm slice thickness, 170 slices). The task was programmed and presented on a personal computer running Presentation software (Neurobehavioral Systems, San Francisco, CA). An event-related fMRI design was used, which allowed the pseudo-random intermixed presentation of cued, uncued, and catch trials, as well as fixation-only trials. Responses were collected on a fiber-optic MR-compatible response device (Lightwave Medical, Vancouver, BC). Stimulus and response timing was recorded in a logfile that was later analyzed to generate behavioral response times. Functional images were reconstructed on-line. Statistical Parametric Mapping software (SPM2, Wellcome Institute of Cognitive Neurology, London, UK) was used for image orientation, motion correction, and spatial normalization into modified Talairach anatomical space. A low-pass filter (high frequency cutoff = 6.25 s) implemented in MATLAB (The MATHWORKS Inc., Natik, MA) was applied to the data prior to statistical interrogation in order to eliminate high-frequency noise confounds not associated with the BOLD effect.  Statistical Analyses SPM2 was used to construct a 2 x 2 x 3 fixed-effects general linear model for analyzing the group-wise data, with Percept (eye vs. arrow), Cue Direction (left vs. right), and Cueing (cued vs. uncued vs. catch) as factors. Each condition was modeled as a set of events time-locked to the onset of the cue and convolved with a synthetic hemodynamic response function. Temporal derivative regressors were also included in the model.  33 Specific BOLD effects of interest were examined by creating linear contrasts of the parameter estimates for each condition. The linear contrasts resulted in a t-statistic for each voxel (unit of fMRI spatial resolution), which could then be assessed for statistical significance across the whole brain, thresholded at p < 0.05, corrected for multiple comparisons. In order to identify the cortical networks associated with reflexive attentional orienting to centrally-presented spatially non-predictive cues, we looked at the BOLD response to Eye Cues and Arrow Cues independently. 
In order to ascertain whether there are any differences in attentional networks for orienting to eyes as opposed to arrows, we looked at the relative BOLD effects (i.e., Eye Cues > Arrow Cues).  Results Behavior Table 2.1 shows mean response times. Regardless of whether the cue was perceived as an eye or an arrow, responses to cued targets were faster than responses to uncued targets. This behavioral cueing effect did not vary as a function of SOA or percept order. An ANOVA with Percept (eye vs. arrow), Cueing (cued vs. uncued), and SOA (100 ms vs. 600 ms) as within-subject factors, and Percept Order (eye first vs. arrow first) as a between-subject factor established the statistical significance of these findings. The main effect of Percept was not significant [F (1, 7) = 1.5, p > 0.05], indicating no overall difference in response times between eye and arrow percepts. The main effect of Cueing was significant [F (1, 7) = 29.5, p < 0.05], as was the main effect of SOA [F (1, 7) = 10.0, p < 0.05]. Neither the Cueing factor nor the SOA factor interacted with any other  34 factors (all p’s > 0.05). Given that both percepts gave rise to equivalent reflexive attentional orienting, the question then was whether or not these two types of attentional cues would engage distinct cortical mechanisms.  Table 2.1: Mean RT for Experiment 1 Mean RTs (ms) and standard errors (ms) for each condition are listed. The results indicate faster responses for cued relative to uncued targets at both short and long cuetarget intervals, regardless of whether the cue was perceived as an arrow or as an eye. These equivalent attentional orienting effects occurred despite the fact that participants were assured that the cue did not reliably predict the location of the impending target.  Cue Type Eye Gaze Arrow  Cued Uncued Cued Uncued  SOA 100 ms Mean 428.78 460.97 417.37 427.91  SE 28.79 34.62 31.62 38.67  600 ms Mean 363.55 381.85 348.65 374.78  SE 15.75 15.96 18.84 19.38  fMRI We conducted a two-part analysis of the fMRI data. First, we looked at BOLD responses to the eye gaze and arrow cues independently. This allowed us to identify the cortical regions subserving attentional orienting to each type of cue. Second, we directly compared BOLD responses to eye and arrow cues in order to identify regions having a differential response to the two types of cue. Both analyses were time-locked to the onset of the cue stimulus, and included cued trials only. The cortical regions in which the BOLD response increased significantly (p < 0.05, corrected) with the presentation of the central object are shown in Figure 2.2. The BOLD responses to eye and arrow cues were similar. For both cue types, posterior  35 regions of activity included large clusters in bilateral intraparietal sulcus, superior parietal lobule, and the temporal-parietal junction, including the inferior parietal lobule, and the superior temporal gyrus. In more anterior brain regions, there were significant clusters in bilateral dorsal frontal cortex, including the middle frontal and superior frontal gyri. Bilateral anterior clusters were also found more ventrally in the superior temporal gyrus. Prominent BOLD responses in occipital cortex, extending into posterior ventral temporal regions were also observed. Table 2.2 provides coordinates and t-values for some of the local maxima in each of these regions. 
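The general linear model logic outlined in the Statistical Analyses section, in which cue-locked events are convolved with a canonical hemodynamic response function and conditions are compared via linear contrasts such as Eye Cues > Arrow Cues, can be illustrated schematically as follows. This is a sketch, not the SPM2 implementation; the double-gamma HRF parameters, the onset times, and the single-voxel simulation are assumptions made for illustration only, and the temporal derivative regressors are omitted for brevity.

```python
import numpy as np
from scipy.stats import gamma

TR = 2.0  # seconds, matching the acquisition described above

def canonical_hrf(tr=TR, duration=32.0):
    """Double-gamma approximation of the canonical hemodynamic response."""
    t = np.arange(0, duration, tr)
    peak = gamma.pdf(t, 6)           # positive response peaking around 6 s
    undershoot = gamma.pdf(t, 16)    # later undershoot
    hrf = peak - 0.35 * undershoot   # relative undershoot weight is an assumption
    return hrf / hrf.max()

def regressor(onsets_s, n_scans, tr=TR):
    """Convolve a stick function of cue onsets with the HRF."""
    sticks = np.zeros(n_scans)
    sticks[(np.asarray(onsets_s) / tr).astype(int)] = 1.0
    return np.convolve(sticks, canonical_hrf(tr))[:n_scans]

# Hypothetical cue-onset times (s) for the two percept conditions in a 166-scan run
eye_onsets, arrow_onsets = [10, 40, 90, 150], [25, 60, 120, 200]
X = np.column_stack([regressor(eye_onsets, 166),
                     regressor(arrow_onsets, 166),
                     np.ones(166)])                  # constant term

# Fit the GLM to one simulated voxel time series and form the Eye > Arrow contrast
y = X @ np.array([1.2, 1.0, 100.0]) + np.random.default_rng(1).normal(0, 1, 166)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
contrast = np.array([1, -1, 0])                      # Eye Cues > Arrow Cues
print("contrast estimate:", contrast @ beta)
```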
Although orienting to eye and arrow cues activated largely similar cortical regions, we were interested in directly probing the question of whether any cortical regions were uniquely associated with attentional orienting to eye gaze cues. The results of the relative BOLD contrast (p < 0.05, corrected) showing regions responding more vigorously to eye cues than arrow cues are depicted in Figure 3. Four clusters were identified, including two clusters in bilateral middle occipital gyri, cluster on the ventral surface of the right medial frontal gyrus, and a cluster in the right inferior precentral gyrus. Table 2.3 lists coordinates and t-values for the most highly activated voxel in each of these clusters. When we conducted the reverse contrast, we found no regions with a significantly greater BOLD response for arrows than for eyes.  36 Figure 2.2: Attentional orienting network Independent analyses of the BOLD response to eye gaze and arrow cues are shown here. A general linear model approach (see Methods) was used to identify BOLD activity specific to the processing of the cue. Activation maps, representing voxels with a tstatistic greater than 4.63 (p < 0.05, corrected), are overlaid on the SPM2 single-subject rendered brain template (MNI). Largely the same network was activated while orienting attention to eye gaze and arrow stimuli. R = right hemisphere, L = left hemisphere, r = rostral, c = caudal, a = anterior, p = posterior, IFG = inferior frontal gyrus, MFG = middle frontal gyrus, SFG = superior frontal gyrus, PreCG = precentral gyrus, Ins = insula, IPL = inferior parietal lobule, PoCG = postcentral gyrus, SPL = superior parietal lobule, STG = superior temporal gyrus, MTG = middle temporal gyrus, MOG = middle occipital gyrus, IOG = inferior occipital gyrus.  37 Table 2.2: BOLD cluster maxima BOLD cluster maxima for eye gaze and arrow cues, p < 0.05, corrected for multiple comparisons. Regions listed are represented in Figure 2. R = right hemisphere, L = left hemisphere, r = rostral, c = caudal, a = anterior, p = posterior, IFG = inferior frontal gyrus, MFG = middle frontal gyrus, SFG = superior frontal gyrus, PreCG = precentral gyrus, Ins = insula, IPL = inferior parietal lobule, PoCG = postcentral gyrus, SPL = superior parietal lobule, STG = superior temporal gyrus, MTG = middle temporal gyrus, MOG = middle occipital gyrus, IOG = inferior occipital gyrus. Region Frontal RIFGr RIFGc RMFGa RMFG RMFGp RSFG LIFGr LIFGc LMFGa LMFGp LPreCG LIns Parietal RIPL LPoCG LSPL LIPL Temporal RSTG RMTG Occipital RMOGp RMOGa RIOG LMOGp LMOGa LIOG  Eye Gaze Cue x,y,z Coordinates (mm)  t-score  Arrow Cue x,y,z Coordinates (mm)  t-score  52 48 36 52 36 20 -56 -56 -36 -32 -32 -40  12 16 56 36 0 56 12 12 52 -4 -20 8  32 -4 16 28 64 -16 28 -4 24 48 68 0  6.42 5.32 6.86 7.34 10.03 7.41 8.21 4.68 6.25 10.89 8.30 6.46  52 48 48 52 36 -56 -48 -40 -36 -32 -40  12 16 48 36 0 12 12 44 -4 -20 8  32 -4 16 28 64 28 -4 20 48 68 0  7.55 8.05 5.76 4.78 10.97 p > 0.05 11.01 5.05 8.52 12.31 12.08 8.41  24 -60 -24 -28  -76 -16 -64 -40  56 24 56 44  11.43 6.41 12.87 12.06  40 -60 -24 -44  -44 16 -64 -32  44 24 56 40  10.89 8.73 7.88 12.51  48 52  -40 16  8 -4  7.75 5.92  48 52  -40 16  8 -4  4.88 6.53  32 48 44 -32 -44 -44  -92 -72 -76 -92 -76 -84  12 -8 8 8 -12 -8  11.29 14.72 15.88 14.94 14.10 16.14  32 48 44 -32 -44 -44  -92 -72 -76 -92 -76 -84  12 -8 8 8 -12 -8  10.06 14.63 6.41 11.74 13.77 10.69  38 Figure 2.3: Regions preferentially engaged by the eye gaze cue. 
Regions exhibiting a larger BOLD response (t > 4.63, p < 0.05, corrected) for the eye gaze percept than the arrow percept are shown here. Activation maps are overlaid on two slices of the SPM single-subject T1 image template (MNI). Four clusters responded more vigorously while attending to the eye gaze cue than while attending to the arrow cue. Panel A shows a cluster in the right inferior precentral gyrus. Panel B shows 3 clusters, one on the ventral surface of the right medial frontal gyrus, and two located in bilateral middle occipital gyri. R = right hempisphere, L = left hemisphere, MFG = middle frontal gyrus, PreCG = precentral gyrus, MOG = middle occipital gyrus.  39 Table 2.3: BOLD cluster maxima for eye gaze cue > arrow cue statistical contrast. Data reported here surpass the statistical threshold of p < 0.05, corrected for multiple comparisons. Regions listed are represented in Figure 2.3. R = right hempisphere, L = left hemisphere, MFG = middle frontal gyrus, PreCG = precentral gyrus, MOG = middle occipital gyrus. Region Frontal RMFG RPreCG Occipital RMOG LMOG  Eye Gaze > Arrow Cue x,y,z Coordinates (mm)  t-score  16 36  60 0  -4 28  5.57 7.18  28 -24  -100 -100  -4 4  6.27 5.14  Experiment 2 The finding that lateral occipital cortex showed a larger BOLD response for eye gaze than for arrow cues was interesting given that the physical stimulus for each type of cue was identical. One possible explanation for the enhanced occipital activation is that an eye gaze cue may be particularly effective for enhancing visual sensory processing for stimuli appearing at gazed-at locations (Hopfinger & Ries, 2005). That is, although both eye and arrow cues induce reflexive shifts in spatial attention, and utilize largely the same cortical regions to do so, eye gaze cues may be associated with a larger visual sensory gain effect in lateral occipital cortex than arrow cues. In order to test this possibility, a second experiment was conducted with an independent group of participants using eventrelated potentials (ERPs). An attention-related sensory gain effect is characterized by a larger-amplitude P1 ERP component in response to the onset of a visual stimulus when that stimulus is presented at an attended (cued) location than when it is presented at an unattended  40 (uncued) location (cf. Mangun & Hillyard, 1991). If indeed the more robust BOLD response in occipital cortex was caused by a larger sensory gain effect, then we would expect to find a larger difference in P1 amplitudes for cued and uncued targets when the attention-orienting stimulus is perceived as an eye than when it is perceived as an arrow. The stimuli and task used in Experiment 2 were identical to those employed in Experiment 1, with the exception of some changes made to the timing of stimulus presentation in order to facilitate ERP data collection.  Methods Participants Thirteen neurologically healthy, right handed participants from the University of British Columbia took part in the study with written consent. The data from two participants were discarded due to technical problems leading to excessive noise in the EEG, and a failure to evoke a distinguishable P1 ERP component. Of the remaining eleven participants, 5 were female, and the mean age was 20.55 years. All participants had normal or corrected-to-normal vision. Experimental procedures were approved by the University of British Columbia Behavioral Research Ethics Board. Participants were remunerated with ten dollars per hour of their time.  
Stimuli and Task The stimuli and task employed were nearly identical to those used in Experiment 1. On most trials, a cue stimulus was presented, followed shortly by an asterisk target. The directionality of the cue did not predict the location of the target. There were a few  41 changes made in order to accommodate the requirements of an ERP study. The display was presented on a 17-inch CRT monitor at a viewing distance of approximately 100 cm. In addition, the timing of the stimulus presentation differed from that used in Experiment 1. The interval between the cue and target was randomly selected on each trial from a rectangular distribution between 500 and 700 ms. A long inter-trial interval, randomly varied between 2400 and 2600 ms was added for the purpose of obtaining accurate baseline measures for event-related potentials.  Procedure Participants were fitted with an elastic cap containing an array of 31 tin electrodes (Electro-Cap International, Eaton, OH). Half of the participants were instructed to perceive the ambiguous cue shape as an arrow; the other half were instructed to see it as an eye. After ten trial blocks, participants were then instructed to see the cue stimulus as the other possible shape, and another ten blocks were run. Each block lasted approximately 4 minutes, and consisted of 30 trials, including 28 cue-target trials, and 2 catch trials, in which no target occurred. EEG data were collected from 24 scalp electrodes sites (FP1, FP2, FZ, F7, F8, CZ, C3, C4, T3, T4, P1, P2, PZ, P5, P6, PO1, PO2, OZ, OL, OR, P3, P4, T5 and T6) using a Grass Instruments Model 12 amplifier, referenced to the left mastoid. Three additional channels were recorded, one from the right mastoid (for off-line referencing of the data to the average of the two mastoid signals) one from a pair of electrodes mounted on the outer canthi of each eye (to record horizontal eye movements) and one from below the right eye (to record vertical eye movements and blink artifacts). EEG was amplified  42 with a gain of 50,000 and a half-amplitude bandpass of 0.1 to 30 Hz. Data were digitized at 256 Hz. Off line, trials with eye movement artifacts were flagged and not included in any subsequent analysis. ERP waveforms were digitally re-referenced to the average of the left and right mastoids, and low-pass filtered (25.6 half-amplitude cutoff) prior to analysis. Peak amplitude measures for the P1 waveform were obtained by identifying the latency of the P1 peak for each condition of interest in the grand-averaged waveforms, and obtaining the voltage measure at that latency within each participant. All statistical analyses and waveform displays were conducted with a -100 to 0 ms pre-stimulus baseline.  Results Behavior Table 2.4 shows mean response times and standard errors. The results indicate faster responses to cued relative to uncued targets regardless of whether the cue was perceived as an eye or an arrow. Response time data were analyzed with an ANOVA, with Percept (eye vs. arrow), and Cueing (cued vs. uncued) as within-subject factors. The main effect of Percept was not significant, [F (1, 10) = 0.5, p > 0.05], indicating no overall difference in response times between eye and arrow percepts. There was a significant main effect of Cueing [F (1, 10) = 9.7, p < 0.05]. The interaction of Percept and Cueing was not significant (p > 0.05), replicating the finding from Experiment 1 that eye and arrow cues induce equivalent attentional orienting effects.  
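Before turning to the electrophysiological results, the P1 peak-measurement procedure described above, in which the P1 latency is identified in the grand-averaged waveform and each participant's voltage is then read off at that latency, can be sketched as follows. The 80-160 ms search window, the simulated waveforms, and the single-electrode data format are illustrative assumptions rather than the actual analysis code.

```python
import numpy as np

FS = 256                                # sampling rate (Hz), as described above
T = np.arange(-100, 300, 1000 / FS)     # epoch time base in ms

def baseline_correct(epoch_uv, t=T):
    """Subtract the mean of the -100 to 0 ms pre-stimulus interval."""
    return epoch_uv - epoch_uv[(t >= -100) & (t < 0)].mean()

def p1_peak_measure(subject_erps_uv, t=T, window=(80, 160)):
    """
    subject_erps_uv: array (n_subjects, n_timepoints) of condition-averaged,
    baseline-corrected ERPs at one occipital electrode. Returns per-subject
    amplitudes at the P1 peak latency identified in the grand average.
    """
    grand = subject_erps_uv.mean(axis=0)
    in_win = (t >= window[0]) & (t <= window[1])
    peak_idx = np.where(in_win)[0][np.argmax(grand[in_win])]  # grand-average P1 latency
    return subject_erps_uv[:, peak_idx], t[peak_idx]

# Hypothetical data: 11 subjects, a P1-like deflection near 120 ms plus noise
rng = np.random.default_rng(2)
erps = np.array([baseline_correct(2.0 * np.exp(-((T - 120) ** 2) / (2 * 25 ** 2))
                                  + rng.normal(0, 0.3, T.size)) for _ in range(11)])
amps, latency = p1_peak_measure(erps)
print(f"P1 latency {latency:.0f} ms; mean amplitude {amps.mean():.2f} microvolts")
```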
43 Electrophysiology ERP waveforms time-locked to target onset and averaged across participants are plotted in Figure 2.4. These data indicate that while both eye gaze and arrow cues produced behavioral cueing effects, an attention-related sensory gain effect was present only for eye gaze cues. Peak amplitude values in the P1 time window (Table 2.5) were submitted as the dependent variable in a 2 x 2 x 2 x 2 ANOVA with Percept (eye vs. arrow), Cueing (cued vs. uncued), Visual Field of the target (left vs. right), and Electrode (ipsilateral vs. contralateral) as within-subject factors. A significant 3-way interaction between Percept, Cueing, and Electrode [F (1, 10) = 4.9, p = 0.05] was followed up with simple effects analyses. These analyses indicated that the 3-way interaction was driven by the presence of a significant sensory gain effect (greater P1 amplitudes for cued relative to uncued targets) for the eye gaze percept at electrode sites contralateral [F (1, 50) = 4.6, p < 0.05], but not ipsilateral [F (1, 50) = 0.4, p > 0.05], to the visual field of target presentation. There was no statistically significant sensory gain effect for the arrow percept at either contralateral [F (1, 50) = 0.03, p > 0.05] or ipsilateral [F (1, 50) = 0.6, p > 0.05] electrode sites. The results are consistent with the hypothesis that attentional orienting in response to an eye gaze cue is particularly effective at highlighting sensory information being processed at the gazed-at location.  44  Table 2.4: Response time data for Experiment 2 Mean RTs (ms) and standard errors (ms) for each condition are listed. The results show statistically equivalent attentional orienting effects for arrow and eye gaze percepts.  Cue Type Eye Gaze Arrow  Cued Uncued Cued Uncued  SOA 500 - 700 ms Mean 276.31 281.08 264.46 276.44  SE 14.67 16.41 14.03 14.99  Table 2.5: Mean peak P1 ERP voltage amplitudes for Experiment 2. Mean peak P1 ERP voltage amplitudes and standard errors, in microvolts, are listed. Results are averaged over target location. A significant Percept x Cueing x Electrode interaction indicates a larger sensory gain effect at contralateral electrode sites for eye gaze cues than for arrow cues.  Cue Type Eye Gaze Arrow  Cued Uncued Cued Uncued  Electrode Contralateral Mean 1.84 1.03 1.50 1.57  SE 0.59 0.53 0.53 0.64  Ipsilateral Mean 3.02 3.26 2.95 2.66  SE 0.43 0.45 0.44 0.43  45  Figure 2.4: Grand-averaged ERP waveforms from Experiment 2. Event-related potentials recorded at left and right lateral occipital electrode sites (OL and OR, respectively), time-locked to the onset of the peripheral target stimulus, were averaged across subjects. The time window shown spans from 100 ms pre-target to 300 ms post-target. The vertical axis crosses at the 0 ms time point, at which the target occurred. Tick marks along the vertical axis represent ± 2 µV. The first large positive deflection in the waveform represents the P1 ERP component, peaking approximately 120 ms post-target. Shaded boxes represent the conditions in which a significant sensory gain effect (greater P1 amplitudes for cued relative to uncued targets) was observed. The results indicate the presence of a sensory gain effect in response to eye gaze, but not arrow cues at contralateral electrode sites.  Experiment 3 To date, all studies comparing the neural mechanisms of attentional orienting in response to eye gaze and arrow stimuli have used physically distinct stimuli. 
While this is an obvious and unavoidable fact of comparing the orienting response to realistic  46 depictions of distinct objects, it is important to note that it represents an inherent methodological confound for controlled investigations. Specifically, when comparing the orienting response to eye gaze and arrow cues with a study that utilizes physically distinct cues (e.g., Hietanen et al., 2006), one can never be certain of whether observed differences arise because of differences in the physical stimulus parameters, or due to differences in the meaningful (semantic) representations of those stimuli. In the present study, having participants perceive the same physical stimulus as either an eye or an arrow enabled a direct comparison of attentional orienting to eye gaze and arrow cues without confounding different cues with different stimulus attributes. However, the use of a perceptually ambiguous cue stimulus poses its own procedural and inferential challenges, where it is of primary importance to design a stimulus that 1) could be viewed either as an eye or an arrow with equal ease, 2) would allow perceptual switches without any negative transfer, and 3) would produce equivalent attentional orienting. While the data from Experiments 1 and 2 support the conclusion that we have met these criteria, one might argue that the relatively small sample sizes used in the present study undermined the ability to detect significant differences in the effects of eye gaze and arrow percepts in Experiments 1 and 2. To address this concern, we conducted Experiment 3; a behavioral study designed to 1) ensure that switching percepts without negative carryover effects is in fact possible, and 2) replicate the behavioral findings of Experiments 1 and 2 with a larger group of participants enabling more power. Experiment 3 specifically addressed the issue of whether participants are able to switch their perception of the ambiguous cue-stimulus without negative transfer effects. We tested two groups of participants; one group experienced an exact replication of the  47 experimental design used in Experiment 1, in which the cue percept was switched once halfway through testing; and one group experienced a different design in which the percept was switched several times during testing. The logic here is that if switching percepts interferes with the orienting effect to eye gaze and/or arrow percepts, these negative transfer effects will be more pronounced when participants switch percepts more frequently, leading to differences in the pattern of results between the single-switch and multi-switch groups.  Method Participants Seventeen neurologically healthy, right-handed participants from the University of British Columbia took part in the study with written consent. The data from one participant was discarded due to a technical failure leading to the loss of a large proportion of response time data. Of the remaining sixteen participants, 9 were female, and the mean age was 21.32 years. Participants were assigned to one of two groups. The single-switch group was instructed to switch percepts only once, mid-way through testing, as in Experiments 1 and 2. The multi-switch group was instructed to switch percepts repeatedly throughout the experiment. All participants had normal or corrected-to-normal vision. Experimental procedures were approved by the University of British Columbia Behavioral Research Ethics Board. Participants were remunerated with ten dollars per hour of their time.  
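As an illustration of this group manipulation, the following sketch generates one possible percept-instruction schedule for each group, one switch for the single-switch group and six for the multi-switch group; the specific multi-switch ordering anticipates the Procedure description that follows, and the group labels and percept strings are illustrative.

```python
def switch_schedule(group, first_percept="eye"):
    """
    Sketch of the percept-instruction schedule per half-block
    (4 blocks x 2 halves = 8 entries). Labels are illustrative.
    """
    other = "arrow" if first_percept == "eye" else "eye"
    if group == "single-switch":
        # Two full blocks with one percept, then two with the other: one switch.
        return [first_percept] * 4 + [other] * 4
    if group == "multi-switch":
        # Switch mid-way through each block and between most blocks,
        # e.g. eye/arrow - eye/arrow - arrow/eye - arrow/eye (six switches).
        return [first_percept, other, first_percept, other,
                other, first_percept, other, first_percept]
    raise ValueError("unknown group")

for g in ("single-switch", "multi-switch"):
    sched = switch_schedule(g)
    n_switches = sum(a != b for a, b in zip(sched, sched[1:]))
    print(g, sched, "-> switches:", n_switches)
```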
48 Stimuli, Task, and Procedure The stimulus design and task were identical to that of Experiment 1, with the exception that half the participants were requested via an on-screen instruction to switch percepts six times throughout the experiment. At the beginning of the testing session, participants in the single-switch group were instructed to perceive the cue stimulus as either an eye or as an arrow. They were not given any indication at the outset that the cue stimulus could be viewed any other way. Halfway through the testing session, however, these participants were instructed to switch their perceptual set in order to perceive the cue as the alternative object. Participants in the multi-switch group, however, were instructed from the outset that they would have to make perceptual switches, and to do so by following instructions presented on the screen at regular intervals throughout the experiment. As in Experiment 1, the testing session was divided into four blocks of trials, each separated with a short break. For the single-switch group, there were two consecutive blocks viewing the cue as an eye, and two consecutive blocks viewing the cue as an arrow. For the multi-switch group, participants were instructed to switch percepts midway through each of the four blocks, as well as between most blocks. There were two consecutive eye gaze then arrow blocks, and two consecutive arrow then eye gaze blocks. Whether the cue was first viewed as an eye or as an arrow was counterbalanced across participants. This design resulted in six perceptual switches throughout the experiment with a switch order of either eye/arrow-eye/arrow-arrow/eye-arrow/eye or arrow/eyearrow/eye-eye/arrow-eye/arrow. In all cases participants were informed that cue direction did not reliably predict the target location.  49 Results Mean response times are presented in Table 2.6. The data were analyzed by a 2 x 2 x 2 x 2 ANOVA, with Percept (eye vs. arrow), Cueing (cued vs. uncued), and SOA (100 ms vs. 600 ms) as within-subject factors, and Switch Group (single-switch vs. multiswitch) as a between-subject factor. The results indicate statistically equivalent patterns of response times associated with eye gaze and arrow percepts across both groups. There was no significant main effect of Switch Group [F (1, 14) = 0.08, p > 0.05], and no interaction of Switch Group with any other factor (all p’s > 0.05). Overall, there was no main effect of Percept [F (1, 14) = 0.05, p > 0.05]. There was, however a significant main effect of Cueing [F (1, 14) = 21.4, p < 0.05], as well as a main effect of SOA [F (1, 14) = 44.0, p < 0.05], but no Percept x Cueing interaction [F (1, 14) = 3.6, p > 0.05]. Thus, in both the single-switch and multi-switch groups, participants were switching percepts appropriately, and the percepts triggered equivalent attentional orienting to the cued locations. These results support the conclusion that switching between interpreting the ambiguous cue stimulus as an eye or an arrow produces no negative carryover effects.  50 Table 2.6. Mean response times for Experiment 3. The results indicate no significant differences between groups, suggesting effective perceptual switching that does not produce any negative transfer effects. For both groups, responses were faster for cued relative to uncued targets at both short and long cue-target intervals, regardless of whether the cue was perceived as an arrow or as an eye.  
Group  Cue Type Eye Gaze  Single-Switch Arrow Eye Gaze Multi-Switch  Arrow  Cued Uncued Cued Uncued Cued Uncued Cued Uncued  SOA 100 ms Mean 422.97 441.18 401.74 449.30 408.31 418.46 395.29 410.40  SE 24.39 28.96 25.34 32.00 29.23 28.86 28.56 31.47  600 ms Mean 337.91 354.09 340.66 361.34 362.52 366.68 363.22 376.79  SE 17.86 16.33 20.05 18.28 19.70 20.90 22.98 21.19  Meta-Analysis: Experiments 1-3 Although the same pattern of results have been demonstrated in Experiments 1-3, one still might wish to argue that the relatively small sample sizes within each of these studies precluded the detection of differences between either the overall response times within each percept condition, or the orienting effects generated by each percept. To maximize statistical power, we combined the data from Experiments 1-3 in a metaanalysis with a total of 35 participants.  Method As Experiment 2 included only one 600 ms SOA condition, the RTs from the 100 ms and 600 ms SOA conditions within both Experiment 1 and Experiment 3 were averaged. A 2 x 2 x 4 ANOVA was performed, with Percept (eye vs. arrow) and Cueing  51 (cued vs. uncued) as within-subject factors, and Experiment (Experiments 1, 2, 3) as a between-subject factor.  Results While the meta-analysis revealed a difference in overall response times between experiments with a significant main effect of Experiment [F (3, 31) = 8.5, p < 0.05], Experiment did not interact with any other factor (all p’s > 0.05). There was a significant Cueing effect [F (1, 31) = 55.5, p < 0.05], indicating faster response times for cued than for uncued targets. This cueing effect was equivalent for both eye gaze and arrow percepts, as indicated by the absence of a Percept x Cueing interaction [F (1, 31) = 2.2, p > 0.05]. In addition, there was no main effect of Percept [F (1, 31) = 2.3, p > 0.05], indicating equivalent response times in both eye gaze and arrow percept conditions. This more powerful meta-analysis converges with, and reinforces, the conclusion that participants resolve and maintain eye and arrow cue percepts with equal ease and efficiency, and that these percepts induce behaviorally equivalent reflexive attentional orienting effects.  General Discussion The present study was conducted to test the hypothesis that humans may have evolved neural mechanisms specialized for orienting attention to biologically-based social cues. To answer this question, we used fMRI to test whether different regions of the brain were engaged when orienting to biologically-based eye gaze cues and nonbiologically-based arrow cues. Experiment 1 revealed an extensive and highly similar  52 BOLD response for both types of cue, suggesting that, at least in the context of the present study, the same cortical network subserved attentional orienting to social and non-social cues. While we did not identify distinct neural modules that responded only to eye gaze cues, there were specific regions that responded more vigorously to eye gaze relative to arrow cues, including the lateral occipital cortex. The results of Experiment 2, a follow-up ERP study, revealed that this larger BOLD response in occipital cortex may well have been attributable to the eye gaze cues evoking a larger sensory gain effect for targets presented at attended locations. 
Before discussing the broader implications of the present study, it is important to note that the present results contrast with those of a previous fMRI study, which reported that orienting to arrows recruited areas of the frontal and parietal attention network, but gaze cues did not (Hietanen et al., 2006). Hietanen and colleagues (2006) reported nonoverlapping networks for directional arrow and diverted gaze cues within the context of a blocked fMRI design that required an initial subtraction of activity observed in baseline control conditions consisting of a non-directional line stimulus and an undiverted gaze stimulus, respectively. Given the blocked design of the Hietanen et al. (2006) study, it is possible that either the target stimuli and/or the undiverted gaze stimulus in the control condition evoked activity in frontal and parietal cortex, thus reducing the likelihood of revealing activations in those areas in the diverted gaze condition (the authors themselves note this possibility on p. 411). In contrast, the present event-related design permitted the deconvolution of activity evoked by gaze cues that were physically identical to the arrow cues, without relying on nonequivalent baseline control conditions, thereby making the present design much more sensitive to detecting activity in frontal and parietal cortex evoked by gaze cues.

A Cortical Network for Reflexive Attention to Meaningful Stimuli

In addition to our primary research goal – the question of whether there are distinct neural systems for attentional orienting to social and non-social cues – the present study addresses more generally the neural systems involved in reflexive attentional orienting to centrally presented cues. There has been some debate as to whether reflexive and volitional orienting are underlain by distinct neural mechanisms. Early work by Posner (1980), as well as more recent studies (Sapir, Soroker, Berger & Henik, 1999), provided evidence of a role for the superior colliculus in reflexive attentional orienting. This role for the superior colliculus, however, is largely associated with abrupt peripheral events that activate the eye movement system (Rafal et al., 1989; Rafal & Smith, 1990). As the present study used centrally presented cues, and prohibited eye movements, it is not surprising that there was no BOLD activation found for the superior colliculus. To date, there have been very few neuroimaging studies of reflexive attention in and of itself. The few that have been conducted have manipulated orienting via abrupt peripheral events. These results are difficult to compare with the vast majority of neuroimaging studies of attention, which typically use centrally presented predictive arrow cues to induce volitional attentional shifts (cf. Ristic & Kingstone, 2006, for a detailed consideration of this issue). By using a nonpredictive centrally-presented directional cue in the present study, the results can be directly compared to the existing
Their meta-analysis revealed that the dorsal fronto-parietal network, consisting of regions of the superior parietal lobule, intraparietal sulcus, middle and superior frontal gyri, has in most studies been associated with volitional or willful processes for directing spatial attention. The ventral fronto-parietal network, conversely, consisting of regions in the temporal-parietal junction (inferior parietal lobule and superior temporal gyrus), as well as ventral frontal cortex, has been associated with reflexive orienting to particularly salient stimuli or infrequent events. This pattern of results seems to support the idea that indeed, distinct neural systems underlie volitional and reflexive attentional orienting. However, it is grounded largely on a confound of central and peripheral cuing. The present results demonstrate that when this confound is removed both dorsal and ventral fronto-parietal networks are involved in reflexive orienting to a directional cue presented at fixation. This finding suggests an interaction between dorsal and ventral fronto-parietal networks in reflexive attentional orienting to meaningful stimuli. This is consistent with Corbetta and Shulman’s (2002) alternative proposal that effective reflexive attentional orienting to a spatial location may require the co-activation of both of these networks – the ventral network subserving a circuit-breaking, or attentional reorienting function in response to a salient or meaningful stimulus, and the dorsal network contributing a spatial selectivity function. An additional implication of the present results  55 is that dorsal and ventral fronto-parietal networks should not simply be functionally mapped to volitional (endogenous) and reflexive (exogenous) orienting processes. Nor for that matter should central and peripheral cuing be mapped to volitional and reflexive orienting, as is mistakenly done on occasion (Vecera & Rizzo, 2006).  Biologically Relevant Cues as Inherently Meaningful Stimuli A direct comparison of the cortical networks associated with attentional orienting to eye gaze and arrow cues revealed two clusters in frontal cortex (in addition to those found in lateral occipital cortex) that were more vigorously engaged by eye gaze than by arrow cues. While these regions showed a BOLD response for both eye gaze and arrow cues, eye gaze cues evoked a significantly larger BOLD response in these areas. This finding is consistent with previous work suggesting that eye gaze cues do not engage distinct neural mechanisms for orienting attention, but recruit the same neural resources more efficiently (Quadflieg et al., 2004). Both of the ventral frontal regions showing a larger BOLD response for eye gaze cues are part of the ventral frontal-parietal network, which is associated with attentional re-orienting to particularly salient or meaningful stimuli. Enhanced BOLD activity in these ventral frontal regions is consistent with the idea that eye gaze cues are particularly meaningful, and are recognized by the attentional system as such. This result lends itself to the provocative possibility that other types of inherently meaningful stimuli may also induce reflexive attentional orienting, and bring about enhanced activity in these ventral frontal regions. Our results indicate that eye gaze is one such meaningful cue. Other biologically relevant social stimuli, such as finger pointing or  56 head turning may also serve as particularly effective cues for ventral fronto-parietal activation and reflexive orienting. 
In addition, nonsocial stimuli that are inherently meaningful in their provision of information directly useful for planning and coordinating adaptive behavior, such as looming stimuli (Franconeri & Simons, 2003), or one’s heading point or obstacles on one’s path during self-motion, may serve as particularly effective reflexive cues.  Critical Considerations Does the STS specialize in biological relevance? Interestingly, the STS, a structure implicated in attentional orienting to eye gaze stimuli in both lesion (Akiyama et al., 2006) and fMRI (Hoffman & Haxby, 2000) studies, did not show greater activation for eye gaze than for arrow cues. Akiyama and colleagues (2006) showed that a patient with a circumscribed lesion to the right superior temporal gyrus (STG) was not able to orient attention to eye gaze stimuli, but orienting to arrow cues was left intact. The authors concluded that the STS specializes in processing eye gaze. While this result may seem to provide compelling evidence for the specialization of the STS for processing eye gaze, it is important to note that the eye gaze and arrow cues employed in the Akiyama study consisted of physically different stimuli. Thus, it is quite possible that the physical construction of the stimuli, rather than the meaning of the stimuli, produced the observed differences in orienting. In any case, it is also true that while Akiyama’s results speak to the necessity of the STS in processing eye gaze, they do not preclude the engagement of the STS during orienting to arrow cues if the STS is  57 functionally intact. Our data show that when cue-type and stimulus-construction are not confounded and the STS is functionally intact, there is a significant BOLD response in the right STS for both eye gaze and the arrow cue conditions (Figure 2).  Are the present orienting effects truly reflexive? One might question whether reflexive orienting to gaze and arrows cues should be considered spatially reflexive, for instance, in light of the fact that volitional top-down processes are critical to whether the stimulus cue is perceived as an eye or an arrow. This concern, however, confuses volitional acquisition of a percept with volitional spatial orienting itself. In all cases, and regardless of whether an eye or arrow percept is acquired, participants have no incentive to attend volitionally to the cued location, i.e., they are informed repeatedly that the direction of the cue never predicts reliably the location of a target. Nevertheless, and for both percepts, attention is shifted rapidly and consistently to the cued location, thereby satisfying the key criteria for reflexive spatial orienting (Friesen & Kingstone, 1998; Gibson & Kingstone, 2006; Ristic, Wright, & Kingstone, 2007; Ristic, Friesen & Kingstone, 2002; Tipples, 2002). With this point in place, it is also important to note that while both cues engage reflexive orienting, and their behavioral effects in the present study are equivalent, it does not follow that their attention effects must be identical on all fronts (Ristic, Wright & Kingstone, 2007; in press). Indeed, as we found in the present study, the sensory gain effect from eye gaze cues is significantly greater than for arrow cues.  58 Conclusion The present study demonstrates that reflexive social attention, at least in the context of perceiving eye gaze cues, does not require the involvement of a specialized attentional module network per se. 
Rather, eye gaze cues more vigorously engage ventral frontal regions within a common attentional network and bring about a larger sensory gain effect at the attended location than do arrow cues. Ventral frontal regions associated with the detection of stimulus salience were more highly activated by eye gaze than arrow cues, even though these cues consisted of identical physical stimulation on the retina. Thus, the salience of eye gaze cues cannot be attributed solely to low-level visual properties such as high contrast or spatial frequency. Rather, any benefit for eye gaze cues had to have been brought about by how that cue was being represented. In other words, the enhanced sensory processing at the gazed-at location occurred because the eye gaze cue was more salient to the attentional system, presumably because eyes are socially and biologically meaningful (Birmingham, Bischof & Kingstone, in press, a, b). A final, intriguing implication of these data, is that our ability to orient volitionally and reflexively to socially irrelevant stimuli, including arrowheads, may have arisen as a useful byproduct of a system that developed first, and foremost, to promote social orienting to stimuli that are biologically relevant.  59 Chapter 3: Attentional Orienting in Dynamic Scenes The experiments presented in Chapter 2 indicated that attention could be oriented on the basis of a high-level cognitive representation of a socially relevant eye gaze stimulus. While Chapter 2 indicated that both eye gaze and arrow percepts drove attentional orienting, it seemed that eye gaze served as a particularly potent cue for attention, resulting in larger attention-related neural responses in frontal and occipital cortex. These results are suggestive of a particular attentional sensitivity to social stimuli providing information of relevance to guiding appropriate behavior in social contexts. As discussed in Chapter 2, this observed attentional sensitivity to behaviorally relevant social information raises the provocative possibility that attention may be oriented automatically not only in response to eye gaze, but also to other types of meaningful stimuli providing information of potential use in guiding effective behavior. Given the complexity and dynamism that characterizes our visual experience as we go about our daily activities, the ability to effortlessly orient to potentially meaningful or behaviorally relevant stimuli would be invaluable in the planning and coordination of effective behavior. If attention indeed serves to select visual information relevant to guiding behavior in the real world, then one might expect to find evidence for the attentional selection of behaviorally relevant stimuli in the midst of complex, dynamic visual scenes depicting real-world visual experience. Chapter 3 tested this hypothesis by investigating patterns of eye movement activity while participants viewed movies that captured the visual stimulation accompanying locomotion through various everyday environments from a first-person perspective.  60 The goal of Chapter 3 was to assess whether attention is oriented preferentially to meaningful, behaviorally relevant stimuli in dynamic, realistic visual scenes. The results of Chapter 2 suggested that social stimuli might serve as one such type of behaviorally relevant stimulus. If this were the case, one might expect that when viewing a scene depicting first-person navigation, participants may look preferentially at the other people present in the scene. 
This possibility is supported by the results of studies investigating patterns of eye gaze associated with viewing realistic static scenes, which indicate a strong tendency for viewers to fixate the other people in the scene (Buswell, 1935; Yarbus, 1967). Social stimuli are likely to provide behaviorally relevant information in a wide range of situations. Surely, however, social stimuli are not the only source of behaviorally relevant information. Rather, whether or not a particular stimulus is behaviorally relevant will likely depend on the specific task or behavior in which one is momentarily engaged. The ability to attend to behaviorally relevant stimuli across a variety of situations would therefore necessitate the ability to recognize the significance of particular visual stimuli in a given behavioral context. Indeed, eye gaze activity associated with the viewing of static scenes does seem to be affected by an implicit recognition of the meaning or significance of the stimuli present in the scene (Biederman, 1972; Loftus & Mackworth, 1978; Smilek et al., 2006). As a case in point, the tendency to fixate people in a static realistic scene is modulated depending on the observer’s task (Birmingham, Bischof, & Kingstone, 2007; Yarbus, 1967). For example, Yarbus (1967) found that when participants were asked to estimate the ages of the people depicted in the scene, fixations were almost completely focused on faces. When participants were asked to estimate the  61 social status of the people in the scene, however, fixations tended to be more widely distributed amongst the other objects in the scene. Given the likelihood that there are multiple sources of behaviorally relevant information in any given context, we hypothesized that in viewing the dynamic navigational scenes presented in Chapter 3, participants would demonstrate a tendency to attend not only to the other people in the scenes, but also to the navigation-relevant aspects of the scenes. Navigating everyday environments is a basic requisite for independent life, and feels effortless despite the fact that the tasks of following a path, avoiding obstacles, and keeping track of important events in the surroundings require the seamless coordination of sensory, perceptual and motor systems. It seems feasible, then, that attention might facilitate the gathering and processing of navigationally-relevant visual information, such as the heading point of motion or the objects and obstacles defining the path. These stimuli provide important sources of information needed to effectively plan and coordinate navigational behavior. There is some evidence to suggest that eye gaze tends to be directed to the navigationally relevant aspects of a dynamic first-person perspective scene. For example, in virtual reality simulations depicting simple, controlled environments, participants look most of the time in the direction to which they are moving or planning to move (Cutting & Readinger, 2002; Hollands et al., 2002). This result makes intuitive sense, given that looking in the heading direction allows the gathering information about the future path, which is necessary in the prospective planning of appropriate actions. For example, when steering through a curve in the road in a driving context, people usually fixate the tangent  62 point in the curve, which can be used to predict the curvature of the road ahead (Land & Lee, 1994). 
The present experiment tested whether attention was preferentially allocated to behaviorally relevant stimuli while participants viewed complex, dynamic scenes depicting walking motion through everyday environments. Systematic patterns of eye gaze activity observed under free-viewing conditions of complex, photo-realistic navigational scenes could provide insights regarding the types of stimuli that may be particularly potent cues for attention in our everyday lives. This was exploratory research, designed to answer three specific questions: 1) Is there evidence to suggest that participants attend preferentially to other people depicted in dynamic scenes? 2) Is there evidence that attention is also allocated preferentially to visual stimuli that provide information of potential use in preparing appropriate navigational behavior, such as the path ahead or potential obstacles on or near the path? 3) If attention is directed to both socially and navigationally relevant aspects of these dynamic scenes, can the pattern of eye movements that unfolds over time reveal anything about how multiple demands on attention are mediated in a dynamic context?

Method

Participants
Twelve graduate students or postdoctoral fellows (7 female) at the University of British Columbia participated with written consent. The mean age was 27 years. Experimental procedures were approved by the University of British Columbia Behavioral Research Ethics Board. Participants were remunerated with five dollars for their participation in the experiment, which lasted about 20 minutes.

Apparatus
Participants were seated in a small testing room approximately 57 cm from a 17” CRT monitor. Head position was maintained by having participants rest their chin on a comfortable chin rest while viewing the experiment. Eye tracking was carried out with an SR Research Ltd. EyeLink eye tracking system. A lightweight headband holding two high-speed cameras monitored the position of the left eye with a sampling rate of 250 Hz and a spatial resolution of 0.005º. Prior to experimental testing, a nine-point calibration procedure was performed, in which participants tracked the position of a fixation dot sequentially presented in each cell of a 3 x 3 grid distributed across the monitor. Two computers controlled the experiment from adjacent rooms. One computer was used to display stimuli to the participant, and the other was used to collect the eye tracking data. The two computers were linked with an Ethernet cable, which enabled real-time monitoring, from the control room, of the eye tracking data superimposed on the experimental stimuli as the experiment proceeded. The display computer was a Pentium III PC running SR Research Ltd. Experiment Builder software. The control computer recorded data regarding fixation, smooth pursuit, saccade, and blink activity in log files for later analysis using Data Viewer (SR Research) and proprietary software built in Matlab (Mathworks, Inc.).

Stimuli and Task
Each experimental trial consisted of a video clip depicting navigation through various environments from a first-person perspective. The goal when filming these movies was to capture the visual stimulation one might encounter while walking through these environments. Figure 3.1 depicts a single frame from each scene submitted to a detailed analysis.
The video clips were approximately thirty seconds long, and were filmed in order to present participants in the laboratory with visual stimulation approximating what one might encounter while navigating everyday environments. Twenty video clips were presented to each participant. Participants were instructed to view the movies as naturally as possible, as if they were walking through the scenes themselves. Before the presentation of each video clip, participants were required to fixate a small dot at the center of an otherwise blank screen.

Figure 3.1: Example frames from each scene. A single frame from each scene submitted to a detailed analysis. The four scenes were selected to be representative of various everyday conditions, taking place either outdoors (Scenes 1 and 3) or indoors (Scenes 2 and 4), depicting people close up (Scenes 2, 3, and 4) or farther away (Scene 1), and being relatively cluttered (Scenes 3 and 4) or relatively sparse (Scenes 1 and 2).

Results
Given the complexity and time-intensiveness of the data analysis procedures, including the hand-coding of fixation activity, four representative scenes were selected for analysis.

Superimposition of Dynamic Gaze Data on Video Clips
Proprietary software developed using Python scripts was used to superimpose an eye gaze marker onto the video clips for each trial, for each participant, in order to allow off-line viewing of the eye gaze data. First, the binary data file generated by the SR Research EyeLink software was converted to an ASCII text file using a conversion utility provided by SR Research with their Data Viewer software. These ASCII files were then read in using the Python script, and parsed to extract the current eye gaze coordinate corresponding to each frame of the video clip. Next, the video clips were divided into individual images corresponding to each video frame, and a small colored dot was added to each image at the location corresponding to the gaze position for that video frame. A new video containing the superimposed eye gaze data was then rendered and saved as an MPG video file for future viewing and categorical analysis.

Categorical Data Coding
A categorical coding scheme was developed in order to classify gaze activity according to the types of objects and events fixated and tracked by participants. Participants’ fixation activity was classified into the following eight categories: near heading point, far heading point, near path-defining objects, far path-defining objects, near other objects, far other objects, near people, and far people. These categories were selected based on an initial viewing of the data, which suggested that most fixations would fit into at least one of these categories. In order to classify gaze activity according to meaningful categories on a moment-to-moment basis, the experimenter performed the data coding by hand, identifying the time points at which fixations were directed to stimuli falling within each category of interest. While this was a subjective process, several objective criteria were defined in order to ensure the reliability of classification across the entire data set. Near and far were distinguished based on whether the relevant stimulus appeared in the upper or lower portion of the scene. Generally, stimuli appearing in the lower half of the scene were classified as near, and stimuli appearing in the upper half of the scene were classified as far.
The heading point was defined as the apparent eventual location of the motion trajectory, and any location along the apparent path of motion. Path-defining objects were defined as any objects that either comprised the boundaries of the apparent path of motion, or appeared as potential obstacles within the path. Other objects were defined as any other objects present in the scene, including objects located off the path of motion and background scenery. Importantly, a single visual sample of a particular scene element may involve several individual fixations. The categorical coding scheme therefore coded visual samples rather than individual fixations. For example, if the participant looked at a person walking off in the distance, and made several individual fixations on that same person, a single visual sample would be coded, lasting the duration of all fixations on that person. Each category was coded individually in an on-off manner using custom-designed Matlab software, by playing through the movie at half speed and holding a specific key down every time fixations were made to scene elements falling under that particular category. These key-press data were recorded at 15 Hz in a Matlab data file to be entered into further analyses. The result of this coding operation was the creation of a timing vector for each category, for each scene, for each participant, representing the timing of ‘on’ and ‘off’ periods for samples of that category. The timing vector for each category provides information regarding both the temporal onsets of visual samples to scene elements within that category, as well as the duration of each sample. The individual timing vectors for each category were concatenated into a single matrix representing eye gaze data for a particular video, for a particular participant, with each row in the matrix defining the ‘on’ and ‘off’ periods for a different category. Figure 3.2 is a graphical representation of an example categorical matrix for a single subject, for a single scene. Time is depicted on the x-axis, and the different categories are represented on the y-axis. The black bars correspond to time points at which this particular participant sampled a region of the scene corresponding to the category represented on the row in which the bar appears. Once the eye gaze data were categorically coded in this way, several quantitative analyses were performed.

Figure 3.2: Categorical timing vectors for one individual. The categorical timing vector for one participant, for one scene, is represented graphically. Time, in seconds, is represented on the x-axis, with the full 30 seconds of the scene represented in the six 5-second panels shown here. Each stimulus category is represented as a separate row on the y-axis. Dark patches represent periods of fixation activity within the corresponding category.

Inter-Subject Congruence
In viewing the data movies for all participants for any given video clip, it was evident that there was a large degree of congruence between individuals in patterns of eye gaze activity. Figure 3.3 illustrates the categorically coded gaze activity timing vectors for each trial, averaged across participants. The movie frames corresponding approximately to periods in which there was a particularly high degree of overlap in fixation activity across participants are displayed. Darker patches in the timing vectors represent periods of higher inter-subject congruence.
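The dissertation does not reproduce the analysis code itself, so the following minimal sketch (written in Python with NumPy; all function names are illustrative assumptions rather than the original Matlab routines) shows one way the binary timing vectors described above can be scored for inter-subject congruence, using the congruence metric and the run-shuffling permutation procedure detailed in the next paragraphs.

    import numpy as np

    def congruence_metric(timing_vectors):
        # timing_vectors: participants x time 0/1 array (15 Hz) for one category in one scene
        summed = np.asarray(timing_vectors, dtype=int).sum(axis=0)  # participants "on" per time point
        nonzero = np.count_nonzero(summed)      # time points at which at least one participant looked
        return summed.sum() / nonzero if nonzero else 0.0  # higher when looking clusters in time

    def shuffle_runs(vec, rng):
        # randomly permute the alternating "on"/"off" runs of a single timing vector
        vec = np.asarray(vec, dtype=int)
        boundaries = np.flatnonzero(np.diff(vec)) + 1
        runs = np.split(vec, boundaries)
        order = rng.permutation(len(runs))
        return np.concatenate([runs[i] for i in order])

    def congruence_null(timing_vectors, n_iter=40000, seed=0):
        # null distribution of the metric from run-shuffled surrogate data sets
        rng = np.random.default_rng(seed)
        return np.array([congruence_metric([shuffle_runs(v, rng) for v in timing_vectors])
                         for _ in range(n_iter)])

Under this sketch, the p-value for a category is simply the proportion of surrogate data sets whose congruence metric is at least as large as the observed one.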
A bootstrapping procedure was employed to assess the degree of inter-subject congruence quantitatively. The objective of this analysis was to determine whether the degree of inter-subject congruence found in the actual data was likely to occur by chance alone. The first step was to determine a numerical value to represent the degree of overlap between patterns of gaze activity across participants within each category, for each video clip. I will call this value the congruence metric. The congruence metric was calculated by summing the “on-off” timing vectors for all subjects, for a given category and scene, then dividing by the total number of nonzero time-points within the summed timing vector for that category and scene. If there are concentrated clusters of inter-subject congruence within a category, there will be fewer nonzero time-points, resulting in a smaller divisor and thus a higher final inter-subject congruence value, than if there were less congruence between subjects. The resultant congruence metric values were averaged across scenes, and are plotted in Figure 3.4. The next step was to simulate a data set with the same structure as the actual data, i.e., containing the same number of cases (“participants”), the same number of categories, the same number of “scenes”, and the same number and size of “on” and “off” temporal events within each category’s timing vector. A random permutation of the periods of alternating “on” and “off” activity within each actual timing vector for each participant constituted the simulated data. A congruence metric was then computed for each “category” within this simulated data set, and averaged across “scenes”. This procedure was bootstrapped, such that a congruence metric was calculated for each category within 40,000 simulated data sets. The congruence metric calculated for the actual data could then be compared to the distribution of congruence metrics obtained from the bootstrapped random data simulations in order to determine the likelihood that the amount of congruence actually observed in patterns of eye gaze activity across participants for each category could be due to chance alone. The results of the bootstrapping procedure revealed that the amount of inter-subject congruence present in the data for every category was significantly greater than that which would be expected to occur by chance alone [all p’s < 0.00035]. As shown in Figure 3.4, however, the degree of congruence was greatest within the heading point and people categories.

Figure 3.3: Categorical timing vectors averaged across participants. Average categorical timing vectors are shown for each of the four scenes analyzed. Time is represented along the x-axis in seconds. Dark patches correspond to periods of maximal overlap between participants. Images represent single frames from specific time-points at which the greatest overlap between participants was observed. The greatest overlap between participants was observed within the people, heading point, and path object categories. Notice the relative lightness of the other objects categories, indicating a lesser degree of overlap between participants.

Figure 3.4: Inter-Subject Congruence. The congruence metric for each category is shown here. The congruence metric represents the degree of overlap in fixation activity across participants, and was calculated based on the average categorical timing vectors (shown in Figure 3.3), collapsed across time, and divided by the number of non-zero time points.
The results of the bootstrapping analysis revealed a higher degree of inter-subject congruence than would be expected by chance for all categories. Fixations of people, the heading point, and path objects were more congruent across participants than fixations to other objects.

Fixation Dynamics

Time Spent Looking
The total time spent looking at stimuli within each category was calculated for each participant, and averaged across scenes. Because there were differences between categories in the amount of time they were actually available in the scene to look at (i.e., people were not always present in the scenes), the total time spent looking at each category was normalized by the total amount of time that category was available. The resultant value represents the proportion of time participants spent looking at a category when it was available to look at. Figure 3.5 depicts the normalized times spent looking at each category. The normalized times were submitted to a 4 x 2 repeated-measures ANOVA, with category (heading point, path objects, other objects, and people) and distance (near, far) as within-subject factors. The results revealed a significant category x distance interaction [F(3, 30) = 13.23, p < 0.0001], as well as a significant main effect of category [F(3, 30) = 56.80, p < 0.0001]. The main effect of distance was not significant [F(3, 30) = 0.87, p = 0.37]. The significant interaction was followed up by conducting paired t-tests between near and far distances, for each category. The results revealed near-far differences for the heading point [t(10) = 2.59, p = 0.03] and people [t(10) = 2.80, p = 0.02] categories, but not for the path objects [t(10) = 1.88, p = 0.09] or other objects [t(10) = 0.17, p = 0.87] categories. The heading point and people categories revealed opposite near-far effects: more time was spent looking at the far heading point than the near heading point, and more time was spent looking at near people than far people. The significant main effect of category was followed up by averaging each category across distance (near and far), and conducting separate paired t-tests for each category pairing. The results revealed that participants spent significantly more time looking at people than any other category [all t’s > 4.10, p’s < 0.0025], significantly more time looking at the heading point than either path objects or other objects [both t’s > 4.30, p’s < 0.002], and significantly more time looking at path objects than other objects [t(10) = 16.20, p < 0.0001].

Figure 3.5: Normalized looking time. The proportion of time spent looking at a given category, relative to the amount of time that category was available to look at, is plotted here. The results indicate a hierarchical pattern, in which people were fixated preferentially, followed next by the heading point, then path objects, and finally, other objects. There were also opposite significant near-far effects for the people and heading point categories, indicating a preference to attend to near people relative to far people, and the far heading point relative to the near heading point. Error bars represent the standard error.

Number of Samples
The average number of samples (fixations) within each category, averaged over scenes, is plotted in Figure 3.6. The number of samples was submitted to a 4 x 2 repeated-measures ANOVA, with category (heading point, path objects, other objects, and people) and distance (near, far) as within-subject factors.
The results revealed a significant category x distance interaction [F(3, 30) = 9.79, p = 0.0001], as well as a significant main effect of category [F(3, 30) = 22.84, p < 0.0001]. The main effect of distance was not significant [F(3, 30) = 3.75, p = 0.08]. The significant interaction was followed up by conducting paired t-tests between near and far distances, for each category. The results revealed a near-far difference for the people category [t(10) = 5.15, p = 0.0004], indicating more samples on near people than on far people. The near-far differences were not significant for the heading point [t(10) = 1.06, p = 0.31], path-defining objects [t(10) = 1.57, p = 0.15], or other objects [t(10) = 0.58, p = 0.58]. The significant main effect of category was followed up by averaging each category across distance (near and far), and conducting separate paired t-tests for each category pairing. The results revealed that participants looked significantly more frequently at path objects than any other category [all t’s > 3.90, p’s < 0.003], and significantly more frequently at both the heading point and people than at other objects [both t’s > 2.80, p’s < 0.02].

Figure 3.6: Number of samples. The number of samples (fixations) within each category is plotted here. The results indicate that participants looked more frequently at path objects than at any other category, and more frequently at both people and the heading point than any other object. There was also a near-far effect for the people category, indicating more frequent fixations on far people than on near people. Error bars represent the standard error.

Average Sample Duration
The average duration of each sample within a category, averaged over scenes, is plotted in Figure 3.7. The average sample durations were submitted to a 4 x 2 repeated-measures ANOVA, with category (heading point, path objects, other objects, and people) and distance (near, far) as within-subject factors. The results revealed a significant category x distance interaction [F(3, 30) = 11.50, p < 0.0001], as well as a significant main effect of category [F(3, 30) = 26.85, p < 0.0001]. The main effect of distance was not significant [F(3, 30) = 4.27, p = 0.07]. The significant interaction was followed up by conducting paired t-tests between near and far distances, for each category. The results revealed a near-far difference for the path objects [t(10) = 2.74, p = 0.02] and people [t(10) = 3.43, p = 0.01] categories, but not for the heading point [t(10) = 2.03, p = 0.07], or other objects [t(10) = 1.49, p = 0.17]. The significant main effect of category was followed up by averaging each category across distance (near and far), and conducting separate paired t-tests for each category pairing. The results revealed that the average sample duration was significantly longer for both the heading point and people than for either path objects or other objects [all t’s > 5.0, p’s < 0.0005], and significantly longer for path objects than for other objects [t(10) = 4.97, p = 0.0006].

Figure 3.7: Average sample duration. The average sample duration (in seconds) for fixations within each category is plotted here. The results indicate that participants looked longest on average at both people and the heading point, and longer at path objects than at other objects. The near-far effects were significant for both the people and path objects categories, indicating longer fixations for nearer people and path objects. Error bars represent the standard error.
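For concreteness, the three fixation-dynamics measures reported above (normalized looking time, number of samples, and average sample duration) can be derived from a single category's 15 Hz timing vector as in the following minimal sketch (Python with NumPy; an illustrative reconstruction, not the original Matlab analysis code); looking time is normalized by the time the category was actually available on screen.

    import numpy as np

    FRAME_RATE = 15.0  # Hz; temporal resolution of the categorical coding

    def sample_runs(vec):
        # lengths (in time points) of each contiguous "on" run, i.e., each visual sample
        v = np.concatenate(([0], np.asarray(vec, dtype=int), [0]))
        edges = np.flatnonzero(np.diff(v))
        return edges[1::2] - edges[::2]

    def fixation_measures(vec, available):
        # vec: 0/1 timing vector for one category; available: 0/1 vector marking when that
        # category was present in the scene (people, for example, were not always on screen)
        runs = sample_runs(vec)
        looking_s = np.sum(vec) / FRAME_RATE
        available_s = max(np.sum(available) / FRAME_RATE, 1.0 / FRAME_RATE)
        return {"normalized_looking_time": looking_s / available_s,
                "n_samples": int(len(runs)),
                "mean_sample_duration_s": float(runs.mean() / FRAME_RATE) if len(runs) else 0.0}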
Scene-Related Gaze Activity Based on the analysis of fixation activity, it was clear that both the heading point and the people in the scene were preferentially fixated. An analysis of scene-related gaze activity was conducted in order to more closely investigate the factors determining when either the heading point or people would be fixated. The fixation activity within each of these categories was analyzed with respect to whether or not the path ahead was in view, and whether people were present in the center or the periphery of the scene. The time periods at which each of these factors were in play were determined via a categorical scene analysis similar to that used to categorize fixation activity. The proportion of time  81 spent looking at either the heading point or people within these time periods was calculated. Figure 3.8a and 3.8b illustrate the relationship of heading point and people fixations, respectively, to each of these scene factors. Paired t-tests revealed that fixations to the heading point varied as a function of whether or not the path ahead was visible in the distance [t(10) = 10.68, p < 0.0001], but not as a function of whether people were present in the center or periphery of the scene [t(10) = 0.37, p = 0.72]. Conversely, fixations to people varied as a function of whether people were present in the center or the periphery of the scene [t(10) = 11.21, p < 0.0001], but not as a function of whether the path ahead was in view [t(10) = 0.42, p = 0.68].  82 Figure 3.8: Scene-related gaze activity A) Proportion of time spent looking at the heading point when each scene factor was in play. The results indicate that the tendency to fixate the heading point was affected by whether or not the path was visible in the distance, but was not affected by the location of people in the scene. The tendency to fixate the heading point was greater when the path was visible in the distance than when it was obstructed. Error bars represent the standard error.  B) Proportion of time spent looking at people when each scene factor was in play. The results indicate that the tendency to fixate people was affected by the location of people in the scene, but not by whether or not the path was visible in the distance. The tendency to fixate people was greater when people were present in the center of the scene than when they were in the periphery. Error bars represent the standard error.  83 Discussion The results indicated that attention was preferentially allocated to both socially and navigationally relevant aspects of the dynamic scenes presented. There was a greater degree of congruence between participants in terms of when people, the heading point, and path objects were fixated than when any other objects were fixated. When people were present in the scenes, they provided a more compelling cue for attention than did any other objects in the scenes. The heading point and path objects were also fixated preferentially relative to any other objects. The analysis of fixation dynamics revealed a pattern of less frequent but longer pursuit fixations to both people and the heading point, and more frequent but shorter fixations to the objects and obstacles defining the path.  Attention to Behaviorally Relevant Stimuli Together, the results presented here indicate that while viewing dynamic scenes depicting self-motion through everyday environments, participants’ eye movements were preferentially directed to stimuli with the potential to provide behaviorally relevant information. 
This finding is consistent with previous studies demonstrating that while engaged in behaviors such as making a sandwich, hitting a cricket ball, playing table tennis, driving, or reading music, eye movements are directed to regions of the visual field providing information pertinent to the task at hand (Hayhoe & Ballard, 2005; Land & Furneaux, 1997). These studies have shown that fixation activity while performing real-world behaviors is tightly linked to the task as it unfolds over time, with the necessary visual information being gathered just as it is needed (Ballard et al., 1995). In many cases, eye movements in real-world behavior are prospective in nature, with  84 anticipatory fixations to behaviorally relevant locations, such as the expected bounce point of a table tennis ball (Ripoll et al., 1987). While these studies clearly demonstrate that eye movements are tightly linked to the particular task at hand, the means by which the eyes “know” where to look has remained unclear (Land & Furneaux, 1997). The present results suggest that attention may play an important role in guiding eye movements during real-world behaviors. Because participants simply viewed these scenes without having to perform any other behavioral task, the fact that patterns of eye movement activity were congruent across participants is suggestive that eye movements were directed preferentially to behaviorally relevant stimuli because those stimuli served as potent cues for the orienting of attention. The analysis of the distance factor revealed a preference to fixate people in the foreground as opposed to the background, and a preference to fixate the heading point in the background as opposed to the foreground. These results are consistent with attention being allocated preferentially to the most behaviorally relevant stimuli in the scene. Presumably, people in the foreground are more pertinent to behavior than people in the background. This makes intuitive sense, given that most meaningful social interactions occur within close range. Additionally, the distant heading point is a better source of navigation-relevant information than heading information available in the foreground. When only the foreground is in view, heading control suffers greatly (Strelow & Brabyn, 1981). Additionally, when the heading point is obscured, the control of navigation is based more heavily on the relative motion of the various objects in the foreground (Cutting et al., 1995; Royden & Hildreth, 1996). Taken together, these findings indicate that the processes underlying the overt orienting of attention in a dynamic, naturalistic  85 scene are sensitive to the behavioral relevance of the visual information available in the scene.  Resolving Multiple Demands on Attention As indicated by studies investigating eye movements during real-world behavior, the information that will be most relevant to an observer depends on that observer’s particular behavioral goals. For example, when intentionally following another car in a simulated driving context such as a police chase, the pattern of fixations across the visual field narrows (relative to a no-following condition), focusing almost entirely on the pursued car (Crundall et al., 2004). Thus, in a dynamic, real-world context, the eye movement system must take into account the momentary intentions of the observer while continuing to acquire the information necessary for navigational control (Shinoda et al., 2001). 
The results of the present experiment suggest that this may be accomplished via the orienting of attention to visual stimuli that are momentarily most pertinent to providing behaviorally relevant information. The analysis of scene-related gaze activity supports the idea that the momentary behavioral relevance of various scene elements guides fixation activity as the dynamic scene unfolds over time. For example, whether or not the path was visible in the distance affected the likelihood of the heading point being fixated, but did not affect fixations on people in the scene. Conversely, whether people were present in the center or the periphery of the scene affected the likelihood of fixating people, but did not affect fixations to the heading point. These results suggest that overt orienting is driven, at least in part, by the momentary availability of various types of relevant information (navigational vs. social, for example), such that attention is preferentially allocated to the aspects of the scene most likely to be relevant to behavior at that point in time. The observed pattern of frequent, short fixations to objects defining the path, and less frequent, longer fixations to both people and the heading point, suggests that multiple demands on attention in dynamic scenes may be resolved via the orienting of attention on a “need-to-know” basis. That is, the rapid, frequent fixations to path objects suggest that fixations remained on path objects only as long as necessary to gather navigational information, freeing up attentional resources the rest of the time for the more demanding task of tracking the people in the scene or “charting out” the course ahead. Indeed, such a strategy has been identified in studies of real-world behavior indicating that fixations are terminated when the information needed for a momentary task has been acquired (Hayhoe & Ballard, 2005). There is also evidence that while acquiring navigational information requires little focused intention, the tracking of moving objects in a scene requires substantial willful effort (Royden & Hildreth, 1999). Thus, multiple demands for attention in dynamic scenes may be resolved by attention being directed preferentially to the visual stimuli momentarily most relevant to ongoing behavior, and maintained at that location only as long as necessary to gather the needed visual information.

High-Level Meaning vs. Low-Level Salience
While I have argued that attention was oriented preferentially to people, the heading point, and path objects on the basis that these stimuli carry the potential to provide behaviorally relevant information in everyday navigational contexts, some authors claim that eye movements while viewing complex scenes are strongly driven by low-level stimulus salience (Itti & Koch, 2000). Distinguishing between these two alternatives has proven to be an empirically challenging enterprise, due to the difficulty of attributing any single act of attentional orienting solely to low-level or to high-level representations. In studies of complex static scene viewing, there is evidence that the first few fixations tend to be directed preferentially to the most visually salient regions of the image – that is, regions that stand out with contrasting colour, intensity, or orientation (De Graef et al., 1990; Itti & Koch, 2000).
However, later viewing seems to be determined primarily by the meaning or significance of the various stimuli present in the scene (Henderson, Weeks & Hollingworth, 1999). According to this view, in the case of dynamic, realistic scenes, which necessarily persist over time, the meaningful (highlevel) information present in the scene may play a particularly important role in guiding eye movement behavior. Thus, rather than being a strict “either/or” dichotomy, it is likely that eye movements while viewing complex scenes reflect the influences of both lowlevel salience and high-level meaning, with a particular emphasis on high-level meaning with continued exposure to a scene (Henderson & Hollingworth, 1999; Osberger & Maeder, 1998). Although the present study did not control for the low-level salience of the various scene elements, there is reason to suspect that the orienting of attention was influenced by high-level, meaningful representations, as opposed to being driven by stimulus salience alone. First, very little attention was paid to any objects or locations other than people, the heading point, or path-defining objects, even though “other  88 objects” included all other stimuli present, and therefore made up the majority of most scenes. If attention were oriented on the basis of stimulus salience alone, one would not expect such a large categorical difference, particularly between path objects and other objects, for which the only distinguishing feature was their location in the scene relative to the observer. Second, the results indicated that the preference for attending to a particular stimulus category was influenced by the overall context of the scene. For example, the analysis of scene-related gaze activity indicated that people were more likely to be attended when located in the center of the scene than when located in the periphery. This result can not be explained by a difference in low-level salience, as defined by luminance, colour, or orientation contrast.  Eye Movements and Attention The present experiment examined patterns of eye gaze activity as a means of assessing attentional orienting in the context of dynamic, realistic scenes. While attention and eye movements can be dissociated empirically, there is a clear consensus that attention and eye movements are coupled at least to some extent. There is disagreement, however, regarding how tight this coupling is. At one end of the spectrum, there are theories suggesting a very tight coupling between eye movements and attention (e.g. Rafal & Egly, 1994; Posner, Crippin, Cohen, & Rafal, 1986). Premotor theory is the strongest such example, stating that covert attention is the preparation of an eye movement to a particular location (Rizzolatti, Riggio, Dascola, & Umilta, 1987). At the other end of the spectrum are those theories arguing that attention and eye movements are functionally independent (e.g. Klein, 1980; Klein & Pontefract, 1994; Hunt & Kingstone,  89 2003). Importantly, however, in studies demonstrating independence between attention and eye movements, task conditions are set up in which eye movements are prepared but are not actually executed on critical trials. This methodological approach, while essential to testing the boundary conditions of the coupling between attention and eye movements, fails to account for their relationship in more natural circumstances in which people move their eyes freely and saccades need not be suppressed. 
The substantial overlap in the neural structures underlying eye movements and attention is evidence for their functional interdependence (Grosbras, Laird, & Paus, 2005). Reflexive orienting via eye movements and covert attention is thought to be mediated, at least partially, in phylogenetically primitive neural systems, including the superior colliculus (Posner, 1980; Posner, 1981; Posner, Crippin, Cohen, & Rafal, 1986; Rafal & Egly, 1994; Rafal, Posner, Friedman, Inhoff, & Bernstein, 1988; Rafal, 1996; Muller, Philiastides, & Newsome, 2005). In addition, the functional region known as the frontal eye field (FEF) is consistently active during volitional and reflexive eye movements and covert attentional orienting, as are paracentral, cingulate, and parietal regions. Ventral fronto-parietal regions, including a region at the temporo-parietal junction and a region in the right ventral frontal area, are also associated with both reflexive eye movements and covert attentional orienting. The anatomical overlap of the eye movement and covert attentional orienting systems suggests that eye movements and attention evolved to work in tandem in order to promote coordinated interactions with the environment. Thus, while there is evidence to suggest that attention and eye movements are separate cognitive processes, there is also a great deal of data to suggest that in most circumstances, particularly those encountered in everyday life, these processes are intimately linked. Given this tight coupling between attention and eye movements, under free viewing conditions such as those employed in the present study, it is reasonable to use eye movements as a robust, ecologically valid measure of attentional orienting (Findlay & Gilchrist, 2003). In this light, the present results, indicating a tendency for eye movements to be directed to the social and navigational aspects of the scenes, suggest that behaviorally relevant stimuli may serve as particularly potent cues for attention.

Chapter 4: Automatic Attentional Orienting to Optic Flow
Chapter 3 demonstrated that in dynamic scenes simulating self-motion through real-world environments, there was a strong tendency for participants to fixate the heading point. The chapter concluded that attention was oriented to the heading point on the basis of its being a behaviorally relevant stimulus, owing to its provision of potentially important navigational information. Chapter 4 will review research describing the concept of optic flow, and the specific features of optic flow that might be especially relevant for controlling navigation, particularly the heading point. Research demonstrating that people actually use the heading point in an optic flow field to guide navigational behavior, including heading maintenance, will also be described. The plausibility of attentional sensitivity to specific patterns of optic flow will be discussed with respect to previous studies indicating a link between attention and processing in the dorsal visual stream, which represents the motion information present in optic flow. Following this basic review, the experiments presented in Chapter 4 seek to directly test whether the observed bias to fixate the heading point (Chapter 3) is underlain by a tendency for attention to be oriented automatically to its location in an optic flow field.

Optic Flow in the Control of Navigation
The continuously changing visual stimulation that accompanies self-motion through the environment is referred to as optic flow (Gibson, 1979).
The specific pattern of optic flow generated by a given motion trajectory through a particular environment is referred to as an optic flow field. To visualize the concept of an optic flow field, imagine taking a picture that captures what you are seeing from a certain location. Now imagine taking a step or two forward, and snapping another picture. Although very similar, these two pictures will be slightly different due to your change in location with respect to all of the objects in the environment. If you were to then superimpose these two images, you would find that corresponding objects and edges would occupy slightly different locations in each image. Now, imagine connecting the corresponding locations in each image by drawing a straight line that connects and passes through both element X in the first picture and element X in the second picture. This line represents one field line in the optic flow field. If you were to repeat this line-drawing step for every corresponding pixel, the result would be a graphical representation of the optic flow field. With forward motion between your two successive pictures, these superimposed field lines would take on a radiating pattern, with a distinct center point. This center point is known as the focus of expansion (FOE), and defines the point to which you are heading. In essence then, the optic flow field connects corresponding visual elements across successive views, thus defining spatial changes that occur over time. What this example illustrates is the notion that a particular velocity (speed and direction) of motion through the environment will yield a specific optic flow field. Since the laws of physics dictate the specific pattern of optic flow encountered as an observer moves through a given environment, an optic flow field can provide reliable information about speed (Prokop et al., 1997), heading direction (Warren & Hannon, 1988), and one’s position relative to environmental layout (Lee, 1976). Optic flow is therefore a rich source of information available for use in the planning, coordination, and execution of goal-directed actions during navigation.

Some of the specific features of an optic flow field that are likely pertinent to navigational control include: 1) radial velocity (expansion) caused by the movement of an observer’s vantage point in depth (such as in the above illustrative example), which provides information about the speed and direction of heading; 2) transverse velocity (translation) caused by up/down or left/right movements of the vantage point in two dimensions, which can provide information about observer direction changes; 3) shear arising from discontinuities in the slant, tilt, and curvature of flow field lines, which describe properties of the surface layout, such as the distance, orientation, and curvature of surrounding objects; and 4) rigid rotation caused by eye movements, essentially rotational movements around the vertical axis running through the observer that describe the pan and tilt of an observer’s eye movements (Koenderink, 1986). Given the availability of this reliable information to an observer, it makes sense that optic flow be used in the control of navigation. Indeed, both laboratory research with healthy participants and clinical research with patients experiencing navigational deficits provide evidence that the information provided by the optic flow field is in fact used for navigation.
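To make the geometry of the flow field concrete, the following short sketch (Python with NumPy; a standard projective-geometry illustration, not software used in these experiments) computes the instantaneous flow vectors produced by pure translation of the vantage point and the location of the resulting focus of expansion.

    import numpy as np

    def translational_flow(points, T, f=1.0):
        # points: N x 3 array of scene points (X, Y, Z) in camera coordinates, with Z > 0
        # T: (Tx, Ty, Tz) observer translation velocity; eye and head rotation are ignored here
        X, Y, Z = np.asarray(points, dtype=float).T
        x, y = f * X / Z, f * Y / Z                  # perspective projection onto the image plane
        u = (x * T[2] - f * T[0]) / Z                # horizontal image velocity
        v = (y * T[2] - f * T[1]) / Z                # vertical image velocity
        foe = (f * T[0] / T[2], f * T[1] / T[2])     # focus of expansion: where the flow vanishes
        return np.stack([x, y], axis=1), np.stack([u, v], axis=1), foe

For straight-ahead motion (Tx = Ty = 0) the FOE falls at the image center and every flow vector points radially away from it; a forward-left or forward-right trajectory simply shifts the FOE to the corresponding heading point, which is the property exploited in the experiments described below.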
The laboratory research aimed at investigating whether optic flow is actually used for navigation in healthy participants has focused largely on the use of optic flow for controlling heading. Discontinuities in radial velocity across the entire flow field define a FOE, which is particularly important with respect to heading control (Warren & Hannon, 1988; Gibson, 1950; Spiro, 2001; Lappe et al., 1999). In a recent study, Warren et al. (2001) immersed participants in one of four different virtual reality worlds that varied in the amount of optic flow information available to participants via texture and depth cues.  94 The results indicated that given sufficiently rich optic flow information, participants maintained heading control by aligning their goal point with the FOE. While there are other features of the flow field that can contribute to heading control when one is not looking in the direction of heading, such as the displacement direction of objects nearer than fixation (Cutting et al., 1992, 1995, 1999; Priest et al., 1985; Wann & Land, 2000), when the FOE is in view, it is relied upon most heavily for accurate heading judgments (Warren & Saunders, 1995). Notably, the importance of optic flow for navigation is borne out when one considers what happens when one’s ability to process this information degrades. Impaired visual processing of optic flow brought about by neurological damage to areas of the brain involved in motion processing disrupts navigational abilities. For example, deficits in perceiving optic flow occur regularly in the early stages of Alzheimer's disease (O’Brien et al., 2001), leading to impairments in postural control and navigational abilities (Kavcic et al., 2006). Such impairments manifest themselves as an increased incidence of traumatic injuries due to both falls and car accidents (Rizzo & Nawrot, 1998). Impairments of optic flow perception in Alzheimer’s disease arise from damage to the neural systems involved in the perception of motion, including the magnocellular visual pathway, the pulvinar of the thalamus, and the dorsal visual association cortex, including MT+/V5 (O’Brien et al., 2001). Damage to these regions arising from stroke leads to similar navigational difficulties arising from deficits in maintaining a constant heading due to a tendency to veer off course, perceiving direction from a pattern of coherent motion, and recognizing the 3D structure of objects and their position relative to oneself as one moves (Vaina & Rushton, 2000).  95 Attention Modulates Dorsal Stream Processing Given the importance of optic flow, particularly the heading point, for navigation, it follows that there are neural systems in place for picking up this information, and processing it in such a manner that promotes its effective use in guiding navigational behavior. An attentional sensitivity to optic flow information would thus serve to facilitate the visual processing of stimuli that are critical to coordinating effective navigational behavior. As I will now describe, there is evidence for a link between attention and visual processing in the dorsal stream, which is critical to the analysis of the motion information that comprises optic flow. Attention to a spatial location increases the sensory-evoked excitability of neurons representing that location in extrastriate visual cortex. This increased sensory excitability is referred to as a sensory gain effect, and is observable as an increase in the amplitude of the P1 ERP component (Eason, 1981). 
The P1 ERP component reflects the visual processing that occurs during the feed-forward sweep (Lamme, 2000) of that stimulus representation through occipital cortex, beginning about 70 ms following the onset of the stimulus (Mangun et al., 1997). In humans, the P1 component is underlain by early sensory activity in the lateral extrastriate occipital region, particularly V4 (Clark & Hillyard, 1996; Heinze et al., 1994; Woldorff et al., 1997). The effect of attention is typically conceptualized as facilitating or enhancing the perceptual processing of stimuli appearing at attended locations (Handy et al., 2001; Luck & Ford, 1998; Pashler, 1998; Posner, 1980). More recently, however, it has been suggested that attention may also serve to enhance action-related processing in visuomotor processing regions of the dorsal visual stream (Handy et al., 2003, 2005). For  96 example, attending to an image of a graspable stimulus, such as a coffee cup, increases BOLD activity observable with fMRI in visuomotor processing regions, including inferior parietal lobule and supplementary motor area (Handy et al., 2005). These results suggest that attention may play a role in the activation of the specific motor representations associated with a particular behaviorally relevant stimulus. Speaking to this possibility, attention-related sensory gain occurs for attended stimuli in the periphery, but not for stimuli appearing at the fovea (Handy & Khoe, 2005). Given the greater representation of peripheral stimuli (> 10º eccentricity) in the magnocellular LGN and subsequently the dorsal visual stream relative to foveal stimuli (Baizer, Ungerleider, & Desimone, 1991; Morel & Bullier, 1990), this finding suggests that attention-related sensory gain effects may play a significant role in dorsal stream visuomotor processing. Indeed, neuroimaging studies indicate more BOLD activity in the motion processing areas of the dorsal stream, including complex motion processing regions MT and MST, when a moving stimulus is attended (Gray, 2000; Treue & Maunsell, 1996). In addition, motion aftereffects in MT are reduced when attention is divided under dual task conditions, suggesting that dorsal stream motion processing may be at least partially attention-dependent (Chadhuri, 1990). There is evidence that the dorsal cortical processing of optic flow is enhanced by visual attention. To this point, several studies have demonstrated that attending to the motion of a stimulus is associated with an increased neural response in cortical regions associated with motion processing. For example, BOLD activity in MT+ is increased when the speed of a moving stimulus is attended, compared to when other attributes of the same stimulus are attended (Corbetta et al., 1991; O’Craven et al., 1997). Neuronal  97 activity in LIP, a region associated with processing flow stimuli in order to guide contextappropriate eye movements (Shadlen & Newsome, 1996), is particularly sensitive to attentional enhancements. LIP activity associated with a given stimulus more than doubles (Colby & Goldberg, 1999) when that stimulus is attended for the purpose of performing a task relative to when it is passively viewed (Colby et al., 1996). This finding suggests that attention directed to an object based on its relevance for action serves to enhance the processing of that stimulus in dorsal visuomotor regions. 
There is also evidence to suggest that the selectivity of parietal Area 7a neurons to particular types or directions of optic flow (clockwise rotation, counter-clockwise rotation, radial expansion, or radial compression) can be modulated by attention to particular aspects of motion in a task requiring the inference of structure from patterns of coherent motion created by otherwise unconnected dots (Siegel & Read, 1997). Attentional enhancements in dorsal stream visuomotor processing may influence the planning and execution of context-appropriate action. For example, attention to a nontarget distractor stimulus can disrupt target-directed actions (Chang & Abrams, 1994), suggesting a critical role for attention in preparing or executing visually guided actions. Such evidence supports the notion that an attentional system sensitive to information provided by the optic flow field might contribute to effective navigational behavior. What is more, there is evidence that knowledge of one’s intended actions can influence attentional selectivity. The particular type of target-directed movement required (e.g. pointing vs. grasping) can determine whether attentional tuning is space-based or objectbased (Fischer & Hoellen, 2004). Additionally, a manual response to a target button can be disrupted when a distractor is presented near the hand or on the movement path toward  98 the target, but not when a distractor is presented on the far side of the target (Tipper et al., 1992). Such results are consistent with the idea that attention may be selective for behaviorally relevant information. Together, these lines of inquiry provide converging evidence in support of a link between attention and visuomotor processing in the dorsal visual stream that facilitates the selection and neural representation of behaviorally relevant visual information. Given this link, and the eye movement results reported in Chapter 3, it is reasonable to hypothesize that attentional processes may be sensitive to aspects of an optic flow field providing information that is particularly relevant for navigational behavior, such as the heading point. The experiments presented in Chapter 4 begin to examine this possibility.  Experiment 1 Evidence to suggest that the radial FOE may serve as a potent cue for automatic attentional orienting was provided by one study demonstrating that attention was captured by looming arrays of random dots (Judd et al., 2004). This investigation, however, utilized an object-competition paradigm, in which two individual patches of moving dots flanked a fixation crosshair, one of which displayed random motion, the other of which displayed radial expansion. The results showed that target detection responses were faster for targets appearing over the looming motion patch than for targets appearing over the random motion patch, despite there being no task-related reason to attend to one patch or the other. Thus, it appears that attention was drawn to the looming patch in an automatic, stimulus-driven manner. Although the small, discrete patches used in this study hardly approximate the full-field radial flow that would be experienced  99 during navigation, the results are nevertheless suggestive that attention may be drawn to prominent looming regions of an optic flow field, such as a heading point indicated by the FOE. Experiment 1 directly tested whether spatial attention was oriented reflexively to the heading point in a dynamic display simulating self-motion in depth. 
Participants were presented with dynamic scenes depicting motion in a particular direction through a simple virtual environment. Each dynamic simulation spanned the entire display but gave rise to a discrete heading point located in either the left or right periphery. Attentional orienting to the heading point was tested by measuring responses to target stimuli presented either at or opposite the heading point. Responses to targets were assessed for the presence of well known behavioral and electrophysiological indices of covert attentional orienting, i.e., spatial attention being committed to a location without any accompanying eye shift. Specifically, behavioral performance was examined for the presence of RT facilitation and P1 ERP amplitudes were examined for the presence of an attention-related sensory gain effect for targets appearing at the heading point. Such enhancements in behavioral and electrophysiological measures of target processing would indicate attentional orienting to the heading point.  Method Participants Sixteen neurologically healthy undergraduate students at the University of British Columbia participated with written consent. Participants were right-handed and had normal or corrected-to-normal vision. The data from four participants was discarded, in  100 one case due to equipment malfunctioning, in two cases due to more than half of all trials being rejected due to eye movements, and in one case due to a failure to evoke a recognizable P1 ERP component. Of the twelve participants included in the analysis, the mean age was 21.25 years (sd = 1.96), and 8 were female. Experimental procedures were approved by the University of British Columbia Behavioral Research Ethics Board. Participants were remunerated with ten dollars per hour. The experiment lasted two hours.  Apparatus Participants were seated in a comfortable chair approximately 100 cm from a 17” CRT monitor used to present visual stimuli. A gamepad was used to record responses to the experimental task. Participants used the index or middle finger of their right hand to make one of two possible responses regarding target stimuli. The experimental task was programmed and presented on a Macintosh G4 1.5 GHz Powerbook running MATLAB 7.0.1 (The Mathworks Inc., Natik, MA), with an auxiliary CRT monitor attached via a VGA extension cable. The Powerbook was connected with a USB to serial converter cable to a 386 DX PC computer running a DOS operating system and custom event recording software, which allowed the on-line monitoring of experimental progress and participant performance. The event recording software received coded inputs from the presentation computer describing all types of unique experimental events, and passed them on via the parallel port to a Pentium III workstation running a Solaris operating system (Sun Microsystems, Inc.), which recorded event codes, behavioral responses, and digitized EEG data.  101 Stimuli and Task Each experimental trial consisted of a computer-generated movie clip simulating forward-left or forward-right motion across a textured ground plane. Figure 4.1 provides an illustration of the simulated environment. The motion simulation was designed such that the heading point would appear in either the left or right periphery (11.05 cm (5.26 degrees) lateral eccentricity from center), while participants fixated a crosshair presented centrally. The simulated motion generated an optic flow field that had a focus of expansion (FOE) corresponding to the heading point. 
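The relationship between the simulated motion direction and the FOE can be made concrete with a short numerical sketch. The Python fragment below is purely illustrative (the actual stimuli were generated in MATLAB and POV-Ray, and the focal length, translation vector, eye height, and sampling grid used here are arbitrary values chosen for the example), but it shows why, for pure translation of the observer, image motion vanishes exactly at the point toward which the observer is heading.

import numpy as np

# Arbitrary illustrative parameters (not the thesis values).
f = 1.0                           # focal length of a pin-hole camera, normalized units
T = np.array([0.3, 0.0, 1.0])     # observer translation per unit time: rightward + forward
h = 1.6                           # assumed eye height above the ground plane

# For pure translation, the focus of expansion lies at (f*Tx/Tz, f*Ty/Tz).
foe = f * T[:2] / T[2]

# Sample ground-plane points in camera coordinates (X right, Y down, Z forward).
X, Z = np.meshgrid(np.linspace(-5.0, 5.0, 11), np.linspace(2.0, 20.0, 10))
Y = np.full_like(X, h)            # the ground lies below the camera

# Project to the image plane and compute the translational flow field:
# u = (x*Tz - f*Tx)/Z,  v = (y*Tz - f*Ty)/Z   (no rotation, so no rotational terms)
x, y = f * X / Z, f * Y / Z
u = (x * T[2] - f * T[0]) / Z
v = (y * T[2] - f * T[1]) / Z

# Flow speed equals (Tz/Z) times the image distance to the FOE, so it shrinks
# to zero at the FOE, i.e., at the heading point.
speed = np.hypot(u, v)
dist_to_foe = np.hypot(x - foe[0], y - foe[1])
assert np.allclose(speed, (T[2] / Z) * dist_to_foe)
print("FOE (image coordinates):", foe)

Because this identity holds only when the camera translates without rotating, simulations of this kind typically keep the virtual camera facing straight ahead; any simulated rotation (for example, a pursuit eye movement) would displace or abolish the FOE as a marker of heading.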
The horizon of this simple virtual scene bisected the vertical axis of the monitor, such that the lower half of the screen represented the ground plane, and the upper half of the screen was white, representing empty space. The movie clips consisted of 100 individually rendered frames presented at 60 Hz using real-time Open GL texture mapping implemented in the Psych Toolbox (Brainard, 1997; Pelli, 1997) for Matlab. The simulated motion lasted approximately 1670 ms. At the beginning of each trial, before the simulated motion began, the first frame of the movie clip was held for between 500 and 700 ms (selected randomly). The computer-generated 3D environment was created using POV-Ray 3.5 (Persistence of Vision Raytracer Pty. Ltd.). A simple 3D environment, involving nothing more than a ground plane, was created and textured. The texture for the ground plane began with a black and white random noise, which was then smoothed with a Gaussian filter. A virtual camera facing straight ahead was positioned above the ground to simulate viewing of the ground plane from eye level. Using the 3D software, the camera was  102 animated such that it moved through the scene on either a forward-left or forward-right path, while continuing to face straight ahead. At the center of the virtual scene, there was a stationary fixation cross, 0.55 cm square, subtending 0.32 degrees of visual angle. The center of the cross was positioned 0.65 cm (0.37 degrees) above the horizon, and remained stationary as simulated motion across the ground plane took place. Participants were instructed to fixate the cross while they viewed the movie clips, and to respond to target stimuli flashing briefly just above the horizon, either at the heading point of simulated motion, or at an equally eccentric location on the opposite side of the fixation cross. Targets were centered 11.05 cm (5.26 degrees) to either the left or right of center, at the same level as the fixation cross, such that half of the targets appeared at the heading point, and half appeared on the opposite side of the fixation cross. Targets consisted of 1.20 cm x 1.20 cm (0.69 degrees) square patches of five equally-spaced horizontal or vertical black bars. On 80% of trials, a target appeared at a time point randomly selected between 1000 and 1200 ms following the onset of the simulated motion, and remained on the screen for 250 ms. On the remaining 20% of trials, no target appeared. Following the offset of the target, and the completion of the simulated motion through the scene, the screen blanked for a 1500 ms inter-trial interval. The experimental task was to respond as quickly and as accurately as possible using the corresponding response finger as to whether the target patch consisted of horizontal or vertical bars. Participants were informed that although the target would be present most of the time, on some trials it would not appear. Importantly, instructions to participants also emphasized the fact that the direction of motion through the scene did  103 not reliably predict the location of the target stimulus and was of no use in completing the task since the location of the target (left or right) was randomly chosen on each trial. Participants completed 14 experimental blocks of about 3.5 minutes each. In order to insure participant alertness, participants were allowed to rest between experimental blocks for as long as they wished before resuming testing. Each experimental block consisted of 50 trials. 
In each block, 40 trials (80%) were randomly selected to contain a target. The parameters manipulated in trials containing targets were motion direction (forward-left or forward-right), target location (at or opposite heading point), target type (horizontal or vertical bars). In trials containing no target, motion direction (forward-left or forward-right) was manipulated. Five repetitions of each possible combination of these parameters were presented in each block of trials.  104 Figure 4.1: Virtual environment for Experiment 1 Example of a single frame from the video clips presented to participants in Experiment 1. The scene consisted of a textured ground plane, with the horizon bisecting the display horizontally. The fixation cross and an example of the target stimulus (to the left of the fixation cross) are also depicted.  Electrophysiological Recording and Analysis Participants  were  fitted  with  a  25-channel  tin  electrode  cap,  four  electroocculogram (EOG) electrodes, and two reference electrodes. The EOG electrodes were positioned such that they would record both horizontal and vertical eye movements. The two horizontal EOG electrodes were placed on the temples at eye level, on the outer canthus of each eye. The two vertical EOG electrodes were placed about 1.5 cm below the center of each eye. One reference electrode was placed on each mastoid. All electrode impedances were kept below 5 kΩ. EEG data were collected relative to the left mastoid  105 electrode with a Grass Instruments Model 12 amplifier. The EEG was amplified with a gain of 50,000 and a half-amplitude bandpass of 0.1 to 30 Hz. Data were digitized on-line at 256 Hz using an analog to digital signal converter box (National Instruments, model pc-6170e) running between the amplifier and the Pentium III workstation. ERPSS software (Event-Related Potential Software System, UCSD ERP Lab) was used for the analysis of EEG data. Trials including eye movement artifacts, blinks, muscle potentials, or amplifier blocking were not included in subsequent analysis. The EEG data were digitally rereferenced to the average of the left and right mastoids, and low-pass Gaussian filtered (25.6 half-amplitude cutoff) to eliminate high frequency artifacts prior to statistical analysis. ERPs were created by defining 3000 ms epochs beginning 1500 ms before stimulus onset. Event-related potentials (ERPs) were identified for both target-present and target-absent trials. In the case of target-present trials, ERPs were time-locked to the onset of the target stimulus. For target-absent trials, ERPs were time-locked to equivalent time-points following the onset of simulated motion through the scene. This was done in order to assess any systematic neural response to the simulated movement that would have been ongoing when the target was presented. The ERP waveforms associated with target-absent trials were subtracted from the waveforms associated with target-present trials prior to statistical analysis so as to isolate the ERP associated with the onset of the target from any ERP associated with the ongoing motion in the display. Target-absent waveforms for trials depicting forward-left motion were subtracted from target-present trials depicting forward-left motion. Likewise, target-absent waveforms associated with forward-right motion were subtracted from target-present waveforms associated with  106 forward-right motion. The residual ERP waveforms were used in all statistical analyses. 
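The logic of forming these residual, target-locked ERPs can be summarized in a few lines of code. The NumPy sketch below is illustrative only: the actual analysis was carried out with the ERPSS package, and the array names, simulated data, and event lists are assumptions made for the example. It nevertheless captures the epoching, condition averaging, subtraction, and pre-stimulus baseline correction, applied separately to forward-left and forward-right trials.

import numpy as np

fs = 256                         # sampling rate (Hz), as in the recording described above
n_pre = int(1.5 * fs)            # epochs span 1500 ms before...
n_post = int(1.5 * fs)           # ...to 1500 ms after the time-locking event

def epoch_average(eeg, event_samples):
    """Average event-locked epochs drawn from a (channels x samples) EEG array."""
    epochs = [eeg[:, s - n_pre:s + n_post] for s in event_samples
              if s - n_pre >= 0 and s + n_post <= eeg.shape[1]]
    return np.mean(epochs, axis=0)

def residual_erp(eeg, target_present_events, target_absent_events):
    """Target-present ERP minus the target-absent ERP for one motion direction."""
    erp = epoch_average(eeg, target_present_events) - epoch_average(eeg, target_absent_events)
    # Baseline-correct relative to the -200 to 0 ms pre-stimulus window.
    b0, b1 = n_pre - int(0.2 * fs), n_pre
    return erp - erp[:, b0:b1].mean(axis=1, keepdims=True)

# Illustrative use with simulated data (10 channels, 5 minutes of artifact-free EEG).
rng = np.random.default_rng(0)
eeg = rng.normal(size=(10, fs * 300))
present = rng.integers(n_pre, eeg.shape[1] - n_post, size=40)   # target-onset samples
absent = rng.integers(n_pre, eeg.shape[1] - n_post, size=10)    # matched target-absent points
erp = residual_erp(eeg, present, absent)   # (channels x 768) residual waveform: 3000 ms at 256 Hz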
All amplitude measures, statistical analyses, and waveform displays were conducted relative to a -200 to 0 ms pre-stimulus baseline.

Results

Behavioral Data

Trials containing no artifacts in the EEG signal were included in an analysis of accuracy and response time. Differences in accuracy and RT between responses to consistent and inconsistent targets were assessed with two separate repeated measures ANOVAs, with heading-target correspondence (consistent vs. inconsistent) as the within-subject factor in each instance. Response anticipations (RTs less than 100 ms) and failures to respond in a timely manner (RTs greater than 1000 ms) were considered incorrect responses and removed from the RT analysis, as were inaccurate responses. If attention was automatically oriented to the heading point, then target discrimination RTs should be faster on consistent trials, in which the target appears at the heading point, than on inconsistent trials, in which the target appears opposite the heading point. Incorrect responses, response anticipations, and slow responses represented less than 10% of the data. Accuracy data are listed in Table 4.1. There was no difference in the accuracy of responses to consistent and inconsistent targets [F(1, 11) = 0.15, p = 0.91]. Average response times and standard deviations for correct responses are listed in Table 4.2. The ANOVA performed on RT also showed no significant main effect of heading-target correspondence [F(1, 11) = 0.56, p = 0.48].

Table 4.1: Accuracy Data for Experiment 1
Accuracy of target discrimination responses and standard deviations for each condition are listed. There was no significant difference in accuracy between responses to consistent and inconsistent targets.

Heading-Target Correspondence    Accuracy (% Correct)    SD
Consistent                       90.74                   6.49
Inconsistent                     90.86                   5.93

Table 4.2: RT Data for Experiment 1
Mean RTs and standard deviations (ms) for each condition are listed. There was no significant difference in RT between responses to consistent and inconsistent targets.

Heading-Target Correspondence    Response Time (ms)    SD
Consistent                       590.84                109.43
Inconsistent                     593.75                109.34

Electrophysiological Data

An average of 31.21% of trials, including both target-present and target-absent trials, were rejected due to EEG artifacts, primarily eye movements and blinks. Grand-averaged residual ERP waveforms time-locked to target onset are plotted in Figure 4.2. Peak amplitude measures for the P1 waveform associated with target onset were obtained by identifying the latency of the P1 peak at lateral occipital electrode sites (OL and OR) contralateral to the visual field of target presentation for each condition of interest in the grand-averaged residual waveforms. Voltage measures at those latencies were then obtained for each participant's residual waveforms. The peak P1 amplitude measures, averaged over participants for each condition, are listed in Table 4.3. Peak P1 amplitude measures were analyzed with a repeated measures ANOVA, with heading-target correspondence (consistent vs. inconsistent) as the within-subject factor. If attention was automatically oriented to the heading point, then these P1 amplitudes should be larger for consistent trials than for inconsistent trials. The ANOVA showed no significant main effect of heading-target correspondence [F(1, 11) = 0.29, p = 0.61].
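For concreteness, the two-step peak measurement just described (finding the P1 peak latency in the grand-averaged residual waveform at the contralateral occipital site, then reading each participant's voltage at that latency) can be sketched as follows. This is a hypothetical illustration: the 80-140 ms search window is a typical P1 range assumed for the example rather than a value taken from the analysis, and the data are random placeholders. The condition means obtained with the actual procedure are listed in Table 4.3.

import numpy as np

fs = 256                                   # sampling rate (Hz)
t = np.arange(-0.2, 0.3, 1 / fs)           # target-locked measurement time axis (s)

def peak_p1(grand_avg, subject_erps, window=(0.08, 0.14)):
    """Find the P1 peak latency in the grand average (one occipital channel),
    then return each participant's voltage at that latency."""
    in_window = (t >= window[0]) & (t <= window[1])
    peak_idx = np.flatnonzero(in_window)[np.argmax(grand_avg[in_window])]
    return subject_erps[:, peak_idx], t[peak_idx]

# Random placeholder waveforms standing in for the residual ERPs (12 participants).
rng = np.random.default_rng(1)
erps_consistent = rng.normal(size=(12, t.size))
erps_inconsistent = rng.normal(size=(12, t.size))

p1_con, lat_con = peak_p1(erps_consistent.mean(axis=0), erps_consistent)
p1_inc, lat_inc = peak_p1(erps_inconsistent.mean(axis=0), erps_inconsistent)
print("grand-average P1 latencies (s):", lat_con, lat_inc)
print("mean P1 amplitude (consistent, inconsistent):", p1_con.mean(), p1_inc.mean())

One common rationale for measuring every participant at the grand-average peak latency, rather than at each participant's own peak, is that it avoids inflating amplitude estimates by selecting on single-subject noise.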
Table 4.3: Peak P1 component amplitudes Peak P1 ERP component amplitudes associated with target onset, in microvolts (µV), are listed for each condition. There was no significant difference in the amplitude of the P1 component between consistent and inconsistent targets. Heading-Target Correspondence  P1 Amplitude (µV)  SD  Consistent  1.96  1.46  Inconsistent  1.85  1.77  109 Figure 4.2: Grand-averaged ERP waveforms for Experiment 1 Grand-averaged ERP waveforms, time-locked to the onset of the target are plotted for consistent and inconsistent target conditions. Time is represented along the x-axis, ranging from -200 ms to 300 ms, with tick marks denoting 100 ms intervals. The y-axis crosses at the 0 ms time point, when the target occurred, and extends vertically to ± 2 µV. The top panel depicts target-related activity for each condition at the horizontal EOG electrode. The flat HEOG plots indicate the absence of eye movements associated with the onset of the target following the rejection of trials containing eye movement artifacts. The bottom panel depicts target-related ERP waveforms measured at lateral occipital (OL/OR) electrode sites, contralateral to the target location. The Target Absent column depicts the signal-averaged EEG activity for trials in which no target appeared, timelocked to the time point at which the target would have appeared had it been presented. There was no difference in the amplitude of the P1 waveform at occipital electrode sites between consistent and inconsistent targets.  Discussion The hypothesized heading-target correspondence effect was not significant in any of the analyses. Statistically, accuracy, RTs and P1 amplitudes were equivalent for consistent and inconsistent targets. These results indicate that in this experiment,  110 attention was not automatically oriented to the simulated heading point. This finding runs counter to the hypothesized outcome. It is possible, however, that the stimulus employed here did not provide adequate cues for simulating the visual stimulation associated with self-motion in depth. The simulated environment was very sparse, with the only depth cue being the texture of the ground plane. Since this texture was filtered with a Gaussian kernel, the result was a fairly smooth, low contrast surface, which may not have provided a sufficiently robust visual cue for simulating motion in depth. There is some evidence to suggest that the use of optic flow information in the control of heading during navigation is weighted by the amount of textural information present in the optic flow field (Warren et al., 2001). By presenting a self-motion simulation with textural and depth information that was severely impoverished compared to that encountered in the real world or in the study reported in Chapter 3, it is possible that we set up conditions under which the optic flow field lacked potential behavioral utility. The central question addressed in the present chapter is whether attention can be automatically oriented to optic flow as a meaningful, behaviorally relevant stimulus. If the optic flow stimulus used in Experiment 1 was impoverished in terms of providing reliable heading information akin to that used for navigational control, then it is possible that this optic flow field was not a behaviorally relevant stimulus. Another potential explanation for the null result in the present study is the use of a relatively long 1000-1200 ms SOA. 
A long SOA was selected in order to maximize the likelihood of finding an observable P1 ERP component by presenting target stimuli well after the onset of the motion display, which we thought might lead to sensory saturation effects in the visual cortical regions that give rise to the P1 component. It is possible,  111 however, that the failure to observe attention-related facilitation in behavioral and ERP measures was the result of sampling these measures at too late a time point following the onset of motion. Speaking to this possibility, typical peripheral cueing studies reveal reflexive orienting effects that reach their maximum by about 300 ms, and decline rapidly, often being replaced by an inhibitory (IOR) effect (Jonides, 1981; Posner & Cohen, 1984). At the outset we suspected that orienting in response to optic flow may lead to facilitation effects that persist for longer periods, as demonstrated in more recent studies of orienting in response to meaningful stimuli such as arrows or eye gaze (e.g. Friesen & Kingstone, 1998; Ristic, Friesen & Kingstone, 2002). It is possible, however, that attention was oriented to the heading point rapidly following the onset of motion, and declined by the time performance measures were sampled at 1000-1200 ms post-onset.  Experiment 2 In order to better test for a tendency to orient attention automatically to the heading point in a simulation of self-motion in depth, Experiment 2 improved upon Experiment 1 in two ways. First, the quality of the motion simulation was improved such that a more compelling sense of self-motion in depth was achieved. Textural and depth information was increased by situating numerous solid objects at random locations within the scene (Figure 4.3). Second, two SOA ranges were tested, including a much earlier SOA (200-400 ms). Given these substantial changes, and the uncertainty regarding the explanation for the null result of Experiment 1, Experiment 2 tested behavioral measures only, omitting the ERP measure in favor of first establishing whether there is behavioral evidence to suggest attentional orienting to the heading point.  112 By adding objects to the scene, the increased textural and depth information provided a richer, more complex pattern of optic flow, which is known to increase the degree to which optic flow information is relied upon for navigational control (Warren et al., 2001). As a result, in addition to addressing the primary research question of whether attention is oriented automatically to the heading point in a dynamic navigational simulation, the results of Experiment 2 will speak to the issue of whether automatic orienting to the heading point is modulated by the utility of the visual information present in a given optic flow field for controlling navigational behavior.  Figure 4.3: Virtual environment for Experiment 2 Example of a single frame from the video clips presented to participants in Experiment 2. The scene consisted of a textured ground plane, with the horizon bisecting the display horizontally, and 100 striped columns dispersed pseudo-randomly throughout the scene. The fixation cross and an example of the target stimulus (to the left of the fixation cross) are also depicted.  113 Methods Participants Eleven neurologically healthy undergraduate students at the University of British Columbia participated with written consent. Participants were right-handed and had normal or corrected-to-normal vision. Data from two participants were discarded due to technical malfunctions. 
Of the nine remaining participants, the mean age was 20.5 years (sd = 2.45), and 7 were female. Experimental procedures were approved by the University of British Columbia Behavioral Research Ethics Board. Participants were remunerated with course credit. The experiment lasted one hour.  Apparatus Participants were tested with an apparatus identical to that employed in Experiment 1, with the exception that EEG was not collected.  Stimuli and Task The stimuli and task were very similar to those described in Experiment 1. The virtual scene employed in Experiment 2 depicted the same textured ground plane as that presented in Experiment 1. In order to create a more richly textured environment in Experiment 2, however, static objects were added to the scene, and distributed pseudorandomly across the ground plane. These objects consisted of 100 short cylindrical columns textured with barbershop-style black and white stripes that sloped leftward or rightward down the columns (Figure 4.3). The distribution of the columns in the scene was random with the exceptions that the columns did not occupy any location along the  114 motion path, and did not overlap. Five different scenes were created, each with a unique random distribution of columns. The scene to be presented on a given trial was selected randomly prior to the start of the trial. The experimental target discrimination task was identical to that in Experiment 1. In Experiment 2, however, for the 80% of trials in which a target would appear, it could appear during one of two possible intervals following the onset of motion. By testing for the presence of attentional cueing effects at two time points following the onset of motion, the time course of attentional cueing could be assessed. Experiment 2 tested both early (200-400 ms) and late (800-1000 ms) SOA intervals. An average of 8.3 experimental blocks (SD = 1.34) was completed by each participant. Participants completed as many experimental blocks as possible within the allotted one hour testing period. There was some variability in the number of blocks possible due to differences in how much time each participant needed to rest between blocks. In order to insure participant alertness, participants were allowed to rest between experimental blocks for as long as they wished before resuming testing. Each block lasted about 7 minutes, and consisted of 100 trials. On half of trials, forward-left motion was simulated. On the other half of trials, forward-right motion was simulated. In each block, 80 trials (80%) were randomly selected to contain a target. The parameters manipulated in trials containing targets were motion direction (forward-left or forward-right), target location (at or opposite heading point), target type (horizontal or vertical bars), and SOA (200-400 ms Or 800-1000 ms). In trials containing no target, motion direction (forwardleft or forward-right) and SOA (200-400 ms Or 800-1000 ms) were manipulated (note: for target-absent trials, SOA refers to the interval between the onset of motion and the  115 time point at which the target would have been presented had it appeared). Five repetitions of each possible combination of these parameters were presented in each block of trials.  Results Incorrect responses, response anticipations and slow responses represented less than 10% of the data. Accuracy data are plotted in Table 4.4. Accuracy (proportion correct over all trials) was analyzed with a 2 x 2 repeated measures ANOVA, with heading-target correspondence (consistent vs. 
inconsistent), and SOA (200-400 ms vs. 800-1000 ms) as within-subject factors. The ANOVA of the accuracy data revealed no significant difference in response accuracy to targets appearing at or opposite the heading point [F(1, 8) = 2.01, p = 0.19]. Neither the main effect of SOA [F(1, 8) = 0.92, p = 0.37] nor the interaction [F(1, 8) = 1.39, p = 0.27] were significant. Response anticipations (RTs less than 100 ms) and failures to respond in a timely manner (RTs greater than 1000 ms) were removed from the RT analysis. The resulting average response times are plotted in Figure 4.4. RT was analyzed using a 2 x 2 repeated measures ANOVA, with heading-target correspondence (consistent vs. inconsistent), and SOA (200-400 ms vs. 800-1000 ms) as within-subject factors. The ANOVA revealed a significant main effect of heading-target correspondence [F(1, 8) = 7.43, p = 0.02], indicating significantly faster responses to targets appearing at the heading point than to target appearing at the opposite location. There was also a significant main effect of SOA [F(1, 8) = 10.79, p = 0.01], indicating faster response times for targets appearing at the later SOA than for targets appearing at the earlier SOA. This finding is a well-known  116 phenomenon referred to as the foreperiod effect (Bertelson, 1967). In the context of the present experiment, the onset of the motion served as a warning signal to prepare temporally for the onset of the target. At the longer SOA, participants had more time to prepare to respond to the target than at the earlier SOA, which resulted in faster responses. The heading-target correspondence x SOA interaction was not significant [F(1, 8) = 0.14, p = 0.72].  Table 4.4: Accuracy data for Experiment 2 Accuracy of target discrimination responses and standard deviations for each condition are listed. There was no significant difference in accuracy between responses to consistent and inconsistent targets for either SOA range.  SOA 200 - 400 ms 800 - 1000 ms  Heading-Target Correspondence  Accuracy (% Correct)  SD  Consistent  92.60  5.10  Inconsistent  90.30  5.70  Consistent  90.00  8.30  Inconsistent  89.70  6.50  117 Figure 4.4: RT data for Experiment 2 Mean RTs (ms) for each condition are plotted. Error bars indicate the standard error. Responses to targets appearing at the heading point were faster than responses to targets appearing opposite the heading point, at both SOAs.  Discussion The results showed that participants were faster to discriminate the target when the target appeared at the heading point than when it appeared opposite the heading point. This suggests that attention was oriented to the heading point despite the fact that motion direction was irrelevant to completing the task, and despite the fact that participants were instructed to ignore the motion direction. Thus, it appears that the provision of richer texture and depth cues via the placement of static objects in the scene had the desired effect of improving the effectiveness of the self-motion simulation, bringing about automatic attentional orienting to the heading point, even when the observer was purposefully trying to ignore the movement direction. Importantly, the facilitation of RT for targets appearing at the heading point occurred at both the earlier and later SOAs, suggesting a rapid orienting of attention in  118 response to optic flow that persisted for at least a second following the onset of motion. 
In typical spatial cueing experiments, the rapid orienting of attention has been thought to indicate reflexive orienting (e.g. Jonides, 1981). While cueing effects seen in spatial cueing tasks utilizing non-predictive peripheral onsets are typically replaced with an inhibition of return effect within about half a second following the cue stimulus (Friesen & Kingstone, 2003; Posner & Cohen, 1984), the present data pattern parallels more recent work demonstrating longer-lasting automatic orienting effects in response to more meaningful stimuli (Friesen & Kingstone, 1998; Kingstone et al., 2000). The present results provide evidence that a particular pattern of optic flow can influence the spatial allocation of visual attention. A distinction can be drawn between the present results and studies demonstrating attentional capture by moving stimuli (e.g. Abrams & Christ, 2005; Franconeri & Simons, 2005). Critically, Experiment 2 revealed that attention was directed automatically to a particular location within a global motion stimulus that spanned the entire display. While it has been suggested that attentional capture by moving stimuli is brought about by the onset of motion rather than the quality of motion itself (Abrams & Christ, 2005), Experiment 2 yields evidence of spatially localized attentional orienting to the heading point despite the onset of motion occurring simultaneously across the entire stimulus display. Thus, what served as a cue for attentional orienting in the present experiment was not simply the onset of discrete motion within an otherwise static display, but rather an informative, localized attribute within the global pattern of optic flow.  119 Experiment 3 Given the RT facilitation observed for targets appearing at the heading point in Experiment 2, we hypothesized that stimuli appearing at the heading point would be subject to an attention-related sensory gain effect. Initial research investigating the modulation of visual perceptual processing by attention used predictive cues to inform participants where to direct their attention volitionally (Mangun & Hillyard, 1991; Van Voorhis & Hillyard, 1977). This early research established an increase in the amplitude of the P1 ERP waveform as a reliable correlate of visual attention directed volitionally. It is only more recently that the same methodology has been used to demonstrate that the reflexive, automatic orienting of attention also results in an enhancement of early perceptual processing, as indicated by a larger P1 amplitude (e.g. Hopfinger & Mangun, 1998; 2001; Kennett, Eimer, Spence & Driver, 2001; McDonald & Ward, 2000). The aim of the present study was to test for the presence of a sensory gain effect associated with the automatic orienting of attention to the heading point in a self-motion simulation. The presence of a sensory gain effect, as indicated by relatively larger P1 component amplitudes for targets appearing at the heading point, would demonstrate an attentional sensitivity to patterns of optic flow that serves to increase the visual-evoked excitability of neurons in lateral occipital visual cortex representing that spatial location. Experiment 3 exactly replicated the design of Experiment 2, but added EEG recording as a dependent measure.  120 Method  Participants Eleven neurologically healthy undergraduate students at the University of British Columbia participated with written consent. Participants were right-handed and had normal or corrected-to-normal vision. 
Data from three participants were discarded, one due to equipment malfunction and two due to a failure to evoke a recognizable P1 ERP component. Of the eight remaining participants, the mean age was 20.5 years (sd = 2.45), and 5 were female. Experimental procedures were approved by the University of British Columbia Behavioral Research Ethics Board. Participants were remunerated with ten dollars per hour of their time. The experiment lasted two hours.

Apparatus and Electrophysiological Recording

The experimental apparatus and electrophysiological recording procedures were identical to those employed in Experiment 1.

Stimuli and Task

The stimuli and task were identical to those described in Experiment 2. An average of 7.5 experimental blocks (SD = 2.20) was completed by each participant. There was some variability in the number of blocks possible due to differences in rest times and how much time it took to fit the electrode cap.

Results

Behavioral Data

Accuracy data are listed in Table 4.5. Accuracy (proportion correct over all trials) was analyzed with a 2 x 2 repeated measures ANOVA, with heading-target correspondence (consistent vs. inconsistent) and SOA (200-400 ms vs. 800-1000 ms) as within-subject factors. The ANOVA performed on the accuracy data revealed a significant heading-target correspondence x SOA interaction [F(1, 7) = 11.28, p = 0.01]. Follow-up t-tests revealed that this interaction was driven by the presence of a heading-target correspondence effect at the later [t(7) = 3.66, p = 0.01], but not the earlier SOA [t(7) = 1.31, p = 0.23]. At the later SOA, responses to targets appearing opposite the heading point were more accurate (95.30%) than responses to targets appearing at the heading point (92.35%). Overall, the ANOVA main effect of heading-target correspondence on response accuracy was not significant [F(1, 7) = 1.07, p = 0.34]. Average response times are plotted in Figure 4.5. The data for correct responses on trials containing no eye movement artifacts in the EEG signal were submitted to an analysis of response times. Response anticipations (RTs less than 100 ms) and failures to respond in a timely manner (RTs greater than 1000 ms) were removed from the analysis. Incorrect responses, response anticipations, and slow responses represented less than 6% of the data after EEG artifact rejection. The remaining response times were analyzed using a 2 x 2 repeated measures ANOVA, with heading-target correspondence (consistent vs. inconsistent) and SOA (200-400 ms vs. 800-1000 ms) as within-subject factors. The only significant effect revealed by the ANOVA was the main effect of SOA [F(1, 7) = 7.28, p = 0.03], indicating faster responses to targets appearing at the later SOA. Neither the main effect of heading-target correspondence [F(1, 7) = 1.52, p = 0.26] nor the interaction of heading-target correspondence x SOA [F(1, 7) = 0.17, p = 0.70] was significant.

Table 4.5: Accuracy data for Experiment 3
Accuracy of target discrimination responses and standard deviations for each condition are listed. At the later SOA, responses to targets appearing opposite the heading point were significantly more accurate than responses to targets appearing at the heading point.

SOA              Heading-Target Correspondence    Accuracy (% Correct)    SD
200 - 400 ms     Consistent                       95.46                   3.85
200 - 400 ms     Inconsistent                     93.99                   5.04
800 - 1000 ms    Consistent                       92.35                   3.68
800 - 1000 ms    Inconsistent                     95.30                   2.15

Figure 4.5: RT data for Experiment 3
Mean RTs (ms) for each condition are plotted.
Error bars represent the standard error. There was no significant difference in RT between responses to consistent and inconsistent targets, at either SOA.  Electrophysiological Data Trials including eye movement artifacts, blinks, muscle potentials, or amplifier blocking were not included in subsequent analysis. ERPs were created and amplitude measures were obtained in the same manner as described in Experiment 1. An average of 23.83% of trials, including both target-present and target-absent trials, were rejected due to EEG artifacts, primarily due to eye movements and blinks. Grand-averaged residual ERP waveforms time-locked to target onset are plotted in Figure 4.6. Peak amplitude measures for the P1 waveform associated with target onset were obtained by identifying the latency of the P1 peak at lateral occipital electrode sites (OL and OR) contralateral to the visual field of target presentation for each condition of interest in the grand-averaged residual waveforms. Voltage measures at those latencies  124 were then obtained for each participant’s residual waveforms. Peak amplitude measures (Table 4.6) were analyzed with a 2 x 2 repeated measures ANOVA, with heading-target correspondence (consistent vs. inconsistent), and SOA (200-400 ms vs. 800-1000 ms) as within-subject factors. The ANOVA revealed a significant heading-target correspondence x SOA interaction [F(1, 7) = 5.62, p = 0.05]. Follow-up t-tests indicated that this interaction was underlain by significantly larger P1 amplitudes for targets appearing at the heading point than for targets appearing opposite the heading point at the later [t(7) = 5.47, p = 0.02], but not the earlier [t(7) = 0.15, p = 0.88] SOA. Neither the main effect of heading-target correspondence [F(1, 7) = 1.20, p = 0.31] nor the main effect of SOA [F(1, 7) = 1.85, p = 0.22] were significant. Although there was no a priori prediction of any attention-related sensory gain effect on the very early visual processing indexed by the C1 ERP component, this effect seemed to be present at the early SOA. Peak amplitude measures for the C1 component associated with target onset were obtained by identifying the latency of the C1 peak (maximal negative amplitude between 60 and 90 ms post-stimulus) at lateral occipital electrode sites (OL and OR) contralateral to the visual field of target presentation for both consistent and inconsistent targets in the grand-averaged residual waveforms (Table 4.7). Voltage measures at those latencies were then obtained for each participant’s residual waveforms, and submitted to a 2 x 2 repeated measures ANOVA, with heading-target correspondence (consistent vs. inconsistent), and SOA (200-400 ms vs. 800-1000 ms) as within-subject factors. The results revealed a significant heading-target correspondence x SOA interaction [F (1, 7) = 11.19, p = 0.01]. Neither the main effect of heading-target correspondence [F(1, 7) = 0.07, p = 0.80] nor the main effect of SOA [F(1, 7) = 1.72, p =  125 0.23] were significant. To follow up the significant interaction, separate ANOVAs, with heading-target correspondence (consistent vs. inconsistent) as a within-subject factor were conducted at each SOA. At the early SOA, there was a significant main effect of heading-target correspondence [F(1, 7) = 7.62, p = 0.03], indicating a larger-amplitude C1 component for consistent than inconsistent targets. At the late SOA, there was no difference in C1 amplitude between consistent and inconsistent targets [F(1, 7) = 4.44, p = 0.07].  
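Because every factor in these analyses has only two levels, each F test in the 2 x 2 within-subject ANOVA is numerically equivalent to a squared paired-samples t statistic computed on the corresponding contrast of condition means, and the follow-up simple-effects tests are ordinary paired t-tests. The SciPy sketch below illustrates that structure; the data are random placeholders rather than the amplitudes reported in Tables 4.6 and 4.7.

import numpy as np
from scipy import stats

# Rows are participants; columns are the four cells, ordered
# [early/consistent, early/inconsistent, late/consistent, late/inconsistent].
rng = np.random.default_rng(2)
cell = rng.normal(size=(8, 4))             # placeholder amplitudes for 8 participants

contrasts = {
    "correspondence": cell[:, [0, 2]].mean(axis=1) - cell[:, [1, 3]].mean(axis=1),
    "SOA":            cell[:, [0, 1]].mean(axis=1) - cell[:, [2, 3]].mean(axis=1),
    "interaction":    (cell[:, 0] - cell[:, 1]) - (cell[:, 2] - cell[:, 3]),
}

for name, scores in contrasts.items():
    t_stat, p = stats.ttest_1samp(scores, 0.0)     # one-sample t on the contrast scores
    print(f"{name}: F(1, {len(scores) - 1}) = {t_stat ** 2:.2f}, p = {p:.3f}")

# Follow-up simple effects at each SOA (paired t-tests, as reported in the text).
print(stats.ttest_rel(cell[:, 0], cell[:, 1]))     # early SOA: consistent vs. inconsistent
print(stats.ttest_rel(cell[:, 2], cell[:, 3]))     # late SOA:  consistent vs. inconsistent

In this equivalence the sign of t is lost, but the p-value matches that of the F test, which is why each effect carries (1, 7) degrees of freedom with the eight participants retained here.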
126 Figure 4.6: Grand-averaged ERP waveforms for Experiment 3 Grand-averaged ERP waveforms, time-locked to the onset of the target are plotted for consistent and inconsistent target conditions. Time is represented along the x-axis, ranging from -200 ms to 300 ms, with tick marks denoting 100 ms intervals. The y-axis crosses at the 0 ms time point, when the target occurred, and extends vertically to ± 2 µV. The top panel depicts target-related activity for each condition at the horizontal EOG electrode. The relatively flat HEOG plots indicate the absence of eye movements associated with the onset of the target following the rejection of trials containing eye movement artifacts. The bottom panel depicts target-related ERP waveforms measured at lateral occipital (OL/OR) electrode sites, contralateral to the target location. The “Target Absent” columns depict the signal-averaged EEG activity for trials in which no target appeared, time-locked to the time point at which the target would have appeared had it been presented. Consistent targets were associated with a larger-amplitude C1 component at the early SOA and a larger-amplitude P1 component at the later SOA at occipital electrode sites.  127 Table 4.6: Peak P1 component amplitudes Peak P1 ERP component amplitudes associated with target onset and standard deviations, in microvolts (µV), are listed for each condition. The amplitude of the P1 component was larger for targets appearing at the heading point at the later SOA. SOA (ms) 200 - 400  800 - 1000  Heading-Target Correspondence  P1 Amplitude (µV)  SD  P1 Amplitude (µV)  SD  Consistent  4.83  1.44  3.94  2.73  Inconsistent  5.04  1.36  3.04  3.82  Table 4.7: Peak C1 component amplitudes Peak C1 ERP component amplitudes associated with target onset and standard deviations, in microvolts (µV), are listed for each condition. The amplitude of the C1 component was larger for targets appearing at the heading point at the earlier SOA. SOA (ms) 200 - 400  800 - 1000  Heading-Target Correspondence  C1 Amplitude (µV)  SD  C1 Amplitude (µV)  SD  Consistent  -0.09  1.25  -0.08  0.57  Inconsistent  0.83  1.45  -0.86  1.36  Discussion Experiment 3 was designed to test whether the sensory processing of target stimuli could be influenced by patterns of optic flow resembling the dynamic visual stimulation that accompanies self-motion. Given the behavioral evidence in Experiment 2 for automatic orienting to the heading point, it was anticipated that targets appearing at  128 the heading point would be subject to a sensory gain effect, as indicated by a larger amplitude P1 component. While the ERP results indeed revealed the presence of a sensory gain effect for targets appearing at the heading point, this effect was present only at the later SOA. This result was intriguing given that in previous ERP studies of automatic orienting, which have cued attention using primitive visual stimuli such as boxes flashed in the periphery, the attention-related modulation of the P1 is observable at a very early SOA (e.g. 34-234 ms; Hopfinger & Mangun, 1998). The modulation of the P1 component at the later SOA alone in the present experiment lends itself to at least two possible interpretations. One possibility is that at the early SOA (200-400 ms), the duration of exposure to the motion stimulus was not sufficient to affect the early perceptual processing indexed by the P1 component. 
Alternatively, it is possible that the absence of P1 modulation by the heading point at the early SOA was the result of the motion stimulus being a relatively poor cue for automatic orienting, as indicated by the absence of a RT facilitation effect. Based on the present data, it is unclear as to which of these explanations is correct. The modulation of the C1 component at the early SOA, however, indicates that the heading point in the motion simulation provided a sufficiently robust signal for automatically biasing visual processing in striate cortex. Thus, while perhaps not a strong enough cue to bring about facilitation of overt behavioral responses, the motion simulation served as a sufficiently informative visual signal for modulating the early sensory processing of stimuli appearing at the heading point. This suggests that the absence of a P1 modulation at the early SOA is not simply tied to the lack of an RT facilitation effect, but rather reflects the need for a longer exposure to the optic flow field in order for the perceptual processing  129 indexed by the P1 for stimuli appearing at the heading point to be modulated. Subsequent experiments will shed additional light on this issue. The C1 component is thought to reflect the early, feed-forward sweep of sensory processing in the primary visual cortex (Clark et al., 1995; Di Russo et al., 2003), peaking around 70-80 ms post-stimulus, with negative potentials for stimuli appearing in the upper visual field, and positive potentials for lower visual field stimuli (Hopfinger & West, 2006). Prior to the present experiment, there has been no evidence that attention directed to a particular spatial location, either volitionally or automatically, could modulate the C1 component (Clark & Hillyard, 1996; Hopfinger & West, 2006; Martinez et al., 2001; Schuller & Rossion, 2005). The one possible example of attentional selection having any modulatory effect on the amplitude of the C1 component is a study that involved the object-based (non-spatial) selection of a reflexively cued surface patch in a stimulus consisting of two spatially overlapping translucent surface patches (Khoe et al., 2005). Most early neurophysiological studies with nonhuman primates paralleled the human ERP results, showing no influence of attention on sensory processing in the primary visual (striate) cortex (Bushnell, Goldberg & Robinson, 1981; Moran & Desimone, 1985; Spitzer, Desimone & Moran, 1988). More recently, however, it has been demonstrated that attentional selection for a spatial location can increase the activity in regions of striate cortex representing the attended spatial location (Motter, 1993). The finding in the present experiment of a larger amplitude C1 component for spatially selected targets – those appearing at the heading point – is therefore a novel one. The enhancement of the very early sensory processing of stimuli appearing at the heading point is suggestive that the attentional selection of this navigationally-relevant region of  130 an optic flow field influences how stimuli at that location are represented at a very early level of perceptual processing. Furthermore, the rapid onset of this attentional modulation of the C1 component, indicated by its presence at the early SOA, supports the idea that attention is oriented to the heading point in an automatic, reflexive manner. 
The presence of a significant attention-related sensory gain effect without a corresponding RT cueing effect has been reported previously (Handy et al., 2001; Handy et al., 2003; Hopfinger & Ries, 2005). The P1 component is a direct, implicit measure of early visual processing, whereas an overt manual response represents the culmination of all processing stages intervening between early perception and motor output. As Hopfinger and Ries (2005) point out, automatic attentional orienting to a stimulus can result in a spatial bias in early perceptual processing that does not persist into the later stages of processing mediating an overt behavioral response. Their study investigating the ERP signatures of stimulus-driven and volitional attention, as well as the relationship between them, led them to conclude that automatic attentional biasing at early stages of processing can be overridden by top-down attentional control settings at later stages of processing. Despite past findings of attentional effects on P1 amplitudes without corresponding RT effects, the absence of RT facilitation in the present experiment was surprising given that it was an exact replication of Experiment 2, in which RT facilitation was observed for targets appearing at the heading point. The stimulus presentation and task were identical in the two experiments, and there was a trend in the hypothesized direction for an RT cueing effect at the later SOA. The accuracy data, however, reveal the emergence of a speed-accuracy tradeoff: although the accuracy was very high overall,  131 accuracy for targets appearing at the heading point at the long SOA was slightly lower than that for targets opposite the heading point. The presence of a speed accuracy tradeoff suggests that although the early perceptual processing indexed by the P1 component revealed an attention-related sensory gain effect for targets appearing at the heading point, later behavioral response processes may have been subject to strategic effects that obscured the RT cueing effect. One might speculate that while the heading point served as a compelling signal for automatic attentional orienting, the task instructions to ignore motion direction, in conjunction with the strict requirements in an ERP experiment to stay very still and minimize all eye movements and blinks, may have brought about a strategic attentional control setting that could not prevent automatic attentional orienting per se, but prevented its having any sustained effect on overt behavior.  Summary of Experiments 1-3 Taken together, the results of Experiments 1-3 are suggestive of the position that attention is oriented to the point of heading in a self-motion simulation that contains sufficiently rich textural cues to produce a robust sensation of motion in depth. Since the empirical goal of these experiments was to ascertain whether attention was automatically oriented to the heading point, the instructions to participants emphasized the importance of ignoring the motion direction. It appears that in Experiment 1, the motion simulation may not have provided sufficiently rich depth cues to simulate the visual stimulation that might accompany self-motion. When the depth information provided by the optic flow stimulus was increased in Experiment 2 by adding static objects to the scene, however, a significant orienting effect was observed in the RT data, despite the conservative  132 instructions to ignore motion direction. 
In Experiment 3, the ERP data revealed evidence of automatic attentional orienting to the heading point in both the C1 (at the early SOA) and the P1 (at the later SOA) components. These results are suggestive that the locus of spatial selection in the visual cortex may shift with continued exposure to a flow stimulus from striate to extrastriate cortex. This interpretation, however, is speculative, given that there was no evidence for a behavioral orienting effect, although there was a trend in the expected direction. It is worth noting that Experiments 1 and 3 represent a novel methodological approach to ERP research in that target-locked ERPs were collected against a backdrop of a complex global motion stimulus. At the outset, we were uncertain as to whether it would be possible to measure target-locked P1 waveforms amidst the ongoing activity associated with the processing of the self-motion simulation. As depicted in Figures 4.2 and 4.6, however, there were no systematic ERP waveforms associated with the ongoing motion stimulus for target-absent trials at the time points at which targets would have appeared. These findings indicate the plausibility of conducting ERP studies that employ more dynamic, complex stimuli that more closely approximate real-world conditions than those used previously. While the data presented in Chapter 4 are suggestive that attention can be oriented automatically to the heading point in an optic flow field, it is clear that in the context of the present experiments, the RT and ERP effects were certainly dependent upon the quality of depth cues provided by the motion simulation. In the next Chapter, I will describe a similar set of experiments, in which several changes were made in order to improve ecological validity in the motion simulations as well as the task instructions.  133 These changes included increasing the field of view of the simulation, adding natural textures and colour to the display, and relaxing the requirement to ignore the motion completely. It was hypothesized that by improving the quality of the motion simulation by more closely approximating the optic flow one might encounter while navigating the real world, that more reliable automatic orienting responses would be observable in both the behavioral and ERP data.  134 Chapter 5: Attention to Optic Flow in a Visually Immersive Motion Simulation Chapter 4 presented data suggesting not only that visual spatial attention is selective for the heading point in an optic flow field, but further, that the region of visual cortex manifesting this selection may dynamically shift during the course of viewing a continuous optic flow stimulus. While provocative, this interpretation is not conclusive given that the ERP results from Chapter 4 supporting this view were not accompanied by corresponding attentional enhancements in behavioral performance. Thus, it remains unclear whether the ERP data truly reflected a dynamic shift in the locus of attentional selection, or rather, were simply a spurious result. The goal of Chapter 5 was therefore to further examine the nature of attentional orienting to patterns of optic flow, with the specific aim of examining whether the intriguing pattern of ERP results observed in Chapter 4 could be replicated in the context of a motion simulation compelling enough to bring about a signature of attentional orienting in behavioral performance measures. 
The motion simulation employed in Chapter 5 was designed to better simulate the patterns of optic flow that accompany self motion in order to provide a more compelling visual signal for the perception of heading. Properties of the display known to influence the subjective sense of heading in a virtual simulation were improved in order to create a more naturalistic optic flow stimulus than that employed in Chapter 4. There was reason to suspect that such changes would increase the likelihood that attention would be oriented automatically to the heading point: it was clear from the results of Chapter 4 that the availability of sufficiently rich texture and depth cues in the motion simulation played an important role in whether or not RT facilitation and attention-related sensory gain would be observed. In particular, while there was no behavioral or ERP evidence of  135 attentional orienting to the heading point when the simulated environment consisted of nothing but a textured ground plane (Experiment 1), behavioral (Experiment 2) and ERP (Experiment 3) indices of attentional orienting to the heading point were observed when texture and depth cues were increased by scattering static objects throughout the scene. Despite this improvement in the motion simulation, however, the facilitation of RT was tenuous, and was not observed in Experiment 3. These data therefore suggested that further improving the motion simulation so as to better approximate the visual stimulation that accompanies self-motion may result in more robust, reliable behavioral orienting effects. In order to improve the motion simulation, changes were made to the qualitative aspects of the display known to be associated with bringing about a sense of presence, or immersion, within a simulated scene. A sense of presence within a virtual environment is critical to achieving effective virtual navigation (Loomis, Blascovich & Beall, 1999), partially owing to resultant improvements in spatial orienting abilities (Bowman et al, 2002), as well as in the illusion of self-motion afforded by the simulation (Reicke & Schulte-Pelkum, 2005; Reicke et al., 2005). It is known that virtual navigational behavior more closely approximates real world navigational behavior when one’s sense of presence or immersion within a virtual environment is improved by increasing the field of view (Lapointe & Vinson, 2002; Lessels & Ruddle, 2004). Presenting a motion simulation across a larger field of view enables a stronger sense of self-motion (Reicke & Schulte-Pelkum, 2006) by providing more realistic and readily available texture, depth, and motion information (e.g. Alfano & Michel, 1990; Kearns et al., 2002; Kirschen et al., 2000; Sinai et al., 1999). Critically, this increased  136 availability of navigation-relevant information through the use of a larger field of view is made possible via the stimulation of a larger portion of visual periphery (Alfano & Michel, 1990). Given that the field of view afforded by human eyes is approximately 200 degrees on the horizontal axis, it is understandable that a strong sense of immersion may not have been afforded by the motion simulations used in Chapter 4. The use of a small 17” monitor limited the field of view occupied by the optic flow stimulus to a mere 24 degrees. The importance of stimulating the periphery in an investigation of attentional orienting to optic flow becomes apparent when one considers the unequal distribution of perceptual abilities across the visual field. 
One particularly relevant example is the greater perceptual sensitivity to looming (expanding) motion in the visual periphery than at the fovea (Giaschi et al., 2007; Shirai et al., 2006). This differential perceptual sensitivity suggests that the heading point may be perceptually more salient when located at a greater visual eccentricity than the 11-degree offset of the heading point from fixation in Chapter 4. Additionally, the attention-related modulation of the neural processing of a given stimulus varies depending on the retinotopic location of that stimulus within the visual field (Handy & Khoe, 2005). Thus, providing a wide field of view has functional consequences with respect to how an optic flow stimulus is processed neurally. The experiments presented in Chapter 5 sought to achieve a more naturalistic optic flow simulation by projecting the motion simulation onto a large viewing screen. Another change made to the motion stimulus in order to increase its potency as a simulation of self-motion was to increase visual fidelity of the scene. Visual fidelity refers to the degree to which a virtual environment resembles the real world environment  137 it was meant to simulate (Waller, Hunt, & Knapp, 1998). Of critical importance to achieving high visual fidelity in a virtual scene is the use of realistic texture maps (2D images or patterns that are mapped onto 3D structures in the virtual scene; Lessels & Ruddle, 2004). In order to improve the visual fidelity of the motion simulations used in the present investigation, natural textures replaced the black and white Gaussian filtered ground plane and striped columns utilized in Chapter 4. The ground was textured with dirt and grass, the columns were textured with grey marble, and the space above the horizon was made sky blue (Figure 5.1). A final change made to improve upon the experiments presented in Chapter 4 was to relax the strict instructions given to participants to ignore the motion stimulus completely. Participants were still informed that the motion direction did not predict the location of the target, and was therefore irrelevant to completing the target discrimination task. Rather than being told to ignore the motion, however, they were instructed to view the stimulus display as a simulation of their own self-motion through the virtual environment. That is, they were told to view the scene as if they were actually moving through it themselves, while maintaining fixation on a central cross. The goal with this manipulation was to immerse the viewer in the scene as naturally as possible by having participants interpret the motion display as though they were actually situated in those scenes. In support of the potential effectiveness of this instructional manipulation are studies showing that top-down cognitive factors such as expectations, knowledge, experience, and the interpretability or meaning of stimuli in the virtual scenes play a significant role in the illusion of self-motion, as well as in the effectiveness of virtual navigational behavior (Reicke & Schulte-Pelkum, 2006; Reicke et al., 2005). Based on  138 this notion that a sense of presence can be enhanced by providing meaningful environmental stimuli, it was reasoned that establishing a cognitive set or intention to view the scene as a virtual reality-type simulation of self-motion rather than actively trying to ignore the motion altogether would allow a more ecologically valid assessment of attentional orienting to the heading point. 
The overall result of the changes made to the stimulus display and instructions was a more naturalistic simulation of self-motion than the one presented to participants in Chapter 4. It was expected that the improved motion simulation would lead to larger, more reliable attentional orienting to the heading point, observable as greater RT facilitation in both the behavioral and ERP experiments.

Figure 5.1: Virtual environment for large-field viewing
Example of a single frame from the video clips presented to participants in Experiments 1 and 2. The scene consisted of an earth-like textured ground plane and a blue sky, with the horizon bisecting the display horizontally. Gray marble columns were scattered pseudorandomly throughout the scene. The fixation cross and an example of the target stimulus (to the right of the fixation cross) are also depicted.

Experiment 1

Experiment 1 again tested whether attention would be oriented automatically to the heading point in a self-motion simulation. Given the improvements made to the motion simulation to provide more naturalistic heading information, however, we expected larger behavioral orienting effects than those observed previously. In order to assess the time course of attentional orienting to the heading point, Experiments 2 and 3 of Chapter 4 presented targets at one of two possible SOAs. Sampling target-directed RTs at a range of SOAs can provide information regarding the temporal onset and offset of attentional orienting effects. Typically, the rapid orienting of attention, observable as RT facilitation at an early SOA (usually around 100 ms), is interpreted as evidence of attentional orienting occurring automatically (Jonides, 1981; Remington, 1978; Muller & Rabbitt, 1989; Cheal & Lyon, 1991). While Experiments 2 and 3 of Chapter 4 sampled behavioral performance and target-related neural processing at two SOAs, the earliest SOA sampled was 200-400 ms. An earlier SOA was not sampled because of initial concerns that visually-evoked ERP components might not be observable so soon after the onset of such a salient, dynamic visual stimulus. Both ERP experiments of Chapter 4, however, confirmed the feasibility of identifying recognizable visually-evoked ERPs for target stimuli presented against the backdrop of a dynamic motion display. Given the success of this methodological innovation, an earlier 100 ms SOA was sampled in order to better establish the automaticity of the expected attentional orienting to the heading point. Additionally, a third, mid-range SOA (500 ms) was included in order to better assess the temporal profile of attentional orienting to the heading point.
The stimuli were presented on a PC (AMD Sempron 2600+ processor, 1.60 GHz, 1.00 GB RAM). A gamepad was used to record responses to the experimental task. Participants used the index or middle finger of their right hand to make one of two possible responses regarding target stimuli. Responses were recorded in a log file on the presentation computer.  Stimuli and Task The task was identical to that employed in Chapter 4. There were, however, several changes made to the stimulus display in order to improve the quality and realism  141 of the self-motion simulation. Again, each experimental trial simulated forward-left or forward-right motion across a textured ground plane scattered with columns, creating a heading point and FOE on the horizon in the left or right periphery. The use of a larger viewing screen, however, increased the field of view of the motion simulation to approximately 55 degrees, with the heading point located 60 cm (21.8 degrees) to the left or right of the central fixation cross. The scenes were created with full color 24 bit bitmap image textures on the ground plane and the columns (Figure 5.1). The ground plane was textured with a green and brown earth-like surface, and the empty space above the ground plane was colored a sky blue (RGB values = 100, 120, 140). The columns were textured with a gray marble surface. The movie clips were created in real time during the execution of the program using Open GL 3D rendering. The movies were 2500 ms in duration, consisting of 250 frames presented at 100 frames per second. At the beginning of each trial, the first frame of the movie clip was held static for 1500 ms before the simulated motion began. As in the previous experiments, the camera was animated such that it moved through the scene on either a forward-left or forward-right path, while continuing to face straight ahead. In addition to motion across a virtual ground plane, the movie clips also depicted a fixation cross, 5 cm square, subtending 1.9 degrees of visual angle, at the center of the screen. The center of the cross was positioned 4.5 cm (1.72 degrees) above the horizon, and remained stationary as simulated motion across the ground plane took place. Participants were again instructed to fixate the cross while they viewed the movie clips, and to respond to target stimuli flashing briefly just above the horizon, either at the  142 heading point of simulated motion, or at an equally eccentric location on the opposite side of the fixation cross. Targets were centered 60 cm (21.8 degrees) to either the left or right of center, at the same level as the fixation cross, such that half of the targets appeared at the heading point, and half appeared on the opposite side of the fixation cross. Targets consisted of 5 cm x 5 cm (1.9 degrees) square patches of 5 equally-spaced horizontal or vertical black bars. On 80% of trials, a target appeared either 100 ms, 500 ms, or 1000 ms following the onset of motion, and remained on the screen for 100 ms. On the remaining 20% of trials, no target appeared. Following the offset of the target, and the completion of the simulated motion through the scene, the screen blanked for a 1500 ms inter-trial interval. The experimental task was still to respond as quickly and as accurately as possible using the corresponding response finger as to whether the target patch consisted of horizontal or vertical bars. Participants were informed that although the target would be present most of the time, on some trials it would not appear. 
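For readers who wish to check the stimulus geometry, the visual angles reported in this section follow from the standard size-distance relations; the values below are simply recomputed from the physical dimensions given above (heading-point offset x = 60 cm, stimulus size s = 5 cm, fixation-cross elevation h = 4.5 cm, viewing distance d = 150 cm), with the symbols introduced here only for illustration.

\[
\theta_{\mathrm{ecc}} = \arctan\frac{x}{d} = \arctan\frac{60}{150} \approx 21.8^{\circ}, \qquad
\theta_{\mathrm{size}} = 2\arctan\frac{s}{2d} = 2\arctan\frac{2.5}{150} \approx 1.9^{\circ}, \qquad
\theta_{\mathrm{elev}} = \arctan\frac{h}{d} = \arctan\frac{4.5}{150} \approx 1.7^{\circ}
\]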
The instructions to participants also emphasized the fact that the direction of motion through the scene did not reliably predict the location of the target stimulus, and that the location of the target (left or right) was randomly chosen on each trial. Importantly, and in contrast to the experiments presented in Chapter 4, participants were not instructed to ignore the motion stimulus. Rather, they were asked to view the motion simulations as though they were present in the scene, moving through the virtual environments themselves. This instruction was given in order to maximize participants' sense of immersion in the virtual environment, providing a more ecologically valid test of whether attention might be oriented to the heading point during navigation of the real world.

An average of 4.9 experimental blocks (SD = 0.62) was completed by each participant. Participants completed as many experimental blocks as possible within the allotted one-hour testing period; there was some variability in the number of blocks completed due to differences in how much time each participant needed to rest between blocks. Each block lasted about 8 minutes and consisted of 90 trials. On half of the trials, forward-left motion was simulated; on the other half, forward-right motion was simulated. In each block, 72 trials (80%) were randomly selected to contain a target. The parameters manipulated in trials containing targets were motion direction (forward-left or forward-right), target location (at or opposite the heading point), target type (horizontal or vertical bars), and SOA (100 ms, 500 ms, or 1000 ms). Three repetitions of each possible combination of these parameters were presented in each block of trials. In trials containing no target, motion direction (forward-left or forward-right) and SOA (100 ms, 500 ms, or 1000 ms) were manipulated (note: for target-absent trials, SOA refers to the interval between the onset of motion and the time point at which the target would have occurred had there been one).
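To make the factorial structure of a block concrete, the sketch below shows one way a 90-trial block with this composition could be assembled. It is an illustrative reconstruction in Python rather than the original C++ task code, and the assumption that the 18 target-absent trials were balanced across motion direction and SOA is the editor's, not the thesis's.

import itertools
import random

def build_block():
    """Assemble one 90-trial block: 72 target-present trials (3 repetitions
    of each factorial cell) plus 18 target-absent trials, in random order."""
    directions = ["forward-left", "forward-right"]
    locations = ["at heading point", "opposite heading point"]
    target_types = ["horizontal", "vertical"]
    soas_ms = [100, 500, 1000]

    # 2 x 2 x 2 x 3 = 24 cells, 3 repetitions each -> 72 target-present trials
    trials = [
        {"direction": d, "target_location": loc, "target_type": t,
         "soa_ms": soa, "target_present": True}
        for d, loc, t, soa in itertools.product(directions, locations,
                                                target_types, soas_ms)
        for _ in range(3)
    ]

    # Remaining 18 trials are target-absent; assumed balanced over direction x SOA
    trials += [
        {"direction": d, "target_location": None, "target_type": None,
         "soa_ms": soa, "target_present": False}
        for d, soa in itertools.product(directions, soas_ms)
        for _ in range(3)
    ]

    random.shuffle(trials)  # 90 trials, 80% containing a target
    return trials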
Results

Accuracy data are listed in Table 5.1. Accuracy (proportion correct over all trials) was analyzed with a 2 x 3 repeated measures ANOVA, with heading-target correspondence (consistent vs. inconsistent) and SOA (100 ms vs. 500 ms vs. 1000 ms) as within-subject factors. The ANOVA of the accuracy data revealed a significant heading-target correspondence x SOA interaction [F(2, 26) = 4.18, p = 0.03], indicating the presence of a heading-target correspondence effect that varied across the levels of SOA. Follow-up t-tests indicated significantly more accurate responses to targets appearing at the heading point than opposite the heading point at the 100 ms SOA [t(13) = 2.96, p = 0.01], but not at the 500 ms SOA [t(13) = 1.24, p = 0.24] or the 1000 ms SOA [t(13) = 0.48, p = 0.64]. There was also a significant main effect of SOA [F(2, 13) = 6.83, p = 0.004], indicating more accurate responses with increasing SOA. The main effect of heading-target correspondence was not significant [F(1, 13) = 3.43, p = 0.09].

Average response times are plotted in Figure 5.2. Incorrect responses, response anticipations (RTs less than 100 ms), and failures to respond in a timely manner (RTs greater than 1000 ms) represented less than 10% of the data and were removed from the analysis. The remaining correct RTs were analyzed using a 2 x 3 repeated measures ANOVA, with heading-target correspondence (consistent vs. inconsistent) and SOA (100 ms vs. 500 ms vs. 1000 ms) as within-subject factors. The ANOVA revealed a significant main effect of heading-target correspondence [F(1, 13) = 9.03, p = 0.01], as well as a significant heading-target correspondence x SOA interaction [F(2, 26) = 6.82, p = 0.004]. Follow-up paired t-tests of the heading-target correspondence effect at each SOA indicated faster responses for targets appearing at the heading point at the 100 ms SOA [t(13) = 4.17, p = 0.001]. A similar trend was observed at the 500 ms SOA, but this effect did not quite reach significance [t(13) = 2.04, p = 0.06]. There was no difference in RT at the 1000 ms SOA for targets appearing at or opposite the heading point [t(13) = 0.67, p = 0.52]. There was also a significant main effect of SOA [F(1, 13) = 30.28, p < .0001], indicating faster RTs overall with increasing SOA.

Table 5.1: Accuracy Data for Experiment 1
Accuracy data and standard deviations for each condition are listed. Target discrimination responses to targets appearing at the heading point (consistent) were more accurate than responses to targets appearing opposite the heading point (inconsistent) at the 100 ms SOA, but not the 500 ms or 1000 ms SOAs.

SOA        Heading-Target Correspondence    Accuracy (% Correct)    SD
100 ms     Consistent                       91.20                    8.70
100 ms     Inconsistent                     86.90                   10.90
500 ms     Consistent                       91.90                    7.80
500 ms     Inconsistent                     89.90                   12.10
1000 ms    Consistent                       91.60                    8.30
1000 ms    Inconsistent                     92.20                   10.30

Figure 5.2: RT data for Experiment 1
Mean RTs (ms) for each condition are plotted. Error bars represent the standard error. Responses to targets appearing at the heading point (consistent) were significantly faster than responses to targets appearing opposite the heading point (inconsistent) at the 100 ms SOA, but not the 500 ms or 1000 ms SOAs.

Discussion

The results demonstrate that increasing the field of view and improving the visual fidelity of the motion simulation in order to better approximate the patterns of optic flow that accompany self-motion had the expected effect of increasing the automatic orienting response to the heading point. The RT facilitation effect was large, approximately three times larger than any observed in Chapter 4, and occurred rapidly following the onset of the motion display, suggesting that attentional orienting to the heading point took place automatically. Importantly, there was a concomitant effect in the accuracy data, with responses at the 100 ms SOA being not only faster, but also more accurate for targets appearing at the heading point than for targets appearing opposite the heading point. This convergence of the RT and accuracy results indicates that attending automatically to the heading point led to more efficient processing of targets appearing at that location than of targets appearing elsewhere. Given that a more naturalistic pattern of optic flow brought about more robust behavioral orienting effects than those observed under a more artificial motion simulation (Chapter 4), the results highlight the importance of broadening the investigation of attentional orienting to include studies using complex, dynamic stimuli that are likely to be behaviorally relevant in the real world. By limiting the study of reflexive visual attention to discrete events using primitive visual stimuli, attention researchers may be underestimating the role that the automatic orienting of attention actually plays in guiding behavior.
Experiment 2

Experiment 2 was conducted to test the hypothesis that the locus of attentional selection in the visual cortex can shift over time with ongoing exposure to dynamic optic flow stimulation approximating that brought about by self-motion. This intriguing possibility was suggested by the results of the previous ERP experiment (Experiment 3, Chapter 4), but could not be confirmed due to the lack of observable behavioral orienting effects. Experiment 2 utilized a more effective motion simulation, found to have brought about a robust behavioral orienting response (Experiment 1). A range of visually evoked ERP components was examined in order to assess the time course of attentional modulations on several stages of information processing occurring in distributed regions of cortex. Of particular interest were the C1 (striate cortex; Clark & Hillyard, 1996), P1 (extrastriate cortex; Heinze et al., 1994), N1 (visual association cortex; Luck, 1995), and P3 (distributed frontal, temporal, and parietal regions; Mulert et al., 2003) components. The attentional modulation of processing at these various neural loci was tracked over time by sampling target-locked ERP responses at both early and late SOAs. The resultant temporal profile of attention-related modulations at multiple stages of target processing would provide insights regarding the nature of attentional orienting in the context of visually dynamic patterns of optic flow simulating those brought about by self-motion.

Under the view that the locus of attentional selection can shift with continued exposure to optic flow, the ERP results from Experiment 3 of Chapter 4 would be interpreted as indicating that the initial automatic attentional selection of the heading point is associated with a processing enhancement in the region of primary (striate) visual cortex representing that location. Over time, however, the locus of attentional selection shifts, such that the processing enhancement occurs at a later stage of visual processing in extrastriate cortical regions representing the heading point. In support of this dynamic view of attentional selection, there is some evidence to suggest that attentional modulations at various stages of target processing, as well as the time course of this modulation at any one stage of processing, depend on moment-to-moment interactions between automatic and volitional orienting processes (Hopfinger & Ries, 2005). In the context of optic flow, however, multiple interacting influences on attentional orienting may arise not only from separate automatic and volitional orienting processes, but also from multiple stages in the ongoing neural analysis of this complex, continuous visual information. By examining the overall pattern of attention-related modulations across a range of processing stages as it emerges over time, insights may be gained regarding not only how attentional selection affects the various stages of neural processing, but also whether these attentional modulations, and thus the locus of attentional selection in the brain, can change over time (e.g., Hopfinger & Ries, 2005; Prime & Ward, 2004). Experiment 2 therefore investigated the influence of an optic flow field simulating self-motion on the orienting of attention, and the time course of attention-related modulations at various stages of the neural processing of visual stimuli.
Method

Participants
Twenty-two neurologically healthy undergraduate students at the University of British Columbia participated with written consent. Participants were right-handed and had normal or corrected-to-normal vision. Data from eight participants were discarded: one due to an equipment malfunction, one due to poor task performance, four due to an excessive number of eye movement artifacts, and two due to the failure to evoke a recognizable P1 ERP component. Of the fourteen remaining participants, the mean age was 22.31 years (sd = 3.35), and 8 were female. Experimental procedures were approved by the University of British Columbia Behavioral Research Ethics Board. Participants were remunerated with ten dollars per hour of their time. The experiment lasted two hours.

Apparatus and Electrophysiological Recording
The testing apparatus was identical to that utilized in Experiment 1, with the exception that EEG was recorded. The presentation computer was connected via a serial port cable to a 486 PC running a DOS operating system and custom event recording software, which allowed the on-line monitoring of experimental progress and participant performance. The event recording software received coded inputs from the presentation computer describing all types of unique experimental events, and passed them on via the parallel port to a Pentium III workstation running a Solaris operating system (Sun Microsystems, Inc.), which recorded event codes, behavioral responses, and digitized EEG data. Electrophysiological recording procedures were identical to those used in Chapter 4.

Stimuli and Task
The stimuli and experimental task were identical to those described in Experiment 1, with the exception that the middle SOA was dropped in order to allow for more trials in the early and late SOA categories. In addition, the early and late SOAs were jittered randomly from 100-300 ms or 900-1100 ms in order to best isolate the ERP components of interest from the ongoing EEG signal. Participants completed an average of 13.27 experimental blocks (SD = 1.68) within the allotted testing time. There was some variability in the number of blocks completed due to differences in rest times and in how much time it took to fit the electrode cap.
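To illustrate the jittering procedure just described, the sketch below shows one way the target SOA could be drawn on each trial. It assumes uniform sampling at millisecond resolution within each window, which is not specified in the text, and the function name is hypothetical.

import random

def draw_soa(category):
    """Draw a jittered target SOA (ms) from the early or late window.

    Jittering the motion-onset-to-target interval helps decorrelate the
    target-evoked ERP from activity evoked by the ongoing motion display.
    """
    windows = {"early": (100, 300), "late": (900, 1100)}
    low, high = windows[category]
    return random.randint(low, high)  # assumed uniform, 1 ms resolution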
Results

Behavioral Data
Accuracy data (proportion correct responses across all trials) are listed in Table 5.2. Accuracy was analyzed with a 2 x 2 repeated measures ANOVA, with heading-target correspondence (consistent vs. inconsistent) and SOA (100-300 ms vs. 900-1100 ms) as within-subject factors. The ANOVA of accuracy scores indicated no significant difference in the accuracy of responses to targets presented at or opposite the heading point, although there was a strong trend in this direction [F(1, 13) = 4.54, p = 0.053]. Additionally, neither the main effect of SOA [F(1, 13) = 3.47, p = 0.09] nor the heading-target correspondence x SOA interaction [F(1, 13) = 0.22, p = 0.64] was significant.

Average response times are plotted in Figure 5.3. Incorrect responses, response anticipations (RTs less than 100 ms), and failures to respond in a timely manner (RTs greater than 1000 ms) represented 11.03% of the data after EEG artifact rejection, and were removed from the analysis. The remaining RTs were analyzed using a 2 x 2 repeated measures ANOVA, with heading-target correspondence (consistent vs. inconsistent) and SOA (100-300 ms vs. 900-1100 ms) as within-subject factors. The ANOVA revealed a significant main effect of heading-target correspondence [F(1, 13) = 5.67, p = 0.03], indicating faster responses for targets appearing at the heading point than for targets appearing opposite the heading point. Neither the main effect of SOA [F(1, 13) = 0.71, p = 0.42] nor the heading-target correspondence x SOA interaction [F(1, 13) = 0.0003, p = 0.99] was significant.

Table 5.2: Accuracy data for Experiment 2
Accuracy scores for target discrimination responses and standard deviations are listed for each condition. There was no difference in accuracy between consistent and inconsistent targets at either SOA.

SOA             Heading-Target Correspondence    Accuracy (% Correct)    SD
100 - 300 ms    Consistent                       87.76                   10.30
100 - 300 ms    Inconsistent                     85.09                   10.66
900 - 1100 ms   Consistent                       90.54                    7.54
900 - 1100 ms   Inconsistent                     88.90                    9.73

Figure 5.3: RT data for Experiment 2
Mean target discrimination RTs (ms) for each condition are plotted. Error bars represent the standard error. Responses to targets appearing at the heading point (consistent) were significantly faster than responses to targets appearing opposite the heading point (inconsistent) at both SOAs.

Electrophysiological Data
Event-related potentials (ERPs) were created by defining 3000 ms epochs beginning 1500 ms before stimulus onset, and were computed for both target-present and target-absent trials. For target-present trials, ERPs were time-locked to the onset of the target stimulus. For target-absent trials, ERPs were time-locked to equivalent time points following the onset of simulated motion through the scene. This was done in order to assess any systematic neural response to the simulated movement that would have been ongoing when the target was presented. The ERP waveforms associated with target-absent trials were subtracted from the waveforms associated with target-present trials prior to statistical analysis, so as to isolate the ERP associated with the onset of the target from any ERP associated with the ongoing motion in the display. Target-absent waveforms for trials depicting forward-left motion were subtracted from target-present waveforms for trials depicting forward-left motion; likewise, target-absent waveforms associated with forward-right motion were subtracted from target-present waveforms associated with forward-right motion. The residual ERP waveforms were used in all statistical analyses. All amplitude measures, statistical analyses, and waveform displays were conducted relative to a -200 to 0 ms pre-stimulus baseline. An average of 23.96% of trials, including both target-present and target-absent trials, were rejected due to EEG artifacts, primarily eye movements and blinks. Grand-averaged residual ERP waveforms time-locked to target onset are plotted in Figure 5.4.

C1 Component
Peak amplitude measures for the C1 component associated with target onset were obtained by identifying the latency of the maximal negative amplitude between 60 and 90 ms post-target onset at lateral occipital electrode sites (OL and OR) contralateral to the visual field of target presentation, for consistent and inconsistent targets at the early SOA, in the grand-averaged residual waveforms. Voltage measures at those latencies were then obtained from each participant's residual waveforms. Peak C1 amplitude measures were analyzed with a 2 x 2 repeated measures ANOVA, with heading-target correspondence (consistent vs. inconsistent) and SOA (100-300 ms vs. 900-1100 ms) as within-subject factors.
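As a schematic illustration of the residual-waveform and peak-measurement procedures described above, the following sketch assumes the single-trial epochs have already been artifact-rejected, baseline-corrected, and sorted by condition into NumPy arrays (trials x time points). The array layout, sampling rate, and function names are illustrative assumptions, not the original analysis code.

import numpy as np

SRATE = 256          # assumed sampling rate (Hz)
EPOCH_START = -1.5   # epochs begin 1500 ms before stimulus onset (s)

def time_to_sample(t_sec):
    """Convert a latency (s, relative to stimulus onset) to a sample index."""
    return int(round((t_sec - EPOCH_START) * SRATE))

def residual_erp(target_present, target_absent):
    """Average each trial set (trials x samples) and subtract the target-absent
    ERP from the target-present ERP, per motion direction, to remove
    activity evoked by the ongoing motion display."""
    return target_present.mean(axis=0) - target_absent.mean(axis=0)

def peak_latency(grand_avg, t_min, t_max, polarity=-1):
    """Find the sample index of the most negative (polarity=-1) or most
    positive (polarity=+1) point of a grand-averaged residual waveform
    within a component's search window (s)."""
    i0, i1 = time_to_sample(t_min), time_to_sample(t_max)
    window = polarity * grand_avg[i0:i1]
    return i0 + int(np.argmax(window))

# Example use for the C1: fix the peak latency from the grand average
# (60-90 ms, negative-going), then read each participant's residual
# amplitude at that latency:
#   c1_lat = peak_latency(grand_avg_residual, 0.060, 0.090, polarity=-1)
#   c1_amp = subject_residual[c1_lat]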
The peak C1 amplitude measures, averaged over participants for each condition, are listed in Table 5.3. As predicted, the ANOVA revealed a significant heading-target correspondence x SOA interaction [F(1, 13) = 8.05, p = 0.01]. The main effect of SOA was also significant, indicating more negative C1 amplitudes at the early SOA [F(1, 13) = 5.33, p = 0.04]. The main effect of heading-target correspondence was not significant [F(1, 13) = 0.09, p = 0.77]. To follow up the significant interaction, separate ANOVAs with heading-target correspondence (consistent vs. inconsistent) as a within-subject factor were conducted at each SOA. At the early SOA, there was a significant main effect of heading-target correspondence [F(1, 13) = 4.92, p = 0.045], indicating a larger-amplitude C1 component for consistent than for inconsistent targets. At the late SOA, there was no significant difference in C1 amplitude between consistent and inconsistent targets [F(1, 13) = 4.58, p = 0.052].

Table 5.3: Peak C1 component amplitudes
Peak C1 ERP component amplitudes associated with target onset and standard deviations, in microvolts (µV), are listed for each condition. The amplitude of the C1 component was larger for targets appearing at the heading point at the early (100-300 ms) SOA.

Heading-Target       100 - 300 ms SOA              900 - 1100 ms SOA
Correspondence       C1 Amplitude (µV)    SD       C1 Amplitude (µV)    SD
Consistent           -1.28                2.07      1.19                1.52
Inconsistent         -0.36                1.91      0.41                1.47

P1 Component
Peak amplitude measures for the P1 component associated with target onset were obtained by identifying the latency of the maximal positive amplitude between 0 and 200 ms post-target onset at lateral occipital electrode sites (OL and OR) contralateral to the visual field of target presentation, for each condition of interest, in the grand-averaged residual waveforms. Voltage measures at those latencies were then obtained from each participant's residual waveforms. Peak P1 amplitude measures were analyzed with a 2 x 2 repeated measures ANOVA, with heading-target correspondence (consistent vs. inconsistent) and SOA (100-300 ms vs. 900-1100 ms) as within-subject factors.

The peak P1 amplitude measures, averaged over participants for each condition, are listed in Table 5.4. The ANOVA revealed a heading-target correspondence x SOA interaction [F(1, 13) = 13.28, p = 0.003], indicating larger P1 amplitudes for targets appearing at the heading point than for targets appearing opposite the heading point at the later, but not the earlier, SOA. The main effect of SOA was also significant [F(1, 13) = 6.56, p = 0.02], with larger P1 amplitudes at the later SOA than at the earlier SOA. The main effect of heading-target correspondence was not significant [F(1, 13) = 0.89, p = 0.36], owing to the lack of this effect at the 100-300 ms SOA.

Table 5.4: Peak P1 component amplitudes
Peak P1 ERP component amplitudes associated with target onset and standard deviations, in microvolts (µV), are listed for each condition. The amplitude of the P1 component was larger for targets appearing at the heading point at the later SOA.

Heading-Target       100 - 300 ms SOA              900 - 1100 ms SOA
Correspondence       P1 Amplitude (µV)    SD       P1 Amplitude (µV)    SD
Consistent           -0.24                2.11      2.08                2.73
Inconsistent          0.24                1.97      1.10                2.06
N1 Component
Peak amplitude measures for the N1 component associated with target onset were obtained by identifying the latency of the most negative amplitude between 180 and 280 ms post-target onset at lateral temporal electrode sites (T5 and T6) contralateral to the visual field of target presentation, for each condition of interest, in the grand-averaged residual waveforms. Voltage measures at those latencies were then obtained from each participant's residual waveforms. Peak N1 amplitude measures were analyzed with a 2 x 2 repeated measures ANOVA, with heading-target correspondence (consistent vs. inconsistent) and SOA (100-300 ms vs. 900-1100 ms) as within-subject factors.

The peak N1 amplitude measures, averaged over participants for each condition, are listed in Table 5.5. The ANOVA revealed a main effect of heading-target correspondence [F(1, 13) = 7.50, p = 0.02], indicating larger (more negative) N1 amplitudes for targets appearing opposite the heading point than for targets appearing at the heading point. Neither the main effect of SOA [F(1, 13) = 0.79, p = 0.39] nor the heading-target correspondence x SOA interaction [F(1, 13) = 3.88, p = 0.07] was significant.

Table 5.5: Peak N1 component amplitudes
Peak N1 ERP component amplitudes associated with target onset and standard deviations, in microvolts (µV), are listed for each condition. The amplitude of the N1 component was larger for targets appearing opposite the heading point. This pattern was present at both SOAs, although a clearly defined N1 component was present only at the later SOA (see Figure 5.4a).

Heading-Target       100 - 300 ms SOA              900 - 1100 ms SOA
Correspondence       N1 Amplitude (µV)    SD       N1 Amplitude (µV)    SD
Consistent           -1.03                1.36      0.41                2.90
Inconsistent         -1.14                2.32     -0.45                3.56

P3 Component
Amplitude measures for the P3 component associated with target onset were obtained by calculating the mean amplitude between 200 and 400 ms post-target onset at the central midline electrode site (CZ) for each condition of interest in each participant's residual waveforms. Mean P3 amplitude measures were analyzed with a 2 x 2 repeated measures ANOVA, with heading-target correspondence (consistent vs. inconsistent) and SOA (100-300 ms vs. 900-1100 ms) as within-subject factors.

P3 components are plotted in Figure 5.4b. Mean P3 amplitude measures, averaged over participants for each condition, are listed in Table 5.6. The ANOVA revealed a main effect of heading-target correspondence [F(1, 13) = 16.44, p = 0.001], indicating greater P3 amplitudes at both the early and the late SOAs for targets appearing at the heading point than for targets appearing opposite the heading point. The main effect of SOA was also significant [F(1, 13) = 6.63, p = 0.02], indicating larger P3 amplitudes at the later than at the earlier SOA. The interaction was not significant [F(1, 13) = 0.86, p = 0.37].

Table 5.6: Mean P3 component amplitudes
Mean P3 ERP component amplitudes associated with target onset and standard deviations, in microvolts (µV), are listed for each condition. The amplitude of the P3 component was larger for targets appearing at the heading point at both SOAs.

Heading-Target       100 - 300 ms SOA              900 - 1100 ms SOA
Correspondence       P3 Amplitude (µV)    SD       P3 Amplitude (µV)    SD
Consistent            6.38                3.21      8.03                3.68
Inconsistent          5.21                3.46      6.31                3.08
Figure 5.4: Grand-averaged ERP waveforms for Experiment 2
a) Grand-averaged ERP waveforms, time-locked to the onset of the target, are plotted for consistent and inconsistent target conditions. Time (x-axis) ranges from -200 ms to 300 ms, with tick marks denoting 100 ms intervals. The y-axis crosses at 0 ms (target onset) and extends vertically to ± 2 µV. The top (HEOG) panel indicates the absence of eye movements associated with the onset of the target. The bottom panel depicts ERPs at contralateral lateral occipital (OL/OR) electrode sites. The "Target Absent" columns depict the signal-averaged EEG activity for trials in which no target appeared. Larger C1 and P1 components for consistent targets were observed at the early and late SOAs, respectively. Peak amplitudes in the N1 time range were larger (more negative) for inconsistent targets at both SOAs. A clearly defined N1 component, however, was present only at the later SOA.
b) HEOG and the central midline electrode site are shown here, with a time scale ranging from -200 ms to 600 ms. Again, ticks along the x-axis demark 100 ms intervals. The mean amplitude of the P3 component was larger for consistent targets, at both SOAs.

Discussion

The behavioral results again indicate that attention was oriented automatically to the heading point in an optic flow field simulating the visual stimulation that might be encountered during navigation. Responses to targets appearing at the heading point were facilitated relative to those appearing on the opposite side of the display. Both the rapid emergence of this RT facilitation following the onset of the self-motion simulation and its occurrence despite participants being informed that the motion direction did not predict the location of the target are evidence that orienting to the heading point is an automatic process.

Critically, the ERP results support the hypothesis that the locus of attentional selection for the heading point in an optic flow field, corresponding to the particular region(s) of cortex showing an attention-related modulation of processing, can shift over time. As such, different regions of cortex underlying specific stages of stimulus processing will be subject to modulation by attention at different points in time during the ongoing processing of complex optic flow information. Importantly, the different time courses for the attentional modulation of the C1, P1, and P3 components indicate that the specific stages of neural processing (and corresponding regions of cortex) affected by an attentional bias for processing stimuli appearing at the heading point changed over time. At the early SOA, there was an attentional modulation of both 1) the sensory processing of target stimuli in the primary visual cortex, indicated by a larger amplitude C1 component for targets appearing at the heading point, and 2) the higher-level cognitive processing of target stimuli in distributed regions of frontal, temporal, and parietal cortex, indicated by a larger amplitude P3 component for targets appearing at the heading point.
161 At the later SOA, while the attentional modulation of the higher-level cognitive processing of target stimuli indexed by the P3 component persisted, the attentional modulation of the sensory processing of target stimuli shifted from primary visual cortex to extrastriate cortex, indicated by the absence of a C1 modulation and the presence instead of a larger amplitude P1 component for targets appearing at the heading point. It is possible that this apparent shift in the locus of attentional selection is underlain by a change in the neural source(s) of the (attentional) signal serving to bias stimulus processing in visual cortices representing the location of the heading point. For example, while the very early C1 modulation in primary visual cortex may be the result of attentional orienting based purely on the physical properties of the motion stimulus available to attentional control processes at an early SOA, the later modulation of the P1 in extrastriate cortex may reflect the influence of higher-level interpretive or evaluative information processing on attentional orienting that does not become available to attentional control processes until the system has had a certain amount of time to analyze the complex optic flow stimulus. As described in the previous chapter, optic flow is a complex visual stimulus, processed hierarchically into the highest levels of the dorsal visual stream. Therefore, it would not be until the later SOA that the overall pattern of optic flow would be fully analyzed and interpreted via the involvement of frontal association cortex as particularly meaningful or behaviorally relevant. As such, while this higher-level information would not contribute to attentional orienting at the early SOA, it may well influence attentional control processes at the later SOA. The physiology of the reciprocal connectivity of striate and extrastriate cortices with other (higher) regions of cortex involved in the control of attentional orienting  162 provides further support for the notion that different neural (attentional) signals may have served to bias sensory processing in these regions in the present experiment. Specifically, striate and extrastriate cortices receive different feedback inputs from higher brain areas (Felleman & Van Essen, 1991). The striate cortex receives direct feedback from relatively few regions outside the occipital lobe, including medial temporal (MT), medial superior temporal (MSTlateral), parieto-occipital (PO), and posterior intraparietal (PIP) regions (Felleman & Van Essen, 1991). At the time of the early SOA, it is unlikely that neural source of the feedback signal serving to bias processing in striate cortex would have originated from beyond one of these closely connected regions. The extrastriate cortex (particularly V4, considered the main neural source of the P1 component; Clark & Hillyard, 1996), on the other hand, receives feedback from a distributed network of at least fifteen different regions of temporal, parietal, and frontal cortex (Felleman & Van Essen, 1991). At the time of the late SOA, then, it is possible that the attentional biasing of processing in the extrastriate cortex was the result of feedback projections representing a higher-level analysis of optic flow. 
Regardless of whether attentional selection was manifested as gain in the visually evoked excitability of sensory neurons in striate cortex (at the early SOA) or extrastriate cortex (at the late SOA), there was a corresponding enhancement of subsequent cognitive processing in higher cortical areas: larger P3 amplitudes for targets appearing at the heading point were observed at both the early and the late SOAs. The P3 component is thought to reflect several post-perceptual cognitive operations that occur following the presentation of a stimulus (Hopfinger & Mangun, 2001), including the recognition of stimulus novelty (Squires et al, 1975; Hillyard & Picton, 1987; Daffner et al, 1998;  163 Wright, Geffen & Geffen, 1995), or behavioral significance (Ohman, 1979; Naatanen, 1992), as well as the evaluation of a stimulus for making task-relevant decisions and responses (Hopfinger & West, 2006; Muller & Hillyard, 2000). Thus, the time course of the attention-related modulation of the P3 component in the present experiment suggests that at both SOAs, targets appearing at the heading point were given priority for neural representation in regions of frontal, temporal, and parietal cortex responsible for the associative cognitive processing of visual stimuli. Importantly, the rudimentary perceptual information regarding the optic flow field available to attentional control processes at the early SOA was sufficient to instantiate an attentional bias in the cognitive processing of stimuli at the location of the heading point. This bias was maintained at the later SOA, at which time the outputs of a more complex cognitive analysis of the optic flow field would have been available to attentional control processes. Presumably, this higher-level cognitive analysis would include inputs from regions of frontal cortex responsible for maintaining a volitional cognitive set for task performance in working memory (Hillyard & Anllo-Vento, 1998), providing a means by which the knowledge that the heading point was irrelevant to the task could influence attentional orienting. Thus, it is possible that the attentional orienting effects observed at the late SOA occurred despite inputs from volitional control processes acting to prevent or override automatic orienting to the heading point. This possibility is consistent with the notion that the heading point in an optic flow field serves as a particularly potent cue for automatic attentional orienting, likely owing to its significance for guiding safe, effective navigational behavior. Although this hypothesis is speculative, there is evidence that  164 reflexive and volitional influences on attentional orienting are separate processes that can overlap in time (e.g. Friesen, Ristic & Kingstone, 2004). While ERP studies of volitional attentional orienting typically reveal robust attention-related enhancements of the N1 component (Luck, 1995), especially when a more difficult discrimination task is used (Handy & Mangun, 2000; Hopfinger & West, 2006; Vogel & Luck, 2000), studies of reflexive orienting have never revealed such an effect (Hopfinger & West, 2006; Van Voorhis & Hillyard, 1977). The absence of an attention-related enhancement of the N1 component in the present experiment is consistent with these established findings. 
Initially, we had predicted that the amplitude of N1 component would be greater for targets appearing at the heading point, based on the large behavioral orienting effects brought about by the motion simulation, as well as the novel C1 modulation found in the previous ERP experiment. In contrast, the present experiment revealed the opposite effect: the N1 component was larger for targets appearing opposite the heading point. This reverse effect has been found previously in ERP studies of reflexive orienting (e.g. Fu et al, 2003; Hopfinger & Ries, 2005), and has been attributed to the temporal carry-over of greater positivity owing to higher P1 amplitudes for cued targets, in conjunction with the lack of an attention-related modulation of the neural processing giving rise to the N1 itself (Hopfinger & Ries, 2005). The fact that there was no enhancement of the N1 component for targets appearing at the heading point even while such enhancements were observed for C1, P1, and P3 components, coupled with the fact that N1 enhancements seem to be an index of volitional rather than reflexive orienting, is consistent with the hypothesis that attentional orienting to the heading point in an optic flow field is reflexive in nature.  165 Interestingly, the behavioral RT facilitation effect found in the present experiment was longer lasting than that observed in Experiment 1, observable at both the early and the late SOAs. While this difference may seem surprising given that the two experiments were so similar, it is important to point out that the changes made to the timing of target onsets in Experiment 2 are known to affect both the size and time course of attentional orienting effects. In particular, the jittering of the SOA in Experiment 2 for the purpose of collecting reliable ERP data had the result of making the timing of target onset much less predictable than that of Experiment 1. This reduction in temporal predictability is evident in the absence of a foreperiod effect – a decline in RT with increasing SOA – in Experiment 2 (Bertelson, 1967; Posner, 1978; Thomas, 1974), which is known to reflect volitional temporal preparation (Kingstone, 1992; Snyder & Kingstone, 2001; Tipper & Kingstone, 2005). There is a general consensus among attention researchers that volitional preparatory processes and strategic factors can influence automatic orienting (Yantis & Johnston, 1990; Yantis, 1993b; Theeuwes 1991; Folk, Remington, & Johnston, 1992; Lupianez et al., 2001). For example, eliminating volitional preparation by reducing the temporal predictability of the target can increase RT facilitation for targets appearing at peripherally cued locations (Tipper & Kingstone, 2005). The present data reveals a similar result: the reduction of the target’s temporal predictability was associated with the emergence of RT facilitation at the late SOA that was not found in Experiment 1. It is reasonable to speculate that in Experiment 1, the volitional preparation enabled by the temporally predictable target onset brought about conditions favoring the volitional intention to maintain a spatially neutral attentional focus at fixation, thereby overriding reflexive orienting to the heading point at later SOAs. Conversely, in Experiment 2,  166 greater uncertainty regarding when the target would appear precluded this level of volitional preparation and, consequently, the interruption of reflexive orienting. 
In addition to being faster, there was a strong trend toward discrimination responses being more accurate for targets appearing at the heading point, at both the early and late SOAs. This pattern parallels that seen in Experiment 1, where RT facilitation for targets appearing at the heading point was accompanied by an increase in accuracy. A concomitant enhancement in speed and accuracy is evidence that stimuli appearing at the heading point are processed more efficiently than stimuli appearing opposite the heading point (Posner, 1978). Such a pattern indicates that not only are responses faster, but also the quality of information processing leading to a manual discrimination response is improved. This finding can be contrasted with the suspected emergence of a speedaccuracy tradeoff in Experiment 2 of Chapter 4, suggested by less accurate responses at the heading point at the late SOA despite the presence of a sensory gain effect in the P1 ERP data. This reduced accuracy for targets at the heading point was interpreted as evidence that strategic factors may have been at play, obscuring behavioral measures of attentional orienting to the heading point. In the present experiment, however, the larger, more realistic motion simulation served as a more potent cue for automatically orienting to the heading point, facilitating not only RT, but also showing a trend toward improving the accuracy of target discrimination responses. Combined, the trend toward greater accuracy and larger P3 amplitudes for targets appearing at the heading point provide converging evidence suggesting more efficient cognitive processing underlying discrimination responses for stimuli appearing at the heading point than for stimuli appearing elsewhere.  167 Overall, the results of Experiment 2 suggest that in the context of a continuous, dynamic optic flow field, the locus of attentional selection in the brain can change over time. This dynamic shift in the regions of visual cortex subject to sensory gain with ongoing exposure to the flow stimulus may reflect a change in the neural structures effecting the attentional bias, and therefore the type of stimulus information driving attentional orienting. The possibility that different types of information may direct attentional orienting at different points in time in the context of continuous, dynamic stimulation hints at the idea that our understanding of attentional orienting, based almost solely on laboratory studies of discrete isolated events and primitive visual stimuli may need to be broadened in order to account for the functioning of attention under the dynamic, continuous conditions likely to be experienced in the real world.  Summary of Experiments 1 and 2 The aim of Chapter 5 was to test whether, in the context of a compelling motion simulation, larger, more reliable behavioral orienting effects would be observed. Of particular interest was whether, provided behavioral orienting effects were observable, the intriguing pattern of ERP results observed in Experiment 3 of Chapter 4 would be replicated. The results are consistent with the hypothesis that the visuocortical region manifesting attentional selection for the spatial location of the heading point can change as the analysis of optic flow continues. 
This novel finding of a dynamically shifting locus of attentional selection carries the implication that automatic attentional orienting in the context of dynamic, continuous, behaviorally relevant information could potentially be  168 the result of the changing quality and neural locus of information upon which attentional biasing signals feeding back to visual cortex are based. More generally, it was apparent that increasing the field of view, improving the visual fidelity of the simulation by adding natural textures, and relaxing the instruction to ignore the motion completely created a more compelling motion display, which resulted in larger, more reliable behavioral and ERP orienting effects. It is possible that these changes to the display enabled a better approximation of the patterns of optic flow that accompany self-motion in the real world, which provide meaningful, behaviorally relevant information that is used for the control of navigational behavior. The findings presented in Chapter 5 would therefore support the hypothesis that the cognitive and neural processes enabling automatic attentional orienting are sensitive to stimuli that are meaningful to an observer on the basis of their potential significance for providing information that is pertinent to guiding behavior. Also notable is that the use of such large, dynamic displays has been heretofore a relatively unexplored methodological approach to ERP attention research. In addition to providing evidence regarding attentional orienting to meaningful, dynamic stimuli, Chapter 5 demonstrates the feasibility of isolating visually-evoked ERP components for target stimuli presented against the backdrop of a wide-field motion display. As indicated in the “Target Absent” panels of Figure 5.4, there were no systematic, isolated waveforms present amidst the ongoing motion display at time points at which the target would have occurred. While these illustrations indicate a tendency toward a positive drift on target-absent trials, this positive drift was removed from the analysis by creating residual waveforms that subtracted the activity associated with target-absent trials from  169 that associated with target-present trials. In this way, we could investigate the ERP components evoked by target stimuli in a realistic, dynamic context, while minimizing the potential confound of temporally overlapping activity linked to the ongoing processing of the motion display. The rapid emergence of both RT facilitation and attention-related modulations of the neural processing of target stimuli, combined with the fact that the heading point was non-predictive with respect to the target location, provides strong evidence that these attentional orienting effects were automatic in nature. There is a great deal of research, however, suggesting that automatic orienting to salient stimuli is subject to volitional control (Yantis & Johnston, 1990; Yantis, 1993b; Theeuwes 1991; Folk, Remington, & Johnston, 1992; Lupianez et al., 2001). While the experiments presented in Chapters 4 and 5 have not provided any explicit motivation to attend volitionally to the heading point, they have likewise not provided any explicit detriment to doing so. Chapter 6 will set up task demands that encourage attending volitionally to the location opposite the heading point, in order to test the susceptibility of automatic orienting to the heading point to volitional control.  170 Chapter 6: Is Attentional Orienting to Optic Flow Strongly Reflexive? 
The experiments presented in Chapters 4 and 5 provide clear evidence that attention can be oriented automatically to the heading point in an optic flow field providing a compelling simulation of self-motion. This conclusion was based on both behavioral results indicating faster, and in some cases, more accurate responses for targets appearing at the heading point, as well as ERPs demonstrating increased sensoryevoked excitability in visuocortical regions representing the location of the heading point. Several key questions regarding this general finding have been addressed, including the time course of orienting effects, whether attentional orienting to the heading point depends on the realism of the motion simulation, whether the neural locus of attentional selection in visual cortex can shift with continued exposure to the flow field, and whether stimuli appearing at the heading point are associated with a larger neural response in cortical regions underlying the higher-level cognitive analysis and interpretation of these stimuli. Throughout these investigations, the data have converged on the novel conclusion that attentional orienting to the heading point in an optic flow field can occur automatically, without willful effort on the part of the observer. The automaticity of the orienting response has been established based on, 1) the occurrence of attentional orienting to the heading point despite its location being irrelevant to the target discrimination task, 2) the rapidity of orienting responses (both behavioral and electrophysiological) upon viewing the motion simulations, and 3) the fact that targets appearing at the heading point are subject to an electrophysiological gain effect at the earliest level of cortical visual processing. While these findings are certainly a reliable  171 indication that attention is oriented automatically to the heading point when there are no other demands on the attentional system, the possibility remains that given a specific reason to attend elsewhere, orienting to the heading point would not occur. Strictly speaking, however, a purely automatic attentional orienting process would occur not only unintentionally, but also despite focused volitional intentions to orient elsewhere in the scene (e.g. Shiffrin & Schneider, 1977; Theeuwes, 1991). Thus, a final question that will be addressed in the present chapter concerns whether attentional orienting to the heading point occurs despite competing task goals, or whether it can be overridden volitionally. Typically, establishing the resistance of an automatic orienting process to volitional control involves setting up task conditions that require volitional orienting to a location other than that cued reflexively by a salient stimulus (e.g. Yantis & Jonides, 1990). In the context of a spatial cueing procedure, the most effective way to pit volitional and reflexive processes against each other is to make a salient cue for reflexive orienting counter-predictive of the target location. In two studies assessing the automaticity of attentional orienting to eye gaze, for example, targets were much more likely to appear opposite the gazed-at location (Driver, 1999; Friesen, Ristic & Kingstone, 2004). The presence of RT facilitation for targets appearing at the gazed-at location despite volitional intentions to orient attention in the other direction was taken as evidence that this rapid attentional orienting effect was strongly reflexive. 
By the same logic, Chapter 6 used a counter-predictive spatial cueing procedure to test whether orienting to the heading point in an optic flow field would occur despite volitional intentions to orient to the opposite side of the display.  172 Again, both behavioral (Experiment 1) and electrophysiological (Experiment 2) data served as measures of attentional orienting. In addition, however, recording eye movements provided a third measure of attentional orienting. The ERP experiments in Chapters 4 and 5 indicated that a large proportion of trials were rejected due to the presence of eye movements, despite participants having been reminded repeatedly to maintain central fixation (Chapter 4: Experiment 1: 31%, Experiment 3: 24%; Chapter 5: Experiment 2: 24%). Anecdotally, upon repeated prompting to maintain fixation, most participants insisted that they were not making eye movements. This raised the possibility that eye movements were being made unintentionally, perhaps as a result of automatic overt attentional orienting to the heading point. Thus, it was reasoned that recording eye movement activity might provide a useful measure of automatic orienting. Given the large amount of overlap in the neural mechanisms subserving overt and covert attentional orienting, it is likely that they are two different manifestations of largely the same underlying attentional processes (Grosbras, Laird, & Paus, 2005). This tight coupling between overt and covert attention enables the use of eye movements as an ecologically valid measure of attentional orienting. Thus, combining RT and eye movement measures in the context of a counter-predictive spatial cueing procedure provides a broad basis for assessing the automaticity of attentional orienting to patterns of optic flow.  Experiment 1 Experiment 1 used the same motion simulations and large viewing screen that were employed in Chapter 5. This time, however, the spatial contingency between motion  173 direction and target location was set up such that the direction of motion was counterpredictive of the target location. Participants were accurately informed that the target was most likely to appear at the location opposite the heading point (75%) than at the heading point (25%). Additionally, they were instructed to use this information to prepare accordingly for the target. If attentional orienting to heading point is strongly automatic, then RT facilitation for targets appearing at the heading point should occur despite volitional intentions to orient to the opposite side of the display in preparation for the target. While participants were encouraged to maintain fixation throughout the experiment, it was expected based on the results of the ERP experiments reported previously that a certain number of unintentional eye movements would be made. Of primary interest was whether these eye movements would be preferentially directed towards the heading point in the motion simulations. In order to determine whether eye movements were preferentially directed toward the heading point, EOG data were collected in a manner that distinguished leftward and rightward eye movements based on the polarity of the associated voltage deviation on the horizontal EOG channel. With the left eye electrode referenced to the right eye electrode, leftward eye movements were associated with negative voltage deviations, and rightward eye movements were associated with positive voltage deviations. 
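As a schematic of how such directional scoring could be applied to a single trial, the sketch below flags a horizontal EOG trace as containing a leftward or rightward eye movement according to the sign of the largest post-motion-onset deviation exceeding a threshold. The threshold value, array layout, and function name are illustrative assumptions; as described below, the analysis reported in this chapter instead examined signal-averaged EOG waveforms time-locked to motion onset.

import numpy as np

def classify_eye_movement(heog, onset_idx, threshold_uv=20.0):
    """Classify one trial's horizontal EOG trace as 'left', 'right', or 'none'.

    With the left-eye electrode referenced to the right-eye electrode,
    leftward eye movements produce negative deflections and rightward
    eye movements produce positive deflections.
    """
    baseline = heog[:onset_idx].mean()              # pre-motion-onset level
    deviation = heog[onset_idx:] - baseline         # post-onset deviation (µV)
    peak = deviation[np.argmax(np.abs(deviation))]  # largest absolute excursion
    if abs(peak) < threshold_uv:
        return "none"
    return "left" if peak < 0 else "right"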
If eye movements were made because participants were simply being restless or ignoring the instruction to fixate the central cross, there should be no evidence for a bias to make eye movements to any particular location within the motion display. If, on the other hand, optic flow elicits reflexive overt orienting to the heading point despite the  174 volitional intention to fixate centrally, then the signal-averaged EOG waveforms timelocked to motion onset should reveal systematic amplitude differences between trials depicting leftward and rightward motion. Because a counter-predictive spatial cueing procedure was used, a tendency for eye movements to be directed toward the heading point would suggest that these eye movements were an automatic overt orienting response that was resistant to volitional, goal-directed attentional orienting processes.  Methods Participants Twelve neurologically healthy undergraduate students (11 female) at the University of British Columbia participated with written consent. Participants were righthanded, had normal or corrected-to-normal vision, and had a mean age of 21.25 years (sd = 1.86). Experimental procedures were approved by the University of British Columbia Behavioral Research Ethics Board. Participants were remunerated with ten dollars per hour of their time. The experiment lasted one hour.  Apparatus and Electroocculogram Recording The testing apparatus was identical to that employed in Experiment 1 of Chapter 5, with the exception that EOG data were also collected. Participants were fitted with five electroocculogram (EOG) electrodes, and one grounding electrode. The EOG electrodes were positioned such that they would record horizontal, vertical, and oblique eye movements. The horizontal EOG electrodes were placed on the outer canthus of each eye. The vertical EOG electrodes were placed about 1.5 cm above and below the center  175 of the left eye. The oblique EOG electrode was placed between the horizontal and vertical EOG electrodes, at approximately a 45º angle from the center of the left eye. The grounding electrode was placed about 1.5 cm below the right eye. EOG data were collected using a Grass Instruments Model 12 amplifier, referenced to either the right temple electrode (horizontal EOG) or the left under-eye electrode (vertical and oblique EOG). EOG was amplified with a gain of 50,000 and a half-amplitude bandpass of 0.1 to 30 Hz. Data were digitized at 256 Hz using an analog to digital signal converter box (National Instruments, model pc-6170e) connecting the amplifier to the Pentium III workstation used to record the EOG and behavioral data. ERP waveforms were low-pass filtered (25.6 half-amplitude cutoff) prior to statistical analysis.  Stimuli and Task The stimuli and task were nearly identical to that utilized in Chapter 5. In the present experiment, however, the direction of simulated motion through the scene was counter-predictive of the target location. On trials in which a target appeared, 75% of the time it appeared opposite the heading point. Participants were informed about this contingency, and instructed to use the motion direction to prepare for the appearance of the target. Participants completed eight experimental blocks within the allotted one-hour testing period. Each block lasted about 3.5 minutes, and consisted of 40 trials. On half of trials, forward-left motion was simulated. On the other half of trials, forward-right motion was simulated. In each block, 32 trials (80%) were randomly selected to contain a target.  
Of these 32 trials (16 for each motion direction), 25% (4 per direction) contained a target appearing at the heading point, and 75% (12 per direction) contained a target appearing opposite the heading point. Within each of these groups of trials, target location (left or right), target type (horizontal or vertical bars), and SOA (100, 500, or 1000 ms) were manipulated in a counterbalanced manner.

Results

Behavioral Data

Eye movements occurred on 17% of trials, and these trials were removed from the behavioral analysis. Accuracy data (Table 6.1) were analyzed with a 2 x 3 repeated measures ANOVA, with heading-target correspondence (consistent vs. inconsistent) and SOA (100 ms vs. 500 ms vs. 1000 ms) as within-subject factors. The ANOVA of accuracy scores revealed no significant main effects or interactions (all F's < 2.28, p's > 0.16). RT data for trials not containing eye movements are presented in Figure 6.1. Incorrect responses, response anticipations (RTs less than 100 ms), and failures to respond in a timely manner (RTs greater than 1000 ms) represented 11.6% of the data, and were removed from the analysis of RT. The remaining RTs were analyzed using a 2 x 3 repeated measures ANOVA, with heading-target correspondence (consistent vs. inconsistent) and SOA (100 ms vs. 500 ms vs. 1000 ms) as within-subject factors. Of particular interest is the finding that despite participants being informed that the target was most likely to occur at the location opposite the heading point, responses were faster overall for targets presented at the heading point. The ANOVA revealed a significant main effect of heading-target correspondence [F(1, 11) = 12.72, p = 0.004], indicating faster RTs to targets appearing at the heading point than to targets appearing at the opposite location. There was also a significant main effect of SOA [F(2, 22) = 14.72, p < 0.0001], indicating a speeding of RT with increasing SOA, a standard foreperiod effect. These main effects were modulated by the presence of a significant heading-target correspondence x SOA interaction [F(2, 22) = 8.38, p = 0.002], underlain by the presence of faster responses to targets appearing at the heading point at the 100 ms [t(10) = 3.60, p = 0.005] and 1000 ms [t(10) = 5.65, p = 0.0002] SOAs, but not at the 500 ms SOA [t(10) = 0.64, p = 0.54].

Table 6.1: Accuracy data for Experiment 1
Accuracy data and standard deviations for each condition are listed. There were no significant differences in target discrimination accuracy between conditions.

SOA        Heading-Target Correspondence    Accuracy (% Correct)    SD
100 ms     Consistent                       86.45                   18.90
           Inconsistent                     87.25                   12.35
500 ms     Consistent                       85.30                   17.29
           Inconsistent                     92.75                    9.99
1000 ms    Consistent                       85.55                    9.98
           Inconsistent                     93.10                    9.27

Figure 6.1: RT data for Experiment 1
Mean RTs (ms) for each condition are plotted. Error bars represent the standard error. Responses to targets appearing at the heading point (consistent) were significantly faster than responses to targets appearing opposite the heading point (inconsistent) at the 100 ms and 1000 ms SOAs, but not at the 500 ms SOA.

Electroocculogram Data

In order to determine whether eye movements were most likely to be directed to the heading point, the EOG data were analyzed as signal-averaged waveforms time-locked to the onset of the motion simulation. Signal-averaged EOG waveforms were created for the horizontal EOG channel by defining 3000 ms epochs beginning 1500 ms before the onset of the motion stimulus on all trials. Figure 6.2a depicts the grand-averaged EOG waveforms.
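The following sketch illustrates the kind of epoching and signal averaging just described, assuming the continuous horizontal EOG record is available as a NumPy array sampled at 256 Hz together with a motion-onset sample index and motion direction for each trial. The array names, helper functions, and exact bookkeeping are hypothetical; the sketch is meant only to make the averaging, baselining, and difference-wave logic explicit.

```python
import numpy as np

FS = 256  # sampling rate (Hz)

def average_heog(heog, onsets, directions, fs=FS):
    """Signal-average HEOG epochs time-locked to motion onset (illustrative sketch).

    heog       : 1-D array, continuous horizontal EOG (microvolts)
    onsets     : sample index of motion onset for each trial
    directions : 'left' or 'right' motion for each trial
    Returns the left-motion average, right-motion average, and the
    right-minus-left difference wave over a -1500 to +1500 ms epoch.
    """
    pre, post = int(1.5 * fs), int(1.5 * fs)        # 3000 ms epoch
    base = slice(pre - int(0.2 * fs), pre)           # -200 to 0 ms pre-motion baseline
    epochs = {'left': [], 'right': []}
    for onset, direction in zip(onsets, directions):
        epoch = heog[onset - pre: onset + post].astype(float)
        epoch -= epoch[base].mean()                  # baseline-correct each epoch
        epochs[direction].append(epoch)
    avg_left = np.mean(epochs['left'], axis=0)
    avg_right = np.mean(epochs['right'], axis=0)
    return avg_left, avg_right, avg_right - avg_left  # positive = toward heading point

def mean_amplitude(wave, t_start=0.1, t_end=1.0, fs=FS, pre_s=1.5):
    """Mean amplitude in the 100-1000 ms post-motion-onset window."""
    i0 = int((pre_s + t_start) * fs)
    i1 = int((pre_s + t_end) * fs)
    return wave[i0:i1].mean()
```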
Of particular interest were motion-locked voltage deviations on the horizontal EOG electrode. With the particular EOG setup used for the present experiment, eye movements made toward the heading point would result in a more negative voltage potential on the horizontal EOG for leftward motion trials, and a more positive voltage  179 potential for rightward motion trials. Upon viewing the horizontal EOG channel in Figure 6.2, it is evident not only that there was a tendency for eye movements to be directed towards the heading point, but also that eye movements occurred as little as 100 ms following the onset of the motion simulation. It is also evident that there was a large negative drift in the horizontal EOG signal that began before the onset of motion. This prominent linear drift, known as the contingent negative variation (CNV) reflects preparation for a task-relevant event (Walter et al., 1964; Weinberg, 1972). In the case of the present experiment, the presentation of the first frame of the motion simulation 1000 ms prior to the motion began likely served as a warning signal to prepare for the upcoming trial. The motion onset itself may also have contributed to the CNV, serving as a temporal warning signal that the target was about to appear. Regardless of the underlying cause of the CNV in the present experiment, it is evident from the overlap in leftward and rightward motion waveforms prior to motion onset that the CNV equally affected both types of trials. Thus, its presence did not bear on the effect of interest, which did not concern absolute amplitudes, but rather the difference in voltage potentials for leftward and rightward motion trials. Therefore, in order to better visualize the effect of interest, a difference waveform was calculated by subtracting leftward motion from rightward motion trials (Figure 6.2b). Positive voltage deflections in the difference waveform represent the occurrence of eye movements directed toward the heading point. Mean amplitudes for the time window between 100 and 1000 ms post-motion onset were calculated for both leftward and rightward simulated motion relative to a 200-0 ms pre-stimulus baseline for each participant (Table 6.2). These mean amplitude  180 measures were submitted to a repeated measures ANOVA, with motion direction (left vs. right) as a within-subject factor. As predicted, the ANOVA of mean amplitudes revealed a significant main effect of motion direction at the horizontal EOG electrode [F(1, 11) = 28.41, p = 0.0002], indicating more negative voltage potentials for leftward motion trials and more positive voltage potentials for rightward motion trials. Importantly, the voltage deviations associated with eye movements toward the heading point were evident by 100 ms following the onset of motion, indicating that eye movements toward the heading point occurred very rapidly. These results not only indicate that eye movements associated with motion onset were more likely to be directed toward the heading point than elsewhere, but also suggest that these eye movements were elicited reflexively.  181 Figure 6.2: Signal-averaged HEOG waveforms for Experiment 1 Panel A represents HEOG activity, averaged over all leftward and rightward motion trials. More negative values represent a tendency to make leftward eye movements. More positive values represent a tendency to make rightward eye movements. 
Panel B represents the difference waveform obtained by subtracting the EOG amplitudes for leftward motion trials from those for rightward motion trials. A positive deflection in the difference waveform represents eye movements toward the heading point.

Table 6.2: Mean EOG amplitudes for Experiment 1
Mean amplitudes for signal-averaged EOG activity at the HEOG electrode site, time-locked to motion onset, for leftward and rightward motion trials are listed. Leftward motion was associated with a more negative HEOG mean amplitude than was rightward motion. This pattern indicates that participants had a tendency to make eye movements toward the heading point.

Motion Direction    Mean Amplitude (µV)    SD
Leftward            -5.87                  5.82
Rightward           -3.60                  5.38

Discussion

The results provide evidence that attentional orienting to the heading point in an optic flow field is strongly automatic, occurring rapidly despite competing task demands to orient to the opposite side of the display in order to prepare for the target. Both the behavioral data, indicating the presence of an RT facilitation effect at the 100 ms SOA, and the EOG data, indicating the occurrence of reflexive eye movements to the heading point within 100 ms of motion onset, support this conclusion. Regardless of whether an eye movement was made, RTs were faster for targets appearing at the heading point at both the early and late SOAs. When eye movements occurred, however, they were more likely to be directed to the heading point than anywhere else. Furthermore, eye movements toward the heading point began to occur as little as 100 ms following motion onset. The reflexivity of an eye movement can be inferred on the basis of its latency. Whereas eye movements made voluntarily toward a relevant stimulus typically take about 350 ms to execute, reflexive eye movements elicited by a salient stimulus are executed much faster, typically about 250 ms following stimulus onset (Walker et al., 2000; Forbes & Klein, 1996). The occurrence of eye movements rapidly (250 ms or less) following the onset of the motion simulation would therefore indicate the activation of a reflexive orienting process. The occurrence of unintentional, reflexive eye movements to the heading point is reminiscent of a similar phenomenon, coined oculomotor capture (Theeuwes et al., 1998). Oculomotor capture is observed in the context of visual search tasks requiring eye movement responses, wherein a task-irrelevant distractor stimulus appearing abruptly can elicit reflexive eye movements to its location (Theeuwes et al., 1998; Irwin, Colcombe, Kramer & Hahn, 2000). Much like studies of the related attentional capture phenomenon (the reflexive covert orienting of attention to salient distractors, discussed in Chapter 1; Theeuwes, Kramer, Hahn, & Irwin, 1998), oculomotor capture studies have typically tested capture by basic, low-level visual stimuli, such as abrupt onsets or luminance changes (Ludwig & Gilchrist, 2002). It is possible, however, that oculomotor capture may also occur in response to more complex, behaviorally relevant stimuli, such as the self-motion simulations utilized in the present experiment. The reflexive eye movements made toward the heading point in the present experiment may therefore reflect the occurrence of oculomotor capture in a novel experimental context.
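One way the latency criterion described above could be applied to single-trial HEOG records is sketched below. The 30 µV threshold, the function name, and the fixed epoch layout are assumptions introduced for illustration only, not the artifact-detection parameters actually used in these experiments.

```python
import numpy as np

def first_saccade(heog_epoch, fs=256, onset_idx=384, threshold_uv=30.0):
    """Find the first post-motion HEOG deflection exceeding a voltage threshold.

    heog_epoch : baseline-corrected single-trial HEOG (microvolts)
    onset_idx  : sample index of motion onset within the epoch (384 = 1.5 s * 256 Hz)
    Returns (latency_ms, direction, is_reflexive) or None if no deflection is found.
    Direction follows the montage described in the text: negative deflections
    indicate leftward and positive deflections indicate rightward eye movements.
    """
    post = heog_epoch[onset_idx:]
    crossings = np.flatnonzero(np.abs(post) > threshold_uv)
    if crossings.size == 0:
        return None
    latency_ms = crossings[0] * 1000.0 / fs
    direction = 'left' if post[crossings[0]] < 0 else 'right'
    # Eye movements beginning within roughly 250 ms of motion onset are treated
    # as reflexive, whereas volitional saccades typically take about 350 ms (see text).
    return latency_ms, direction, latency_ms <= 250.0
```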
While attention was oriented to the heading point reflexively despite the competing volitional intention to orient to the opposite location, the time course of the behavioral RT facilitation effect suggests that a volitional orienting process may have come into play at the 500 ms SOA, at which time there was no RT facilitation for targets appearing at the heading point. The disappearance and reemergence of the RT facilitation effect at the 500 and 1000 ms SOAs may have thus been the result of a competing volitional orienting process that temporarily obscured the facilitative effect of reflexive orienting to the heading point. In support of this possibility, a previous study using counter-predictive eye gaze cues in a spatial cueing procedure yielded similar results: while RTs were facilitated for targets appearing at the gazed-at location at early SOAs, this effect disappeared at midrange SOAs, and began to arise again at the late SOA (Friesen et al., 2004). These results were interpreted as indicating the operation of independent reflexive and volitional  184 orienting processes that overlapped in time. At the mid-range SOAs, volitional orienting to the predicted (non-gazed-at) location would have been at its peak, therefore obscuring the RT facilitation for targets at the gazed-at location. By this account, importantly, the absence of RT facilitation at the 500 ms SOA in the present experiment reflects a temporary masking of reflexive RT facilitation by a concurrent, opposing volitional orienting process, as opposed to either the complete interruption of a reflexive orienting process by a volitional one or the mere absence of a reflexive orienting process. Indeed, had a volitional process promoting orienting to the location opposite the heading point completely replaced the reflexive orienting process, there would have been no reemergence of RT facilitation for targets appearing at the heading point at the 1000 ms SOA. Additionally, the results of the EOG analysis suggest that a reflexive orienting process was at work across all three SOAs. As illustrated in Figure 6.2, the tendency for participants to make eye movements toward the heading point persisted for at least 1000 ms following motion onset. Thus, even though a volitional orienting process may have influenced observable manual response times at the 500 ms SOA, the combined RT and EOG results indicate that this volitional process masked rather than abolished a reflexive orienting process that was engaged concurrently. Experiment 2 was designed to further explore the possibility that reflexive and volitional orienting mechanisms can co-occur and interact to produce observed behavioral effects.  Experiment 2 The disappearance of the RT facilitation effect at the 500 ms SOA in Experiment 1 led to the hypothesis that an opposing volitional orienting process may have been  185 engaged concurrently with reflexive orienting to the heading point, having a maximal effect on RT at the mid-range SOA. Other counter-predictive cueing studies have yielded similar results (e.g. Friesen, Ristic, & Kingstone, 2004; Driver et al., 1999). Importantly, however, manual RTs can only provide information regarding the final response stage of target processing. ERPs, on the other hand, provide the ability to investigate the modulatory effect of attentional orienting on multiple stages of target processing. 
Thus, while behavioral measures can provide coarse information signaling the potential presence of an interaction between reflexive and volitional orienting processes, the use of ERP measures can reveal specific information regarding how such an interaction may be instantiated at multiple stages of target processing to produce the final behavioral effect. The aim of Experiment 2 was therefore to replicate Experiment 1 using ERP methodology as a means of: 1) examining whether RT facilitation for targets appearing at the heading point is accompanied by corresponding enhancements in target processing even when motion direction is counter-predictive of the target location, 2) providing converging evidence for the hypothesis that reflexive and volitional orienting processes were concurrently engaged to produce observed RT effects, and 3) examining multiple stages of target processing for the presence of modulations by volitional attention. How might concurrently activated reflexive and volitional mechanisms be reflected in the ERPs elicited by target stimuli? When acting in isolation, both reflexive and volitional orienting can affect early (P1) and late (N1, P3) stages of target processing. It is possible, however, that when concurrently activated, the modulatory effects of reflexive and volitional orienting may be observed at different stages of target processing. In support of this possibility, there is some evidence to suggest that reflexive orienting has a greater modulatory effect on the early (P1) stage of target processing, and volitional orienting has a greater modulatory effect on later stages of target processing, including the N1 and P3 components (Hopfinger & Ries, 2005; Hopfinger & West, 2006). When reflexive orienting is engaged in isolation, the typical result is a yoked enhancement of both the P1 and P3 components for targets appearing at the reflexively cued location (Hopfinger & Mangun, 1998, 2001). The N1 component is typically left unaffected, or is larger for targets appearing at an uncued location (Fu et al., 2003; Hopfinger & Ries, 2005). When volitional orienting is engaged in isolation, the typical result is an enhancement of the P1 component for targets appearing at the cued location, and an enhancement of the P3 component for targets appearing at the uncued (non-predicted) location (e.g. Mangun & Hillyard, 1991; Handy & Khoe, 2005), which is thought to occur as a result of the violation of the spatial expectancy activated by the predictive cue (Donchin, 1981). Additionally, there is usually a robust enhancement of the N1 component for targets appearing at cued locations (Luck, 1995; Mangun & Hillyard, 1991). When a volitional orienting process is engaged in opposition to a concurrently activated reflexive orienting process, its effect tends to be more apparent at later stages of target processing, including the N1 and P3 components (Hopfinger & Ries, 2005; Hopfinger & West, 2006). In ERP studies investigating the interaction of reflexive and volitional orienting, volitional orienting seems to have relatively little effect on the enhancement of the P1 component for targets appearing at a reflexively cued location. In contrast, the reflexive modulation of the P3 component is affected to a much greater extent by a concurrent opposing volitional orienting process (Hopfinger & West, 2006).
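One compact way to summarize the predictions just reviewed is as a simple lookup from ERP component and orienting mode to the expected direction of the cued-versus-uncued amplitude difference. This is only a restatement of the literature summarized above in data-structure form, not part of any analysis reported here.

```python
# Expected modulation of target-locked ERP components, summarizing the
# findings reviewed above ("cued" = location indicated by the orienting process).
PREDICTED_MODULATION = {
    'reflexive alone': {
        'P1': 'cued > uncued',                        # yoked P1/P3 enhancement at the cued location
        'N1': 'no difference, or uncued > cued',
        'P3': 'cued > uncued',
    },
    'volitional alone': {
        'P1': 'cued > uncued',
        'N1': 'cued > uncued',                        # robust N1 enhancement at cued locations
        'P3': 'uncued > cued',                        # violation of the spatial expectancy
    },
    'reflexive with opposing volitional': {
        'P1': 'largely reflexive (cued > uncued)',
        'N1': 'volitional influence more apparent',
        'P3': 'volitional influence more apparent',
    },
}
```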
This relative dominance of volitional orienting on the P3 component is likely a consequence of the P3 component reflecting a multitude of high-level cognitive processes (Johnson, 1993) generated in widely distributed regions of frontal, parietal, and temporal cortex (Mulert et al., 2003). According to the idea that reflexive and volitional orienting processes can modulate different stages of target processing when activated concurrently, a given pattern of RT facilitation reflects the net result of reflexive and volitional contributions to all stages of target processing, up to and including the response stage. By this logic, one might expect that volitional attentional modulations would be most apparent at the 500 ms SOA, when the effect of volitional orienting on RT seemed to be at its peak in Experiment 1. Alternatively, if the disappearance of the RT facilitation effect at the 500 ms SOA in Experiment 1 did not reflect the concurrent activation of reflexive and volitional processes, but rather the mere absence of reflexive orienting, then at the 500 ms SOA attention-related modulations of target-evoked neural activity should be absent at all, or at least most, stages of target processing. Experiment 2 used the same counter-predictive cueing procedure that gave rise to the pattern of RT facilitation observed in Experiment 1 to test multiple stages of target processing for the presence of any attention-related modulations, either reflexive or volitional. The results could therefore provide insight as to whether observed RT effects arose from reflexive orienting processes alone, or rather, from reflexive and volitional processes activated concurrently.  188 Methods Participants Fourteen neurologically healthy undergraduate students (7 female) at the University of British Columbia participated with written consent. Participants were righthanded and had normal or corrected-to-normal vision. The data from eight participants had to be rejected, six due to an excessive number of eye movements, and two due to the absence of any recognizable P1 ERP component. Of the six remaining participants, five were female. The mean age was 25.17 years (sd = 8.06). Experimental procedures were approved by the University of British Columbia Behavioral Research Ethics Board. Participants were remunerated with psychology course credit for two hours of testing.  Apparatus and Electrophysiological Recording The testing apparatus was identical to that employed in Experiment 1, with the exception that EEG data were collected. All electrophysiological recording procedures were identical to those employed in all ERP experiments reported previously (Experiments 1 and 3 of Chapter 4, and Experiment 2 of Chapter 5).  Stimuli and Task The stimuli and experimental task were identical to those described in Experiment 1, with the exception that the SOA was jittered for the purpose of identifying reliable ERPs from the averaging of target-locked EEG epochs. The SOA ranges used were the same as those used in Experiment 2 of Chapter 5, including 100-300 ms, 400-600 ms, and 900-1100 ms.  189 Results Behavioral Data Eye movements occurred on 33% of trials, and these trials were removed from the behavioral analyses. Accuracy data, listed in Table 6.4, were analyzed with a 2 x 3 repeated measures ANOVA, with heading-target correspondence (consistent vs. inconsistent), and SOA (100-300 vs. 400-600 vs. 900-1100 ms) as within-subject factors. 
The ANOVA of accuracy scores indicated no significant difference in the accuracy of responses to targets presented at or opposite the heading point [F(1, 5) = 1.12, p = 0.34]. Additionally, neither the main effect of SOA [F(2, 10) = 0.45, p = 0.65] nor the heading-target correspondence x SOA interaction [F(2, 10) = 2.45, p = 0.14] was significant. Average RTs are plotted in Figure 6.3. Incorrect responses, response anticipations (RTs less than 100 ms), and failures to respond in a timely manner (RTs greater than 1000 ms) represented 16.25% of the data, and were removed from the RT analysis. The RT data were submitted to a 2 x 3 repeated measures ANOVA, with heading-target correspondence (consistent vs. inconsistent) and SOA (100-300 vs. 400-600 vs. 900-1100 ms) as within-subject factors. The ANOVA revealed a significant main effect of heading-target correspondence [F(1, 5) = 28.98, p = 0.003], indicating faster responses for targets appearing at the heading point than for targets appearing opposite the heading point. The main effect of SOA was also significant [F(2, 10) = 6.94, p = 0.01], indicating a standard foreperiod effect. The heading-target correspondence x SOA interaction was not significant [F(2, 10) = 0.59, p = 0.57]. This non-significant interaction effect was surprising given that the overall data pattern seemed to replicate almost exactly the pattern of results seen in Experiment 1, in which this interaction was significant. Upon closer inspection of the data, however, it seemed that there was a large amount of variability at the 500 ms SOA arising from differences in whether a volitional orienting process had come into play. Specifically, at the 500 ms SOA, 3/6 participants showed a pattern of RTs consistent with the reflexive orienting of attention to the heading point, and 3/6 participants showed a pattern of RTs consistent with volitional orienting to the location opposite the heading point. When these variable RT facilitation effects were averaged across participants, the result was a lack of any RT facilitation, either reflexive or volitional in nature. Clearly, given this large amount of variability, the small sample size did not provide adequate power to detect this interaction effect statistically.

Table 6.4: Accuracy data for Experiment 2
Accuracy data and standard deviations for each condition are listed. There were no significant differences in target discrimination accuracy between conditions.

SOA        Heading-Target Correspondence    Accuracy (% Correct)    SD
100 ms     Consistent                       87.01                   25.15
           Inconsistent                     79.66                   25.31
500 ms     Consistent                       82.23                   30.90
           Inconsistent                     80.82                   29.55
1000 ms    Consistent                       79.79                   27.57
           Inconsistent                     82.14                   24.31

Figure 6.3: RT data for Experiment 2
Mean RTs (ms) for each condition are plotted. Error bars represent the standard error. Responses to targets appearing at the heading point (consistent) were significantly faster overall than responses to targets appearing opposite the heading point (inconsistent).

Electroocculogram Data

The EOG data revealed that participants had a very difficult time maintaining fixation in the present experiment, necessitating the exclusion of data from six of the original fourteen participants. On average, these six excluded participants made eye movements on 54.05% of trials. Three excluded participants made eye movements on more than 62% of trials.
Even amongst the six participants included in the analysis, a substantial proportion (33%) of trials were associated with eye movements, and thus rejected from the EEG signal-averaging process. The inability of participants to maintain fixation reliably was an intriguing result, despite resulting in the loss of a great deal of ERP data. In order to determine whether eye movements tended to be directed toward the heading point, the EOG data for all trials  192 prior to artifact rejection were plotted as signal-averaged waveforms time-locked to motion onset for leftward and rightward motion. Figure 6.4 depicts the EOG data for the participants that were rejected from the analysis because of excessive eye movement artifacts. In these participants, it is evident that there was a very strong tendency to make eye movements to the heading point. Eye movements toward the heading point began to occur as little as 100 ms following the onset of motion, as evidenced by the first sharp positive deflection in the difference (right motion-left motion) waveform illustrated in Figure 6.4b. Figure 6.5 depicts a similar, albeit attenuated pattern in the EOG data prior to eye movement rejection for the participants that were included in the analysis. This rapid onset of eye movements toward the heading point replicates the EOG results of Experiment 1, and suggests that attention was overtly oriented to the heading point in a strongly reflexive manner.  193 Figure 6.4: HEOG waveforms for participants rejected from Experiment 2 Panel A represents HEOG activity, signal-averaged over all leftward and rightward motion trials. More negative values represent a tendency to make leftward eye movements. More positive values represent a tendency to make rightward eye movements. Panel B represents the difference waveform obtained by subtracting the EOG amplitudes for leftward motion trials from those for rightward motion trials. A positive deflection in the difference waveform represents eye movements toward the heading point.  194 Figure 6.5: HEOG waveforms for participants included in Experiment 2 Panel A represents HEOG activity, signal-averaged over all leftward and rightward motion trials. More negative values represent a tendency to make leftward eye movements. More positive values represent a tendency to make rightward eye movements. Panel B represents the difference waveform obtained by subtracting the EOG amplitudes for leftward motion trials from those for rightward motion trials. A positive deflection in the difference waveform represents eye movements toward the heading point.  Electrophysiological Data An average of 33% of trials, including both target-present and target-absent trials, were rejected due to EEG artifacts brought about by eye movements and blinks. ERPs were created by defining 3000 ms epochs beginning 1500 ms before stimulus onset. Event-related potentials (ERPs) were identified for both target-present and target-absent trials. In the case of target-present trials, ERPs were time-locked to the onset of the target stimulus. For target-absent trials, ERPs were time-locked to equivalent time-points following the onset of simulated motion through the scene. 
In the same manner as the other ERP experiments reported here, the ERP waveforms associated with target-absent trials were subtracted from the waveforms associated with target-present trials prior to statistical analysis so as to isolate the ERP associated with the onset of the target from any ERP associated with the ongoing motion in the display. The residual ERP waveforms were used in all statistical analyses. All amplitude measures, statistical analyses, and waveform displays were conducted relative to a -200 to 0 ms pre-stimulus baseline. Grand-averaged residual ERP waveforms time-locked to target onset are plotted in Figure 6.6. As a result of so many trials being rejected due to eye movements, the average ERP waveforms were based on a paucity of trials relative to the large number normally required to observe reliable ERPs, and were therefore quite noisy. This noisiness in the data can be seen in Figure 6.6 as fluctuations in the -200 ms to 0 ms (baseline) time window.

C1 Component

Peak amplitude measures for the C1 component associated with target onset were obtained by identifying the latency of the maximal negative amplitude between 60 and 90 ms post-target onset at lateral occipital electrode sites (OL and OR) contralateral to the visual field of the target, for consistent and inconsistent targets at the early SOA, in the grand-averaged residual waveforms. Voltage measures at those latencies were then obtained for each participant's residual waveforms. Peak C1 amplitude measures (Table 6.5) were analyzed with a 2 x 3 repeated measures ANOVA, with heading-target correspondence (consistent vs. inconsistent) and SOA (100-300 ms, 400-600 ms, or 900-1100 ms) as within-subject factors. The ANOVA revealed no statistically significant effects [all F's < 1.00, p's > 0.40].

P1 Component

Peak amplitude measures for the P1 component associated with target onset were obtained by identifying the latency of the maximal positive amplitude between 0 and 200 ms post-target onset at lateral occipital electrode sites (OL and OR) contralateral to the visual field of the target, for consistent and inconsistent targets at the late SOA, in the grand-averaged residual waveforms. Voltage measures at those latencies were then obtained for each participant's residual waveforms. Peak P1 amplitude measures (Table 6.5) were analyzed with a 2 x 3 repeated measures ANOVA, with heading-target correspondence (consistent vs. inconsistent) and SOA (100-300 ms, 400-600 ms, or 900-1100 ms) as within-subject factors. Despite the data being very noisy owing to the rejection of a large proportion of trials contaminated by eye movements, the ANOVA revealed that both the heading-target correspondence main effect [F(1, 5) = 2.56, p = 0.17] and the heading-target correspondence x SOA interaction [F(2, 10) = 2.44, p = 0.14] approached, but did not quite reach, statistical significance. The main effect of SOA was not significant [F(2, 10) = 0.22, p = 0.81]. Given the a priori prediction of an attention-related P1 modulation at the late SOA based on the results of the ERP experiments from Chapters 4 and 5, a paired t-test was conducted on P1 amplitudes at the late SOA. This specific test revealed a significantly higher amplitude P1 component for targets appearing at the heading point at the late SOA [t(5) = 3.87, p = 0.01].
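A minimal sketch of the residual-waveform and amplitude-measurement steps just described is given below, assuming per-condition average waveforms are available as NumPy arrays with the same 256 Hz sampling rate and 1500 ms pre-stimulus epoch used elsewhere in these experiments. The function names and data layout are hypothetical; the last helper anticipates the mean-amplitude measure used for the P3 component in the following subsection.

```python
import numpy as np

FS, PRE_S = 256, 1.5          # sampling rate (Hz) and pre-stimulus epoch length (s)

def residual_erp(target_present_avg, target_absent_avg, fs=FS, pre_s=PRE_S):
    """Subtract the target-absent average from the target-present average and
    re-baseline to the -200 to 0 ms pre-stimulus window (illustrative sketch)."""
    resid = target_present_avg - target_absent_avg
    base = slice(int((pre_s - 0.2) * fs), int(pre_s * fs))
    return resid - resid[base].mean()

def peak_amplitudes(grand_avg, subject_waves, window_s, polarity, fs=FS, pre_s=PRE_S):
    """Measure per-subject voltage at the grand-average peak latency.

    window_s : (start, end) in seconds post-target, e.g. (0.06, 0.09) for the C1
               or (0.0, 0.2) for the P1
    polarity : 'neg' for C1/N1 peaks, 'pos' for P1 peaks
    """
    i0 = int((pre_s + window_s[0]) * fs)
    i1 = int((pre_s + window_s[1]) * fs)
    seg = grand_avg[i0:i1]
    peak_idx = i0 + (np.argmin(seg) if polarity == 'neg' else np.argmax(seg))
    return np.array([wave[peak_idx] for wave in subject_waves])

def window_mean(wave, t_start, t_end, fs=FS, pre_s=PRE_S):
    """Mean amplitude in a fixed post-target window (e.g. 0.2-0.4 s for the P3 at CZ)."""
    i0 = int((pre_s + t_start) * fs)
    i1 = int((pre_s + t_end) * fs)
    return wave[i0:i1].mean()
```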
N1 Component

Peak amplitude measures for the N1 component associated with target onset were obtained by identifying the latency of the maximal negative amplitude between 180 and 280 ms post-target onset at lateral occipital electrode sites (OL and OR) contralateral to the visual field of target presentation, for each condition of interest, in the grand-averaged residual waveforms. Voltage measures at those latencies were then obtained for each participant's residual waveforms. Peak N1 amplitude measures (Table 6.5) were analyzed with a 2 x 3 repeated measures ANOVA, with heading-target correspondence (consistent vs. inconsistent) and SOA (100-300 ms, 400-600 ms, or 900-1100 ms) as within-subject factors. The ANOVA revealed no statistically significant effects [all F's < 1.50, p's > 0.25].

P3 Component

Amplitude measures for the P3 component associated with target onset were obtained by calculating the mean amplitude between 200 and 400 ms post-target onset at the central midline electrode site (CZ) for consistent and inconsistent targets at each SOA in each participant's residual waveforms. Mean P3 amplitude measures (Table 6.5) were analyzed with a 2 x 3 repeated measures ANOVA, with heading-target correspondence (consistent vs. inconsistent) and SOA (100-300 ms, 400-600 ms, or 900-1100 ms) as within-subject factors. The ANOVA revealed a significant heading-target correspondence x SOA interaction [F(2, 10) = 5.68, p = 0.02], indicating that the attention-related modulation of the P3 component varied across the different SOAs. Neither the main effect of heading-target correspondence [F(1, 5) = 0.14, p = 0.72] nor the main effect of SOA [F(2, 10) = 2.32, p = 0.15] was significant. The interaction was followed up with paired t-tests at each SOA. Despite the interaction having been significant, none of the follow-up t-tests reached significance [100 ms SOA: t(5) = 1.07, p = 0.33; 500 ms SOA: t(5) = 1.32, p = 0.24; 1000 ms SOA: t(5) = 1.08, p = 0.33].

Figure 6.6: Grand-averaged ERP waveforms for Experiment 2
a) Grand-averaged ERP waveforms, time-locked to the onset of the target, are plotted for consistent and inconsistent target conditions. Time (x-axis) ranges from -200 ms to 300 ms, with tick marks denoting 100 ms intervals. The y-axis crosses at 0 ms (target onset), and extends vertically to ± 2 µV. The top (HEOG) panel indicates the absence of eye movements associated with the onset of the target. The bottom panel depicts ERPs at contralateral lateral occipital (OL/OR) electrode sites. A significantly larger-amplitude P1 component for consistent targets was observed at the 900-1100 ms SOA.
b) HEOG and the central midline electrode site are shown here, with a time scale ranging from -200 ms to 600 ms. Again, tick marks along the x-axis denote 100 ms intervals. The attention-related amplitude modulation of the P3 component varied across SOA, with a larger-amplitude P3 component for targets appearing at the heading point at the 400-600 ms SOA.

Interactive Effects of Reflexive and Volitional Orienting

The grand-averaged ERP waveforms (Figure 6.6) reveal a pattern of attention-related modulations that is consistent with the hypothesized concurrent activation of reflexive and volitional orienting processes. Table 6.5 characterizes the data with respect to whether or not the observed waveforms appear to reflect the influence of a volitional orienting process.
The "Predicted Modulation" column indicates the direction of the amplitude difference between consistent (target at heading point) and inconsistent (target opposite heading point) trials that would be expected if a reflexive orienting process alone had given rise to the observed RT patterns. These predictions, made for the C1, P1, N1, and P3 components, were based on previous investigations detailing the nature of attention-related modulations brought about by reflexive and volitional orienting processes engaged in isolation, described in the introduction to this experiment (Hopfinger & Mangun, 1998, 2001; Fu et al., 2003; Hopfinger & Ries, 2005; Mangun & Hillyard, 1991; Handy & Khoe, 2005; Luck, 1995). The modulations predicted if a reflexive orienting process alone had given rise to the behavioral results were then compared with the observed ERP data. Differences in predicted and observed attention-related modulations of target processing were interpreted as reflecting the possible involvement of a volitional orienting process. Critically, the influence of a volitional orienting process on some ERP components but not others at any particular SOA would be indicative of concurrently activated reflexive and volitional processes. As indicated in Table 6.5, the results of this comparison between predicted and observed ERP component modulations suggest that a volitional orienting process may have influenced the observed ERP waveforms. This volitional influence is suggested by the ERP component amplitudes manifesting a pattern of attention-related modulations of target processing that is different from the modulations that would be expected to occur had a reflexive process acted alone to produce the observed behavioral results. While the overall pattern of results is consistent with this hypothesis, only two attention-related modulations were statistically significant: the reflexive P1 modulation at the 1000 ms SOA and the P3 modulation at the 500 ms SOA (as described above). These two comparisons are denoted in Table 6.5 with a '*'. Because the ERP analysis suffered from a lack of power owing to the rejection of so many trials and participants because of eye movements, the majority of these modulations were not statistically significant (denoted in Table 6.5 with a '?').

Table 6.5: Predicted and observed ERP component amplitude modulations
Peak amplitudes are listed for the C1, P1, and N1 components. Mean amplitudes are listed for the P3 component. The mean difference column reflects the component amplitudes for consistent targets minus the component amplitudes for inconsistent targets. The predicted modulation column reflects the amplitude modulation that would be expected to occur if the behavioral RT results had arisen purely as a result of a reflexive orienting process. Differences between predicted and observed modulations were interpreted as possibly reflecting the influence of a volitional orienting process. Asterisks indicate statistically significant effects. Question marks indicate statistically non-significant effects.

Component  SOA (ms)  Consistent        Inconsistent      Mean Difference  Predicted Modulation  Observed    Volitional
                      Amplitude   SD    Amplitude   SD    (con - inc)      (if reflexive)        Modulation  Influence?
C1         100       -1.59       1.75  -1.01       4.06  -0.59            C < I                 C < I       N?
C1         500        0.35       1.26  -0.35       2.23   0.7             C = I                 C > I       Y?
C1         1000      -0.28       4.08  -1.32       2.8    1.04            C = I or C < I        C > I       Y?
P1         100        1.94       3.73   2.8        3.74  -0.86            C = I or C > I        C < I       Y?
P1         500        2.97       4.94   2.54       5.22   0.42            C = I                 C > I       Y?
P1         1000       3.05       2.14   0.25       1.63   2.8             C > I                 C > I       N*
N1         100        1.24       5.17   0.77       3.18   0.46            C > I                 C > I       N?
N1         500        0.60       5.43  -0.63       3.68   1.23            C = I                 C > I       Y?
N1         1000       0.84       5.44  -3.08       3.55   3.91            C > I                 C > I       N?
P3         100        3.25       6.29   5.13       2.46  -1.88            C > I                 C < I       Y?
P3         500        8.88       4.03   6.5        2.75   2.38            C = I                 C > I       Y*
P3         1000       5.31       5.26   4.29       4.13   1.02            C > I                 C = I       Y?

Discussion

The results of Experiment 2 indicate that attentional orienting to the heading point in an optic flow field simulating self-motion is a strongly reflexive phenomenon. Despite the fact that participants were aware of the much greater likelihood that targets would appear at the location opposite the heading point, and were instructed to use this information to prepare volitionally for the onset of the target, responses were faster for targets at the heading point at the early and late SOAs. Therefore, the intention to attend to the opposite location, even if it did trigger a volitional orienting response in the neural systems underlying the spatial shifting of attention, could not completely override reflexive orienting to the heading point. The reflexive RT facilitation effect was accompanied at the late SOA by an attention-related sensory gain effect on the amplitude of the P1 component for targets appearing at the heading point. There appeared to be a small reflexive attention-related modulation of target processing in the striate cortex indexed by the C1 component, similar to that observed in Chapters 4 and 5. Because the ERP data were so noisy, however, particularly at the 100 ms SOA, this effect was not significant. Thus, while the results of the present experiment do suggest that the early visual processing of stimuli can be affected by a reflexive attentional sensitivity to the heading point in an optic flow field simulating self-motion, they cannot speak to the issue of whether the locus of attentional selection in the visual cortex can shift over time from striate to extrastriate cortex.
The second was the enhancement of late-stage target processing indexed by the P3 component at the 500 ms, but not the 100 ms and 1000 ms SOAs, indicated by a significant heading-target correspondence x SOA interaction in the analysis of P3 amplitudes. This result is consistent with previous ERP studies investigating the interaction of reflexive and volitional orienting, which demonstrate a tendency for volitional processes to dominate at later stages of processing (Hopfinger & Ries, 2005; Hopfinger & West, 2006). Even in the absence of any other significant attention-related modulations of target processing, these two effects alone, while perhaps not conclusive, are at least consistent with the hypothesis that reflexive and volitional orienting processes cooccurred to produce the behavioral orienting response. In order to make any firm conclusions regarding the specific interaction of reflexive and volitional orienting processes that may unfold over time across multiple stages of target processing, the experiment would have to be repeated, taking precautionary measures to increase both the number of trials included in the ERP signal averaging procedure, as well as the overall sample size.  205 Summary of Experiments 1 and 2 Overall, the results of Experiments 1 and 2 demonstrate unequivocally that reflexive attentional orienting to the heading point in an optic flow field simulating selfmotion is a strongly automatic process. Both covert attention, as measured with RTs and ERPs, and overt attention, as measured with eye movements, tends to be oriented very rapidly to the heading point in a self-motion simulation. Furthermore, the use of a counter-predictive spatial cueing procedure established that attention was rapidly oriented to the heading point even when the task required participants to attend to the opposite location. Critically, while covert and overt reflexive orienting effects were resistant to being overridden by a volitional orienting process, there was behavioral and ERP evidence to suggest that reflexive orienting effects were influenced by the co-occurrence of a volitional orienting process. This volitional influence was particularly apparent at the late stage of cognitive processing indexed by the P3 component. The prominent tendency for participants to make eye movements toward the heading point is reminiscent of the oculomotor capture phenomenon described in Experiment 1. Oculomotor capture has typically been observed in the context of visual search tasks using primitive visual stimuli such as abrupt onsets of basic shapes and saccadic responses to target stimuli (e.g. Theeuwes et al., 1998; Hunt, von Muhlenen, & Kingstone, 2005). If the eye movements toward the heading point in the experiments reported here do in fact reflect oculomotor capture, this would represent the first demonstration of oculomotor capture in a complex, dynamic experimental context designed to simulate the visual stimulation encountered while navigating the real world.  206 Furthermore, the presence of behavioral and sensory gain effects at the latest SOA tested, as well as persistence of the tendency to make eye movements toward the heading point across all three SOAs, indicates that reflexive orienting to the heading point was not merely a fleeting process triggered purely by the onset of the motion stimulus. 
Rather, this persistence of the effects of reflexive orienting on behavior suggests that this process may actually be a robust phenomenon capable of influencing ongoing behavior in a continuous manner. This result can be contrasted with automatic orienting effects observed in response to the primitive, low-level visual stimuli typically employed in spatial cueing experiments. Under these more artificial conditions, reflexive orienting effects also occur rapidly, but do not normally persist for more than half a second. In contrast, the meaningful, behaviorally relevant navigational information provided by patterns of optic flow, such as the direction of heading, triggers longer-lasting attentional orienting effects, suggesting that this automatic orienting process may be sufficiently robust to play an important role in guiding navigational behavior in the continuous, dynamic conditions that characterize our experience in the real world.  207 Chapter 7: General Discussion Most of what is known about automatic attentional orienting has been based on laboratory studies investigating how attention is oriented in response to primitive, lowlevel visual stimuli that are inherently salient, such as abrupt onsets, or objects unique on some basic feature dimension. While the use of such basic stimuli has afforded precise experimental control of the visual input upon which spatial selection may occur, it has also required the stripping down of the richness, complexity, and dynamism that define the meaningful visual experiences of everyday life. As a result, the possibility that attention can be oriented automatically on the basis of more complex visual stimuli that may be of particular relevance to an observer has been comparatively neglected in attention research. The present dissertation investigated this possibility.  Summary The human information processing system, while sufficiently complex to enable the diversity and flexibility that characterizes human behavior, is nevertheless limited with respect to how much information it can handle at once. Visual perception is one cognitive domain in which this principle is readily apparent. While the visual system takes in a vast array of sensory information spanning the entire visual field, one can only fully analyze, interpret, be aware of, and respond to a limited subset of this information at any give time. Visuospatial attention is the means by which sensory information within a certain portion of the visual field is selected for cognitive analysis. The act of visuospatial selection is referred to as attentional orienting. Critically, while attention can be oriented  208 volitionally, with the specific intention of bringing one’s cognitive resources to bear on a particular location within the visual field, it can also be oriented automatically. The studies presented in this dissertation were designed to investigate the basis for the automatic orienting of attention – how certain stimuli may serve as cues for the attentional selection of a particular region of visual space without willful effort on the part of the observer. Chapter 1 presented a brief review of the state of attention research today, beginning with a look at early theories of attention arising out of information processing research. Next, the phenomenon of attentional capture was described, and the general findings from this area of research were surveyed. 
The key idea to come out of this discussion was that to date, attentional capture research has investigated almost exclusively the automatic orienting of attention to salient, low-level feature singletons and abrupt events. The spatial cueing procedure was introduced as the state-of-the-art experimental paradigm for investigating the cognitive processes underlying attentional orienting, and experimentally dissociating reflexive and volitional orienting processes. Indeed, the spatial cueing procedure opened up valuable empirical ground, but it was developed with the specific intent of removing all possible associative meaning from cue stimuli. While the intentions behind this methodological move were good – to avoid confounding spatial selection per se with the learned associations evoked by meaningful stimuli – the result has been the proliferation of models of attention that provide little insight regarding how visuospatial orienting is actually carried out amidst the continuously changing, complex, and meaningful visual information present in the real world.  209 Chapter 1 concluded with the proposal motivating the present dissertation that complex stimuli having the potential to provide meaningful, behaviorally relevant information may serve to guide the automatic orienting of attention. The automatic visuospatial selection of behaviorally relevant information seems not only plausible, but also necessary, when one considers that the effective control of behavior demands that perceptual, motor, and cognitive systems have access to appropriate visual information on a moment-to-moment basis. Given the limited cognitive and neural resources available for processing information at any one time, combined with the fact that certain visual information will be particularly relevant to the ongoing control of behavior, it makes sense that the attention system would promote orienting to relevant stimuli above others. While most spatial cueing studies of reflexive orienting have employed primitive, meaningless stimuli, more recent research suggests that attentional orienting can be triggered automatically by meaningful stimuli such as eye gaze, arrows, or inherently graspable objects. The finding that eye gaze could trigger automatic attentional orienting to a gazed-at location was the first demonstration of reflexive orienting to a peripheral location indicated by a directionally meaningful cue presented centrally. This unique result led to the proposal that eye gaze represents a “special” type of attentional cue, owing to its status as a potent source of social information. Chapter 2 sought to directly test the hypothesis proposed by several authors that automatic orienting to eye gaze represents a unique type of attentional orienting underlain by a specialized neural system. Prior to the experiments presented in Chapter 2, there was indirect evidence to suggest this possibility. Among this evidence was the distinct behavioral RT profile associated with automatic orienting to eye gaze as opposed to salient peripheral flashes, as well as  210 the proposed existence of specialized neural systems for processing eye gaze stimuli. The experiments presented in Chapter 2, however, represent the first direct, controlled test of this hypothesis. The fMRI results indicated that reflexive attentional orienting to both eye gaze and arrow cues engaged extensive dorsal and ventral fronto-parietal attention networks. Thus, the same network of brain areas subserved orienting to both types of cues. 
Eye gaze cues, however, more vigorously engaged two regions of ventral frontal cortex known to be associated with attentional re-orienting to salient, meaningful stimuli, as well as lateral occipital regions. An ERP study demonstrated this enhanced occipital response was attributable to a higher amplitude sensory gain effect for targets appearing at locations cued by an eye gaze stimulus than for those cued by an arrowhead. Thus, while eye gaze cues did not engage a unique attentional orienting network, they did serve as a more potent cue for enhancing the early sensory processing of stimuli at attended locations, i.e., the effect was not qualitative but quantitative. The greater BOLD activity in ventral frontal regions was suggestive that this enhanced sensory gain effect was due to eye gaze cues being treated by the orienting system as particularly salient or meaningful, possibly owing to their status as socially relevant stimuli. The idea that eye gaze may serve as a particularly potent signal for the automatic orienting of attention on the basis that it provides information useful for the control of socially appropriate behavior raised the possibility that there are other behavioral domains which may stand to benefit by automatic attentional orienting to relevant stimuli. Chapter 3 proposed navigation as one such behavioral domain. For a healthy individual, the control of navigational behavior feels almost effortless. In order to follow  211 a path, avoid obstacles, and approach goals, however, perceptual and motor systems must be coordinated seamlessly, and perceptual information pertinent not only to the basic functions of navigational control (such as heading maintenance), but also to the individual’s ongoing behavioral goals, must be continuously updated. As a result of attention research being limited for the most part to the investigation of orienting in response to discrete, static events, little is known about how visual attention may operate in dynamic real-world environments to facilitate the gathering and processing of the visual information required for effective navigation. In order to explore how attention may operate dynamically as one moves through the environment, Chapter 3 investigated patterns of eye movement activity while participants viewed movies depicting navigation through various everyday environments from a first-person perspective. The results demonstrated systematic patterns of fixation activity indicating the preferential allocation of attention to meaningful stimuli, particularly other people in the scene and the heading point of motion. Differences in the temporal dynamics of fixation activity, characterized by long pursuit fixations on people, and short, frequent fixations on the objects defining the path, suggested that multiple demands on attention in dynamic, realistic contexts may be resolved via attention being oriented to important stimuli only long enough to extract the relevant information. The experiments presented in Chapter 4 were designed to test the possibility that attention can be automatically oriented to the heading point in an optic flow field depicting visual stimulation akin to that which might be encountered during self-motion. While this possibility was suggested by the open-ended investigation of eye movements conducted in Chapter 3, Chapter 4 used a controlled experimental task to assess the  212 temporal dynamics of this orienting effect. 
A variation of the spatial cueing procedure was used, in which computer-generated simulations depicted motion toward a point on the left or right of a central fixation stimulus, and participants were required to respond to a target stimulus appearing either at or opposite the heading point. The results demonstrated that attention was automatically oriented to the heading point in an optic flow field simulating self-motion through a virtual environment, but only when a sufficient amount of textural and depth information was provided. Attentional orienting to the heading point was inferred in the same manner as previous research using the spatial cueing procedure – faster responses to targets appearing at cued as opposed to uncued locations. In this case, the heading point was the cued location, and the corresponding point on the opposite side of the display was the uncued location. The automaticity of the orienting effect was inferred based on the rapid occurrence of RT facilitation for targets appearing at the heading point even though participants were correctly informed that the motion direction did not predict the location of the target. An ERP study provided the first demonstration that the early visual processing in striate cortex indexed by the C1 component could be modulated by reflexive orienting to a particular spatial location. This effect indicated that automatic orienting to the heading point served to increase the sensory-evoked excitability of neurons representing that region of visual space in the primary visual cortex. While present at the early SOA, this C1 modulation was replaced at the late SOA by an enhancement of the P1 component. This novel pattern of results led to the intriguing hypothesis that with continued exposure to the optic flow stimulus, the locus of attentional selection in the visual cortex can shift  213 from striate to extrastriate cortex. The absence of a behavioral effect in the ERP study, however, made it difficult to draw firm conclusions regarding this hypothesis. Based on the finding from Chapter 4 that attentional orienting to the heading point only occurred when static objects in the scene provided additional texture and depth information, it was hypothesized that further improvements to the realism of the motion simulation would result in larger, more reliable behavioral orienting effects. Chapter 5 made several changes to the self-motion simulation in order to better approximate the patterns of optic flow that accompany self-motion in the real world. The motion simulation was improved by increasing the field of view across which it was displayed, as well as increasing the visual fidelity of the display through the use of natural colors and textures. As predicted, the results revealed a behavioral RT facilitation effect for targets appearing at the heading point that was nearly three times larger than that observed in Chapter 4. The fact that attentional orienting to the heading point in an optic flow field is more robust under conditions that better approximate the visual stimulation that accompanies self-motion in the real world is indicative that the neural processes enabling automatic orienting are particularly sensitive to visual stimuli with the potential to provide behaviorally relevant information. In addition to the larger behavioral orienting effects brought about by the improved motion simulation, the ERP results from Chapter 4 indicating early C1 and later P1 modulation were replicated. 
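To make these component-level comparisons concrete, the sketch below illustrates in schematic form how an attention-related modulation of the C1 and P1 components can be quantified: single-trial epochs are averaged separately for targets appearing at the heading point (cued) and at the opposite location (uncued), and the mean amplitude within each component's latency window is compared across the two conditions. The sketch is written in Python; the data are synthetic, and the sampling rate, latency windows, and effect sizes are assumptions chosen for illustration rather than the parameters of the experiments reported here.

    import numpy as np

    # Synthetic epoched data: trials x timepoints, sampled at 250 Hz, with each
    # epoch running from -100 ms to +400 ms around target onset. All values
    # below are illustrative assumptions, not recorded data.
    rng = np.random.default_rng(0)
    fs = 250
    times = np.arange(-0.1, 0.4, 1.0 / fs)  # seconds relative to target onset

    def make_epochs(n_trials, c1_gain, p1_gain):
        # Simulate single-trial waveforms containing a C1-like deflection (~70 ms)
        # and a P1-like deflection (~110 ms), plus Gaussian noise.
        c1 = -c1_gain * np.exp(-((times - 0.07) ** 2) / (2 * 0.012 ** 2))
        p1 = p1_gain * np.exp(-((times - 0.11) ** 2) / (2 * 0.015 ** 2))
        return (c1 + p1) + rng.normal(0.0, 1.0, size=(n_trials, times.size))

    cued = make_epochs(120, c1_gain=2.5, p1_gain=3.0)    # targets at the heading point
    uncued = make_epochs(120, c1_gain=2.0, p1_gain=2.5)  # targets opposite the heading point

    def mean_amplitude(epochs, t_min, t_max):
        # Average voltage within a latency window, then across trials.
        window = (times >= t_min) & (times <= t_max)
        return epochs[:, window].mean()

    # Assumed component windows: C1 = 50-90 ms, P1 = 90-140 ms post-target.
    for name, lo, hi in [("C1", 0.05, 0.09), ("P1", 0.09, 0.14)]:
        effect = mean_amplitude(cued, lo, hi) - mean_amplitude(uncued, lo, hi)
        print(f"{name} cued-minus-uncued mean amplitude: {effect:+.2f} (arbitrary units)")

In the actual analyses, differences of this kind would be computed separately for each SOA and evaluated statistically across participants; the sketch is intended only to make the logic of the cued-versus-uncued amplitude comparison explicit.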
Thus, upon continued exposure to an optic flow field, the locus of attentional selection in visual cortex can indeed shift from striate to extrastriate cortex. It was proposed that this shifting locus of attentional selection might be underlain by a change in the source of the attentional signal serving to bias stimulus processing in visual cortices representing the location of the heading point. Specifically, the rapid enhancement of sensory processing in the primary visual (striate) cortex may be underlain by an attentional biasing signal based purely on information regarding the physical properties of the stimulus that would be available at an early SOA. The later enhancement of the P1 component, on the other hand, may be the result of an attentional biasing signal based on the results of a higher-level cognitive analysis that would not be available until a later SOA. Finally, Chapter 5 revealed that targets appearing at the heading point were subject to an attention-related modulation of the P3 component at both the early and the late SOAs. This finding suggests that even very rapidly following motion onset, the higher cognitive processing of target stimuli appearing at the heading point was enhanced relative to stimuli appearing elsewhere, based on the initial visual processing of the motion stimulus. The persistence of this effect at the late SOA, at which time the outputs of an evaluative cognitive analysis would be available, suggests that the automatic P3 modulation was sufficiently robust to occur despite the availability of top-down information regarding the irrelevance of the motion direction for performing the experimental task. The P3 component reflects a multitude of higher cognitive processes, including the recognition of a behaviorally relevant stimulus. These findings are therefore consistent with the idea that the heading point in an optic flow field is a particularly potent cue for the automatic orienting of attention because it is a behaviorally relevant stimulus critical to coordinating safe, effective navigational behavior. While Chapters 4 and 5 provided evidence that orienting to the heading point would occur automatically when there was no explicit reason to attend anywhere else, the question of whether it would occur despite the volitional intention to attend elsewhere remained. Chapter 6 investigated whether reflexive attentional orienting to the heading point would meet this criterion for strong automaticity. The results of a counter-predictive spatial cueing procedure revealed that orienting to the heading point was in fact strongly automatic, occurring even when participants knew that the target was far more likely to appear at the opposite location. The temporal profile of the RT facilitation effect, however, hinted that the volitional intention to orient to the location opposite the heading point might have come into play at the middle 500 ms SOA, when the RT facilitation effect was eliminated. An ERP study sought to investigate the possibility that reflexive and volitional orienting processes could co-occur, interacting to produce the final observed behavioral effects. The pattern of attention-related modulations of various stages of target processing was certainly consistent with this hypothesis. The statistical analyses of these modulations, however, were compromised by the presence of noise in the data resulting from the rejection of a large proportion of trials (and participants) due to the presence of eye movements.
Yet there were two key significant effects, each providing support for the hypothesis that reflexive and volitional orienting processes can co-occur. A robust P1 sensory gain effect for stimuli appearing at the heading point suggested the dominance of a reflexive orienting process at an early stage of processing. At the same time, however, the P3 component exhibited an attention-related modulation consistent with volitional orienting, suggesting the dominance of a volitional orienting process at this later stage of stimulus processing. Combined, these results indicate the co-activation of opposing reflexive and volitional orienting processes in response to the same stimulus display. In addition to behavioral and ERP measures of attentional orienting, Chapter 6 included an analysis of eye movement activity. While the primary focus of the motion simulation experiments presented in Chapters 4-6 was to investigate automatic covert attentional orienting to the heading point, the consistent finding that eye movements occurred on many trials suggested that patterns of eye movement activity might actually provide meaningful insights regarding attentional orienting to patterns of optic flow. Given the fact that in most circumstances, eye movements are associated with corresponding shifts of attention, it was reasoned that systematic patterns of eye movements triggered by the motion simulations could provide an ecologically valid measure of attentional orienting to naturalistic patterns of optic flow. The results of both experiments in Chapter 6 clearly indicated a tendency for participants to make eye movements toward the heading point. The very rapid occurrence of these eye movements following the onset of motion, their occurrence despite the heading point being counter-predictive of the target location, and the anecdotal observation that participants seemed to be unaware that they were making these eye movements indicated that eye movements were directed toward the heading point reflexively. The tendency to make reflexive eye movements toward the heading point may reflect the occurrence of oculomotor capture, a phenomenon that has only been demonstrated previously in the context of visual search tasks utilizing primitive visual stimuli.

Conclusions

The present dissertation was aimed at investigating whether attention can be oriented automatically to complex, high-level visual stimuli that provide a significant source of behaviorally relevant information. While there is a great deal of research suggesting that low-level stimulus features can serve as the basis for attentional orienting, what has been comparatively neglected is how stimuli that are meaningful or significant to us may influence the orienting of attention in a manner that cannot be explained by low-level selection. Walking by an attractive person on the street, you might do a double-take. If you crack a joke about your boss, only to see your fellow employees looking behind you towards the door, you're compelled to turn around and look, despite the sinking feeling that you're in for it. And upon looking at your old high school class photo, you can't seem to see anything else but that horrible hairdo you were sporting. What these examples point to is the notion that our attention can be drawn to things – objects, events, and locations – because they are meaningful to us, not necessarily because they are unique on some basic stimulus dimension.
The present research was approached from the perspective that attention facilitates the gathering of visual information relevant to our ongoing behavior in a meaningful, dynamic world. To the extent that the neural mechanisms of attention evolved as a means of processing visual information efficiently in the service of guiding adaptive, context-appropriate behavior, one would expect that the attention system is responsive to complex stimuli with the potential to provide behaviorally relevant information. The broad experimental approach taken by this dissertation has therefore been to investigate the orienting of attention to complex, meaningful stimuli as a means of gaining insight into the functioning of an attention system that streamlines and coordinates the processing of visual information appropriate to particular environmental contexts and behavioral goals. From this perspective, the meaning or behavioral relevance conveyed by visual stimuli is not an experimental confound (as was originally assumed by the researchers who developed the spatial cueing procedure), but rather an integral factor in the functioning of the human attention system that warrants empirical investigation. As a first step in this direction, the experiments presented in this dissertation investigated whether attention could be oriented automatically to stimuli with the potential to provide behaviorally relevant information. These investigations have provided new insights, detailed below, regarding how attention may operate in the complex, meaningful, dynamic conditions that define our everyday experience. The results support several broad conclusions, each of which will be discussed in turn.

Conclusion 1: Attention can be oriented reflexively to complex, meaningful stimuli.

The findings that attention can be oriented automatically to eye gaze stimuli, arrow stimuli, and the heading point in an optic flow field indicate that visuospatial attentional selection can occur on the basis of complex visual stimuli that are not fully represented along unique feature channels at a low-level stage of processing in the visual system. Chapter 2 presented data indicating that both eye gaze and arrow cues could trigger attentional orienting. Critically, the same physical stimulus was used to represent both types of stimuli. An ambiguous cue stimulus was designed such that a different side of the display (left or right of fixation) would be cued depending on how it was perceived. For example, an eye gazing to the left was the same physical stimulus as an arrow pointing to the right. RT facilitation indicating automatic attentional orienting was observed for both types of cue stimuli. Therefore, there was nothing inherent in the physical characteristics of the stimulus that determined the direction of attentional orienting. Rather, attention was oriented automatically in the direction appropriate to the percept, which was achieved and maintained based on a high-level cognitive interpretation of the cue stimulus. The fact that attention was oriented automatically on the basis of a meaningful interpretation of an otherwise ambiguous physical stimulus indicates that high-level representations can trigger automatic orienting. Additional evidence for this claim is provided by the fact that the cue stimuli in these experiments were presented centrally, but triggered orienting to peripheral locations; the logic of this non-predictive central-cueing design, and of the RT facilitation measure used to infer orienting, is sketched below.
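As a concrete illustration of how the facilitation effect just described is measured, the sketch below (Python) constructs a non-predictive central-cue trial list, in which the direction of the cue is uncorrelated with the side on which the target appears, and then computes the cueing effect (uncued minus cued reaction time) separately for trials on which the ambiguous stimulus was interpreted as a gaze cue and trials on which it was interpreted as an arrow. The trial counts, reaction times, and effect size are invented for the example and are not data from these experiments.

    import random
    from statistics import mean

    random.seed(1)

    def make_trials(cue_type, n=200):
        # Non-predictive design: cue direction and target side are independent,
        # so the cue indicates the upcoming target location on only half the
        # trials. All numbers here are invented for illustration.
        trials = []
        for _ in range(n):
            cue_dir = random.choice(["left", "right"])
            target_side = random.choice(["left", "right"])
            cued = cue_dir == target_side
            rt = random.gauss(380, 40) - (15 if cued else 0)  # small simulated benefit at the cued location
            trials.append({"cue_type": cue_type, "cued": cued, "rt": rt})
        return trials

    trials = make_trials("gaze") + make_trials("arrow")

    for cue_type in ("gaze", "arrow"):
        cued_rts = [t["rt"] for t in trials if t["cue_type"] == cue_type and t["cued"]]
        uncued_rts = [t["rt"] for t in trials if t["cue_type"] == cue_type and not t["cued"]]
        facilitation = mean(uncued_rts) - mean(cued_rts)  # positive = faster responses at the cued location
        print(f"{cue_type}: cueing effect = {facilitation:.1f} ms")

Because the cue is non-predictive, a reliable positive cueing effect of this kind cannot reflect a deliberate strategy of attending to the cued side, and is therefore taken as evidence that orienting occurred automatically.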
This represents a fundamental difference from typical studies of reflexive attentional orienting, which normally elicit shifts of attention via abrupt flashes presented in the periphery. While this typical form of reflexive cue provides direct information regarding the cued location that is inherent in its low-level sensory representation, automatic orienting to a location indicated by an eye gaze or arrow cue requires the prior interpretation of the cue’s directionality that is not inherent in the physical stimulation. Chapters 4-6 found that the heading point in an optic flow field could also trigger automatic attentional orienting. The heading point in an optic flow field is a complex, behaviorally relevant visual stimulus represented at a high level of the visual system. The FOE in an optic flow field is a significant source of heading information, the perception  220 of which is dependent upon the integration of activity at multiple levels of the brain’s complex, hierarchically organized motion processing system (Culham et al., 2001; Zeki, 1993). The high-level representation of the FOE in the visual system attests to its complexity as a visual stimulus (see Appendix A for an overview of how optic flow is processed by the visual system). Also important to note here is that the FOE, while being a key source of heading information, is by no means the sole contributor to heading perception. There are other sources of motion information present in the optic flow field that are known to play a role in the perception of heading, including the parallactic displacements of objects in the scene with respect to the observer (Priest et al., 1985; Cutting et al., 1992). The final perception of heading, therefore, is based on the convergence of multiple complex motion signals. This point is especially relevant in light of the finding from Chapter 4 that orienting to the heading point occurred only when sufficiently robust texture and depth cues were present. What this suggests is that automatic attentional orienting to the heading point does not occur solely based on the mere presence of a radial FOE. Rather, it seems that attention is oriented on the basis of a motion signal compelling enough to provide a high-level perception of heading. Combined, these findings support the notion that the neural processes underlying reflexive attentional orienting have the capacity for sensitivity to the high-level representations that imbue a particular stimulus with meaning. Certainly, this is the case in the experiments reported here. The present results, therefore, open up the possibility that high-level interpretive representations can provide top-down input to attentional control systems and trigger attentional orienting automatically, that is, without volitional  221 intention on the part of the observer. Similarly, the results imply that top-down inputs to attentional control processes can occur without volitional intention. This idea represents a departure from models of attention based on traditional spatial cueing experiments, which typically equate reflexive orienting with exogenous, peripheral cueing (e.g. Vecera & Rizzo, 2006).  Conclusion 2: Behaviorally relevant stimuli are potent cues for reflexive orienting. There were several observations throughout the experiments reported here to suggest that behaviorally relevant stimuli may serve as particularly potent cues for reflexive orienting. 
Previous research finding that eye gaze cues presented at fixation could trigger reflexive orienting cited this result as evidence that orienting to eye gaze may be a unique form of attentional orienting underlain by a specialized neural system. To the contrary, the results of Chapter 2 indicated that the same attentional orienting network was engaged by both eye gaze and arrow cues, suggesting that attentional orienting to eye gaze was not underlain by a specialized neural system per se. There was evidence, however, that eye gaze served as a particularly potent cue for reflexive orienting, including increased BOLD activity in ventral frontal regions associated with recognizing relevant stimuli, and a larger attention-related sensory gain effect in extrastriate visual cortex. Therefore, while eye gaze cues are not "special" in the sense of engaging a unique orienting network, they are "special" in the sense of engaging a common orienting network more robustly than arrow cues. Further evidence for the idea that behaviorally relevant stimuli may be potent cues for attentional orienting was provided by the eye tracking study presented in Chapter 3, which revealed a tendency for participants to fixate both the socially and navigationally relevant aspects of the scene to a greater extent than any other objects in the scene. This tendency to fixate particular stimuli of potential utility in guiding navigational or social behavior under realistic free-viewing conditions suggests that these stimuli may be compelling cues for the automatic orienting of attention. Additionally, Chapter 5 revealed that larger, more reliable reflexive orienting effects were triggered in response to an improved motion simulation that more closely approximated naturalistic patterns of optic flow, relative to those observed in Chapter 4. Given the relevance of the heading point in an optic flow field for guiding navigational behavior in the real world, it follows that a more compelling simulation of the heading point would provide a more behaviorally relevant stimulus. The ERP results supported this idea by indicating an attention-related enhancement of the P3 component that was present even at the early SOA. This finding suggested that immediately following the onset of motion, the heading point was being processed preferentially at a late stage of associative processing thought to underlie the recognition of the significance of a stimulus for behavior (Muller & Hillyard, 2000). Thus, Chapter 5 revealed that reflexive orienting to the heading point was bolstered when the potential behavioral relevance of the motion simulation was increased. The ERP experiments in Chapters 4 and 5 presented the novel finding that reflexive orienting to the heading point in an optic flow field resulted in an attention-related modulation of the C1 component. Thus, orienting to this relevant aspect of an optic flow field can enhance the earliest cortical processing of stimuli appearing at the attended location. While several researchers have previously tested for this modulation, it has never been observed in the context of a spatial cueing task utilizing the primitive visual stimuli typically used to cue attention (Hillyard & Anllo-Vento, 1998; Hopfinger & Mangun, 1998; 2001; Luck, 1995). Neurophysiological research with monkeys, however, has indicated via single-unit recording techniques that attention can modulate neural activity in striate cortex (Motter, 1993).
This discrepancy suggests the possibility that attentional orienting based on the primitive cue stimuli typically employed in spatial cueing studies may not be sufficiently robust to drive a sensory gain effect in striate cortex large enough to be detectable using ERP methodology. It is possible that the emergence of the C1 modulations observed in the experiments presented here was the result of cueing attention with dynamic, meaningful, behaviorally relevant stimuli.

Conclusion 3: Attentional orienting is a continuous process.

Most of what is known about attentional orienting processes has been based on studies cueing attention with not only low-level, but also discrete visual stimuli. Typically, reflexive cueing is elicited by briefly flashing a peripheral cue. Because the cue stimulus is typically removed within about 50 ms, there is no opportunity for the accrual of additional visual information beyond that which is available at the moment the cue is presented. This type of stimulation does exceptionally little to approximate real-world visual experience. In natural circumstances, visual experience is continuous, such that cues for the orienting of attention persist over time. The eye tracking study presented in Chapter 3 was conducted in order to gain insight into how attention might operate in the continuous, dynamic conditions that define the experience of the real world. The results revealed a dynamic pattern of eye movement activity characterized by a continuous interplay in which brief, automatic orienting to the navigationally relevant aspects of the scene occurred intermittently amidst longer pursuits of other people in the scene. Importantly, there were no "gaps" in fixation activity. That is, attention was overtly oriented from one scene element to the next, without interruption or hesitation. Thus, attentional control processes were continuously engaged so as to promote effective eye movement behavior while viewing the navigational scene. The particular scene element fixated at any given time seemed to be determined based on the momentary relevance of that element with respect to the greater context of the scene. For example, pursuit fixations of people in the scene were more likely to be initiated automatically when people were located at the center of the scene or near the observer than when they were in the periphery. Similarly, fixations to the heading point were more likely to occur automatically when the heading point was visible in the distance than when it was constrained to a near location. While speculative, it seems plausible that this ongoing interplay was fueled by 1) the implicit monitoring of the relevance or importance of various stimuli in the scene, and 2) the orienting of attention when a particular stimulus gained superiority as a behaviorally relevant stimulus. Chapter 3 provided evidence that continuous visual stimulation resulted in a continuous succession of orienting behaviors sensitive to the momentary relevance of the various elements in the scene. Chapters 4 and 5, on the other hand, provided evidence to suggest that rather than being a discrete, "all-or-none" process, a single instance of reflexive orienting can evolve continuously. Specifically, the ERP experiments in Chapters 4 and 5 provided the first demonstration that the visuocortical locus of attentional selection can shift over time with continued exposure to a cue stimulus.
Upon continued viewing of the optic flow field, the sensory gain effect for targets appearing at the heading point shifted from being manifested in the primary visual cortex as a modulation of the C1 component to being manifested in extrastriate areas as a modulation of the P1 component. It is likely that this shift was the result of the accrual of additional high-level information regarding the flow field that became available to extrastriate cortex via feedback projections from high-level association areas at the later SOA . This novel finding highlights the idea that traditional cueing studies using discrete cue and target events may not provide the means to capture the fundamentally dynamic, continuous nature of attentional orienting processes.  Conclusion 4: Reflexive and volitional processes interact to guide attentional orienting. According to the central thesis of this dissertation, visuospatial attention serves to streamline the vast array of incoming visual information so as to provide perceptual, cognitive, and motor systems with information pertinent to coordinating effective behaviors. The activities of daily life necessitate the ability to both respond appropriately to important stimuli in the environment, and pursue behaviors in accordance with goals and intentions. Throughout this dissertation, it has been argued that the visuospatial selection of behaviorally relevant information can occur automatically. While this process would certainly promote appropriate behavioral responses to environmental stimuli, the ability to orient attention volitionally would still be necessary in order to actively pursue one’s ongoing behavioral goals. Thus, reflexive and volitional orienting  226 processes must work in tandem so as to mediate the requirements for coordinating context-appropriate behaviors. It is possible that the results of the eye tracking study presented in Chapter 3 echo this logic. The ongoing patterns of fixation activity, including brief, frequent fixations to the objects defining the path dispersed intermittently amongst pursuit fixations to people that occurred less frequently, but were longer-lasting, may reflect the continuous, dynamic interplay between reflexive and volitional orienting processes. These results suggest that overt attentional orienting behavior while viewing real-world navigational scenes reflects the continuous resolution of reflexive and volitional influences by the eye movement (overt attentional orienting) system. In order to examine the functioning of each process independently, however, reflexive and volitional orienting processes are typically dissociated experimentally through the use of peripheral (non-predictive) and central (predictive) cues. Understandably, the consequence of this dissociation has been to conceptualize the effects of attentional orienting as being either reflexive or volitional in nature. The present research suggests, however, that reflexive and volitional orienting processes may interact to produce a given behavioral orienting effect that is neither solely reflexive nor solely volitional in nature, but rather reflects the contribution of both reflexive and volitional processes. Chapter 6 presented data to support this hypothesis. 
Specifically, the pattern of RT facilitation when reflexive and volitional processes were placed in opposition through the use of a counter-predictive spatial cueing task indicated that while attention was oriented automatically to the heading point at the early and late SOAs, this effect was absent at the mid-range SOA. This finding suggested that a volitional process had come into effect at the mid-range SOA, temporarily obscuring the RT facilitation brought about by reflexive orienting to the heading point. The ERP experiment presented in Chapter 6 provided some converging evidence for this hypothesis by revealing that the P1 component displayed an attention-related modulation consistent with a reflexive orienting process, while the P3 component displayed an attention-related modulation consistent with a volitional orienting process at the mid-range SOA. While there is a general consensus among attention researchers that volitional preparatory processes and strategic factors can influence the automatic orienting of attention by instantiating generalized attentional control settings (Yantis & Johnston, 1990; Theeuwes, 1991; Lupianez et al., 2001; Folk, Remington & Johnston, 1992), the present results speak to a subtly different idea. The influence of the proposed volitional orienting process at the mid-range SOA did not reflect the overall suppression of reflexive orienting by a generalized attentional control setting, but rather the momentary interaction of two opposing, active orienting processes.

Implications

The research presented in this dissertation was aimed at investigating whether attention can be reflexively oriented to complex stimuli on the basis of their potential to provide meaningful, behaviorally relevant information. Utilizing more complex, meaningful stimuli than those typically employed in the study of attentional orienting, four key insights were revealed, each of which carries a specific requirement that a comprehensive theory of attention must satisfy. First, high-level cognitive representations can influence reflexive orienting processes. This implies that there must be some neural means by which top-down information can provide input to reflexive orienting processes. Second, if behaviorally relevant stimuli serve as particularly potent cues for reflexive orienting, then there must be a neural means by which the recognition of behavioral relevance, be it implicit or explicit, can boost the control signal driving reflexive orienting processes. Third, if attention is a continuous process, then it makes little sense to conceptualize attentional orienting as discrete reflexive or volitional episodes triggered in a ballistic manner by information acquired prior to the initiation of orienting. Rather, there would have to be some means by which the relative importance of various stimuli in the visual scene could be monitored continuously and orienting initiated when a particular stimulus or location gained superiority as a behaviorally relevant stimulus. Finally, the interaction of potentially opposing reflexive and volitional processes resulting in one specific observable behavioral orienting effect requires that there be some neural basis for the interaction and resolution of multiple orienting signals. Combined, the present findings seem to fit well with the biased competition model of attention put forward by Desimone and Duncan (1995).
This model proposes that attentional selection occurs on the basis of the resolution of competition between stimuli for neural representation in the high-level object processing regions of the visual association cortex, a limited-capacity information-processing system that can effectively handle only so much information at once. Stimuli succeeding in this competition for neural representation in higher cognitive processing regions are attended; those not succeeding are left unattended. Critically, in this light, attention does not function as a spotlight that moves in space to highlight, or enhance processing at, particular locations. Rather, Desimone and Duncan (1995) propose that attention is an emergent property of the summation of biasing signals acting to promote the representation of a given stimulus in associative processing regions of cortex. According to this model, as a visual stimulus is processed, it is subject to bias at multiple levels of analysis from any neural source available. These biases may be bottom-up or top-down in nature, and can be selective on the basis of spatial location, basic stimulus features, or complex feature conjunctions. Top-down biases may include attentional control templates held in working memory dictating which types of stimuli will be relevant to the task at hand, learned importance, or novelty. Bottom-up biases may include stimulus salience or outputs from specialized neural processing systems. It is the net result of these biasing processes, however, that determines how the stimulus is ultimately represented at a high level of the visual system. The biased competition model provides a solution for several of the implications of the present research outlined above. Critically, this model makes no reference to reflexive and volitional shifts of attention. Indeed, it does not subscribe to the idea of an attentional shift at all. It does, however, provide for the interactive contribution of top-down and bottom-up signals to influence whether a given stimulus is attended. It also provides the means for high-level information or top-down signals to bias attention automatically, without willful effort on the part of the observer, a point that is particularly relevant to the present dissertation. Another essential point put forward by this model is that attention should not be viewed as a spotlight moving rapidly and serially through space, but rather as emerging from potentially slower-acting biasing signals operating in parallel on an ongoing basis. This idea provides an excellent means of accounting for the present data, which also suggest that attention is a continuous process, and that the nature of the information driving orienting responses can change over time.

Future Directions

The results of the present work open up several research questions for further investigation. One prominent outstanding question is whether other types of meaningful, complex stimuli can capture attention. The present experiments have revealed that attention can be oriented automatically in response to eye gaze cues, arrow cues, and the heading point in an optic flow field. According to the interpretations put forward here, these automatic orienting responses occur by virtue of these stimuli carrying one form of behavioral relevance or another. By this logic, other types of stimuli providing behaviorally relevant information should also garner attention.
Social information, for example, is not only provided by eye gaze, but also by body movements, head turns, and facial expressions. There is even some evidence to suggest that head turns are a better indicator of another's attentional focus than are eye gaze cues (e.g. Langton, Watt, & Bruce, 2000). Additionally, while the heading point is an important source of navigational information, there are certainly other sources. Under conditions in which the heading point is obscured, or when one is not looking straight ahead while navigating, there are several other cues used for heading perception, including the displacement direction of the nearest object in one's view (Cutting et al., 1995; Cutting et al., 1999), and the motion parallax between objects located at different distances from the observer during pursuit fixations (Cutting et al., 1992; Priest et al., 1985; Wann & Land, 2000). Indeed, the results of Chapter 3 suggested that in the complex, dynamic conditions of the real world, objects defining the path seemed to be a potent attentional cue. Future research should address the extent to which other types of stimuli in both the social and navigational domains garner attention preferentially. Further, however, there may be other behavioral domains that stand to benefit from automatic orienting to relevant stimuli. The control of manual actions for precision tasks and visually guided action in sports are two possible examples. What is more, future research should assess the extent to which automatic orienting to certain types of stimuli actually results in a behavioral advantage. Another outstanding question is the extent to which the task at hand affects how attention is oriented to various types of stimuli. The experiments presented in Chapters 4-6 revealed that attention was oriented to the heading point in an optic flow field when participants had to perform a discrimination task regarding target stimuli that were out of context with respect to the motion simulation. It would be interesting to examine how these orienting effects might be affected by the performance of a task within the context of the motion simulation. For example, one might imagine that the need to locate or track a particular object in the scene may reduce the likelihood of orienting to the heading point. Finally, future research should address the issue of interactivity. All of the experiments presented here involved the passive viewing of simulated motion. In order to assess how attention may operate in the service of behavior, it would make sense to study attentional orienting in contexts where participants actually have control over their motion through an environment. To this end, future experiments could provide interactive control to participants in simulated environments, and assess how attention is oriented under various task conditions. Even more interesting would be to utilize mobile eye tracking technology to assess patterns of overt orienting while people are actually out and about in the real world. The fact that the present investigation, by virtue of utilizing more complex stimuli, provided novel insights regarding the nature of attentional orienting suggests that further insights may be gained by examining attentional orienting under interactive or real-world conditions.

References

Abrams, R. A., & Christ, S. E. (2003). Motion onset captures attention. Psychological Science, 14(5), 427-432. Abrams, R. A., & Christ, S. E. (2005).
The onset of receding motion captures attention: Comment on franconeri and simons (2003). Perception & Psychophysics, 67(2), 219-223. Abrams, R. A., & Dobkin, R. S. (1994). Inhibition of return: Effects of attentional cueing on eye movement latencies. Journal of Experimental Psychology: Human Perception and Performance, 20(3), 467-477. Akiyama, T., Kato, M., Muramatsu, T., Saito, F., Umeda, S., & Kahima, H. (2006). Gaze but not arrows: A dissociative impairment after right superior temporal gyrus damage. Neuropsychologia, 44, 1804-1810. Albright, T. D. (1989). Centrifugal bias in the middle temporal visual area (MT) of the macaque. Visual Neuroscience, 2(2), 177-188. Alfano, P. L., & Michel, G. F. (1990). Restricting the field of view: Perceptual and performance effects. Perceptual and Motor Skills, 70(1), 35-45. Allison, T., Puce, A., & McCarthy, G. (2000). Social perception from visual cues: Role of the STS region. Trends in Cognitive Sciences, 4(7), 267-278. Andersen, R. A. (1997). Neural mechanisms of visual motion perception in primates. Neuron, 18(6), 865-872. Anderson, K. C., & Siegel, R. M. (1999). Optic flow selectivity in the anterior superior temporal polysensory area, STPa, of the behaving monkey. The Journal of Neuroscience, 19(7), 2681-2692.  234 Anllo-Vento, L., & Hillyard, S. A. (1996). Selective attention to the color and direction of moving stimuli: Electrophysiological correlates of hierarchical feature selection. Perception & Psychophysics, 58(2), 191-206. Anllo-Vento, L., Luck, S. J., & Hillyard, S. A. (1998). Spatio-temporal dynamics of attention to color: Evidence from human electrophysiology. Human Brain Mapping, 6(4), 216-238. Azzopardi, P., Jones, K. E., & Cowey, A. (1999). Uneven mapping of magnocellular and parvocellular projections from the lateral geniculate nucleus to the striate cortex in themacaque monkey. Vision Research, 39(13), 2179-2189. Bacon, W. J., & Egeth, H. E. (1997). Goal-directed guidance of attention: Evidence from conjunctive visual search. Journal of Experimental Psychology: Human Perception and Performance, 23(4), 948-961. Baizer, J. S., Ungerleider, L. G., & Desimone, R. (1991). Organization of visual inputs to the inferior temporal and posterior parietal cortex in macaques. The Journal of Neuroscience, 11(1), 168-190. Ball, K., & Sekuler, R. (1980). Human vision favors centrifugal motion. Perception, 9(3), 317-325. Ballard, D. H., Hayhoe, M. M., & Pelz, J. B. (1995). Memory representations in natural tasks. Journal of Cognitive Neuroscience, 7(1), 66-80. Bashinski, H. S., & Bacharach, V. R. (1980). Enhancement of perceptual sensitivity as the result of selectively attending to spatial locations. Perception & Psychophysics, 28(3), 241-248.  235 Becker, J. T., Huff, F. J., Bebes, R. D., Holland, A., & Boller, F. (1988). Neuropsychological function in Alzheimer's disease: Pattern of impairment and rates of progression. Archives of Neurology, 45(3), 263-268. Berger, A., & Henik, A. (2000). The endogenous modulation of IOR is nasal-temporal asymmetric. Journal of Cognitive Neuroscience, 12(3), 421-428. Bernstein, I. H., & Chu, P. K. (1973). Stimulus intensity and foreperiod effects in intersensory facilitation. Quarterly Journal of Experimental Psychology, 25(2), 171-181. Bertelson, P. (1967). The time course of preparation. Quarterly Journal of Experimental Psychology, 19(3), 272-279. Biederman, I. (1972). Perceiving real-world scenes. Science, 177(4043), 77. Birmingham, E. (2007). 
Social attention and real-world scenes: The roles of action, competition and social content. The Quarterly Journal of Experimental Psychology, 0(0), 0. Birmingham, E., Bischof, W. F., & Kingstone, A. (in press). Gaze selection in complex social scenes. Visual Cognition. Bonda, E., Petrides, M., Ostry, D., & Evans, A. (1996). Specific involvement of human parietal systems and the amygdala in the perception of biological motion. Journal of Neuroscience, 16(11), 3737-3744. Bowman, D. A., Datey, A., Ryu, Y. S., Farooq, U., & Vasnaik, O. (2002). Empirical comparison of human behavior and performance with different display devices for virtual environments. Proceedings of the Human Factors and Ergonomics Society 46th Annual Meeting.  236 Bowman, D., Gabbard, J. L., & Hix, D. (2002). A survey of usability evaluation in virtual environments:  Classification  and  comparison  of  methods.  Presence:  Teleoperators and Virtual Environments, 11(4), 404-424. Bradley, D. C., Maxwell, M., Andersen, R. A., Banks, M. S., & Shenoy, K. V. (1996). Neural mechanisms of heading perception in primate visual cortex. Science, 273(5281), 1544-1547. Brainard, D. H. (1997). The videotoolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision, 10(4), 433-436. Braun, J. (2003). Natural scenes upset the visual applecart. Trends in Cognitive Sciences, 7(1), 7-9. Broadbent, D. E. (1958). Perception and Communication: Pergamon Press New York. Busettini, C., Masson, G. S., & Miles, F. A. (1997). Radial optic flow induces eye movements with ultra-short latencies. Nature, 390(6659), 512-515. Bushnell, M. C., Goldberg, M. E., & Robinson, D. L. (1981). Behavioral enhancement of visual responses in monkey cerebral cortex. I. Modulation in posterior parietal cortex related to selective visual attention. Journal of Neurophysiology, 46(4), 755-772. Buswell, G. T. (1935). How People Look at Pictures: A Study of the Psychology of Perception in Art: University Microfilms International. Cavanagh, P. (2002). Visual self-motion perception in older adults: Implications for postural control during locomotion. Neurology Report(March), 8. Cavanagh, P., & Mather, G. (1989). Motion: The long and short of it. Spatial Vision, 4(23), 103-129.  237 Cave, K. R., & Wolfe, J. M. (1990). Modeling the role of parallel processing in visual search. Cognitive Psychology, 22(2), 225-271. Chang, S. W., & Abrams, R. A. (2004). Hand movements deviate toward distracters in the absence of response competition. The Journal of General Psychology, 131(4), 328-344. Chapman, C., Hoag, R., & Giaschi, D. (2004). The effect of disrupting the human magnocellular pathway on global motion perception. Vision Research, 44, 25512557. Chapman, G. J., & Hollands, M. A. (2006). Evidence for a link between changes to gaze behaviour and risk of falling in older adults during adaptive locomotion. Gait & Posture, 24(3), 288-294. Chaudhuri, A. (1990). Modulation of the motion aftereffect by selective attention. Nature, 344(6261), 60-62. Cheal, M., & Lyon, D. R. (1991). Central and peripheral cueing of forced-choice discrimination. Quarterly Journal of Experimental Psychology, 43A, 859-880. Cheal, M., Chastain, G., & Lyon, D. R. (1998). Inhibition of return in visual identification tasks. Visual Cognition, 5(3), 365-388. Cheng, K., Fujita, H., Kanno, I., Miura, S., & Tanaka, K. (1995). Human cortical regions activated by wide-field visual motion: A PET study. Journal of Neurophysiology, 74(1), 413-427. Cherry, C. (1953). 
Some experiments on the reception of speech with one and with two ears. Journal of the Acoustical Society of America, 25(5), 975-979.  238 Chun, M. M. (2000). On the functional role of implicit visual memory for the adaptive deployment of attention across scenes. Visual Cognition, 7(1), 65-81. Clark, V. P., & Hillyard, S. A. (1996). Spatial selective attention affects early extrastriate but not striate components of the visual evoked potential. Journal of Cognitive Neuroscience, 8(5), 387-402. Cleland, B. G., Levick, W. R., & Sanderson, K. J. (1973). Properties of sustained and transient ganglion cells in the cat retina. The Journal of Physiology, 228(3), 649680. Colby, C. L., & Goldberg, M. E. (1999). Space and attention in parietal cortex. Annual Review of Neuroscience, 22(1), 319-349. Colby, C. L., Duhamel, J. R., & Goldberg, M. E. (1996). Visual, presaccadic and cognitive activation of single neurons in monkey lateral intraparietal area. Journal of Neurophysiology, 76(5), 2841-2852. Cole, G. G., Kentridge, R. W., & Heywood, C. A. (2005). Object onset and parvocellular guidance of attentional allocation. Psychological Science, 16(1), 270-274. Corbetta, M., & Shulman, G. L. (2002). Control of goal-directed and stimulus-driven attention in the brain. Nature Reviews Neuroscience, 3(1), 201-215. Corbetta, M., Miezen, F. M., Dobmeyer, S., Shulman, G. L., & Petersen, S. E. (1991). Selective and divided attention during visual discriminations of shape, color, ad speed: Functional anatomy by positron emission tomography. Journal of Neuroscience, 11(8), 2383-2402. Corbetta, M., Miezin, F. M., Schulman, G. L., & Petersen, S. E. (1993). A PET study of visuospatial attention. The Journal of Neuroscience, 13(3), 1202-1226.  239 Crossman, E. (1953). Entropy and choice reaction time: The effect of frequency imbalance on choice-response. The Quarterly Journal of Experimental Psychology, 5, 41-51. Crowell, J. A., Banks, M. S., Shenoy, K. V., & Andersen, R. A. (1998). Visual selfmotion perception during head turns. Nature Neuroscience, 1(8), 732-737. Crundall, D., Shenton, C., & Underwood, G. (2004). Eye movements during intentional car following. Perception, 33(8), 975-986. Culham, J. C., Brandt, S. A., Cavanagh, P., Kanwisher, N. G., Dale, A. M., & Tootell, R. B. (1998). Cortical fMRI activation produced by attention to moving targets. Journal of Neurophysiology, 80(5), 2657-2670. Culham, J., He, S., Dukelow, S., & Verstraten, F. A. J. (2001). Visual motion and the human brain: What has neuroimaging told us? Acta Psychologica, 107(1-3), 6994. Cutting, J. E. (1996). Wayfinding from multiple sources of local information in retinal flow. Journal of Experimental Psychology: Human Perception and Performance, 22(5), 1299-1313. Cutting, J. E., & Readinger, W. O. (2002). Walking, looking to the side, and taking curved paths. Perception & Psychophysics, 64(3), 415-425. Cutting, J. E., F., W. R., M., F., & Baumberger, B. (1999). Human heading judgments and object-based motion information. Vision Research, 39(6), 1079-1105. Cutting, J. E., Vishton, P. M., & Braren, P. A. (1995). How we avoid collisions with stationary and moving obstacles. Psychological Review, 102(4), 627-651.  240 Daffner, K. R., Mesulam, M. M., Scinto, L. F. M., Acar, D., Calvo, V., Faust, R., et al. (2000). The central role of the prefrontal cortex in directing attention to novel events. Brain, 123(5), 927-939. Danzinger, S., & Kingstone, A. (1999). Unmasking the inhibition of return phenomenon. 
Perception & Psychophysics, 61(6), 1024-1037. Danzinger, S., Kingstone, A., & Snyder, J. J. (1998). Inhibition of return to successively stimulated locations in a sequential visual search paradigm. Journal of Experimental Psychology: Human Perception and Performance, 24, 1467-1475. De Graef, P., Christiaens, D., & d'Ydewalle, G. (1990). Perceptual effects of scene context on object identification. Psychological Research, 52(4), 317-329. Deaner, R. O., & Platt, M. L. (2003). Reflexive social attention in monkeys and humans. Current Biology, 13(18), 1609-1613. Decety, J. (1999). What neuroimaging tells us about the division of labour in the visual system. Psyche, 5(9). Desimone, R., & Duncan, J. (1995). Neural mechanisms of selective visual attention. Annual Review of Neuroscience, 18, 193-222. Di Russo, F., Martínez, A., & Hillyard, S. A. (2003). Source analysis of event-related cortical activity during visuo-spatial attention. Cerebral Cortex, 13(5), 486-499. Di Russo, F., Martinez, A., Sereno, M. I., Pitzalis, S., & Hillyard, S. A. (2002). Cortical sources of the early components of the visual evoked potential. Human Brain Mapping, 15(2), 95-111. Donchin, E. (1981). Presidential address, 1980: Blackwell Synergy.  241 Dorris, M. C., Klein, R. M., Everling, S., & Munoz, D. P. (2002). Contribution of the primate superior colliculus to inhibition of return. Journal of Cognitive Neuroscience, 14(8), 1256-1263. Downing, C. J. (1988). Expectancy and visual-spatial attention: Effects on perceptual quality. Journal of Experimental Psychology: Human Perception  and  Performance, 14(2), 188-202. Dreze, X., & Hussherr, F. X. (2003). Internet advertising: Is anybody watching? Journal of Interactive Marketing, 17(4), 8-23. Driver, J. (1999). Gaze perception triggers reflexive visuospatial orienting. Visual Cognition, 6(5), 509-540. Driver, J., Davis, G., Ricciardelli, P., Kidd, P., Maxwell, E., & BaronCohen, S. (1999). Shared attention and the social brain: Gaze perception triggers automatic visuospatial orienting in adults. Visual Cognition, 6(5), 509-540. Duffy, C. J., & Wurtz, R. H. (1991). Sensitivity of MST neurons to optic flow stimuli i: A continuum of response selectivity to large-field stimuli. Journal of Neurophysiology, 65(6), 1329-1345. Duffy, C. J., & Wurtz, R. H. (1996). Optic flow, posture and the dorsal visual pathway. In T. Ono, B. McNaughton, S. Molotchnikoff, E. Rolls & H. Nishijo (Eds.), Perception, Memory And Emotion: Frontier In Neuroscience (pp. 63-77). Oxford, NY: Elsevier Science. Duncan-Johnson, C. C., & Donchin, E. (1982). The P300 component of the event-related brain potential as an index of information processing. Biological Psychology, 14(1-2), 1-52.  242 Dupont, P., Orban, G. A., De Bruyn, B., Verbruggen, A., & Mortelmans, L. (1994). Many areas in the human brain respond to visual motion. Journal of Neurophysiology, 72(3), 1420-1424. Eason, R. G. (1981). Visual evoked potential correlates of early neural filtering during selective attention. Bulletin of the Psychonomic Society, 18, 203-206. Eastwood, J. D., Smilek, D., & Merikle, P. M. (2001). Differential attentional guidance by unattended faces expressing positive and negative emotion. Perception & Psychophysics, 63(6), 1004-1013. Edwards, M., & Badcock, D. R. (1993). Asymmetries in the sensitivity to motion in depth: A centripetal bias. Perception, 22(9), 1013-1023. Enroth-Cugell, C., & Robson, J. G. (1966). The contrast sensitivity of retinal ganglion cells of the cat. The Journal of Physiology, 187(3), 517. 
Eriksen, C. W., & Yeh, Y. Y. (1985). Allocation of attention in the visual field. Journal of Experimental Psychology: Human Perception and Performance, 11(5), 583597. Fajen, B., & Warren, W. H. (2000). Go with the flow. Trends in Cognitive Sciences, 4(10), 368-369. Felleman, D. J., & Van Essen, D. C. (1991). Distributed hierarchical processing in the primate cerebral cortex. Cerebral Cortex, 1(1), 1. Fernandez-Duque, D., & Johnson, M. L. (2002). Cause and effect theories of attention: The role of conceptual metaphors. Review of General Psychology, 6(2), 153-165. Findlay, J. M., & Gilchrist, I. D. (2003). Active Vision: The Psychology of Looking and Seeing: Oxford University Press.  243 Fischer, M. H., & Hoellen, N. (2004). Space and object based attention depend on motor intention. Journal of General Psychology, 131(4), 365-377. Fischer, M. H., Castel, A. D., Dodd, M. D., & Pratt, J. (2003). Perceiving numbers causes shifts in spatial shifts of attention. Nature Neuroscience, 6(6), 555-556. Folk, C. L., & Remington, R. W. (1998). Selectivity in distraction by irrelevant feature singletons: Evidence for two forms of attentional capture. Journal of Experimental Psychology: Human Perception and Performance, 24, 847-858. Folk, C. L., Remington, R. W., & Johnston, J. C. (1992). Involuntary covert orienting is contingent on attentional control settings. Journal of Experimental Psychology: Human Perception and Performance, 18(4), 1030-1044. Forbes, K., & Klein, R. (1996). The magnitude of the fixation offset effect with endogenously and exogenously controlled saccades. Journal of Cognitive Neuroscience, 8, 344-352. Franconeri, S. L., & Simons, D. J. (2003). Moving and looming stimuli capture attention. Perception & Psychophysics, 65(7), 999-1010. Franconeri, S. L., & Simons, D. J. (2005). The dynamic events that capture visual attention: A reply to abrams ad christ (2005). Perception & Psychophysics, 67(6), 962-966. Franconeri, S. L., Hollingworth, A., & Simons, D. J. (2005). Research article do new objects capture attention? Psychological Science, 16(4), 275. Frenz, H., Bremmer, F., & Lappe, M. (2003). Discrimination of travel distances from 'situated' optic flow. Vision Research, 43, 2173-2183.  244 Friesen, C. K., & Kingstone, A. (1998). The eyes have it!: Reflexive orienting is triggered by nonpredictive gaze. Psychonomic Bulletin & Review, 5(3), 490-495. Friesen, C. K., & Kingstone, A. (2003). Abrupt onsets and gaze direction trigger independent reflexive attentional effects. Cognition, 87(1), B1-B10. Friesen, C. K., Ristic, J., & Kingstone, A. (2004). Attentional effects of counterpredictive gaze and arrow cues. Journal of Experimental Psychology: Human Perception & Performance, 30(2), 319-329. Frischen, A., & Tipper, S. P. (2004). Orienting attention via observed gaze shift evokes longer term inhibitory effects: Implications for social interactions, attention, and memory. Journal of Experimental Psychology: General, 133(4), 516-533. Fu, S., Greenwood, P. M., & Parasuraman, R. (2005). Brain mechanisms of involuntary visuospatial attention: An event-related potential study. Human Brain Mapping, 25(4), 378. Fukuda, Y., & Stone, J. (1974). Retinal distribution and central projections of y-, x-, and w-cells of the cat's retina. Journal of Neurophysiology, 37(4), 749-772. Georgeson, M. A., & Harris, M. G. (1978). Apparent foveofugal drift of counterphase gratings. Perception, 7(5), 527-536. Getty, D. J., Swets, J. A., Pickett, R. M., & Gonthier, D. (1995). 
System operator response to warnings of danger: A laboratory investigation of the effects of the predictive value of a warning in human response time. Journal of Experimental Psychology: Applied, 1(1), 19-33.  245 Giaschi, D., Zwicker, A., Au Young, S., & Bjornson, B. The role of cortical area v5/mt+ in speed-tuned directional anisotropies in global motion perception. Vision Research, 47(7), 887-898. Gibson, J. J. (1950). Perception of the Visual World. Boston: Houghton Mifflin. Gibson, J. J. (1979). The Ecological Approach to Visual Perception. Boston: Houghton Mifflin. Goldstein, E. B. (1981). The ecology of j. J. Gibson's perception. Leonardo, 14(3), 191195. Gonzalez, C. M. G., Clark, V. P., Fan, S., Luck, S. J., & Hillyard, S. A. (1994). Sources of attention-sensitive visual event-related potentials. Brain Topography, 7(1), 4151. Goodale, M. A., & Milner, A. D. (1992). Separate visual pathways for perception and action. Trends in Neurosciences, 15(1), 20-25. Gray, R. (2000). Attentional modulation of motion-in-depth processing. Vision Research, 40(9), 1041-1050. Gray, R., & Regan, D. (1998). Accuracy of estimating time to collision using binocular and monocular information. Vision Research, 38(4), 499-512. Grill-Spector, K., Kushnir, T., Edelman, S., Itzchak, Y., & Malach, R. (1998). Cueinvariant activation in object-related areas of the human occipital lobe. Neuron, 21(1), 191-202. Gros, B. L., Blake, R., & Hiris, E. (1998). Anisotropies in visual motion perception: A fresh look. Journal of the Optical Society of America A, 15(8), 2003-2011.  246 Grosbras, M. H., Laird, A. R., & Paus, T. (2005). Cortical regions involved in eye movements, shifts of attention, and gaze perception. Human Brain Mapping, 25(1), 140-154. Handy, T. C., & Khoe, W. (2005). Attention and sensory gain control--a peripheral visual process? Journal of Cognitive Neuroscience, 17(12), 1936-1949. Handy, T. C., & Mangun, G. R. (2000). Attention and spatial selection: Electrophysiological evidence for modulation by perceptual load. Perception & Psychophysics, 62(1), 175-186. Handy, T. C., Grafton, S. T., Shroff, N. M., Ketay, S. B., & Gazzaniga, M. S. (2003). Graspable objects grab attention when the potential for action is recognized. Nature Neuroscience, 6(4), 421-427. Handy, T. C., Green, V., Klein, R., & Mangun, G. R. (2001). Combined expectancies: Erps reveal the early benefits of spatial attention that are obscured by reaction time measures. Journal of Experimental Psychology: Human Perception & Performance, 27(2), 303-317. Handy, T. C., Jha, A. P., & Mangun, G. R. (1999). Promoting novelty in vision: Inhibition of return modulates perceptual-level processing. Psychological Science, 10(2), 157-161. Handy, T. C., Schaich Borg, J., Turk, D. J., Tipper, C. M., Grafton, S. T., & Gazzaniga, M. S. (2005). Placing a tool in the spotlight: Spatial attention modulates visuomotor responses in cortex. NeuroImage, 26(1), 266-276.  247 Handy, T. C., Soltani, M., & Mangun, G. R. (2001). Perceptual load and visuocortical processing: Event-related potentials reveal sensory-level selection. Psychological Science, 12(3), 213-218. Hansen, C. H., & Hansen, R. D. (1988). Finding the face in the crowd: An anger superiority effect. Journal of Personality and Social Psychology, 54(6), 917-924. Harris, J. M., & Rogers, B. J. (1999). Going against the flow. Trends in Cognitive Sciences, 3(12), 449-450. Hayhoe, M. M., Ballard, D. H., Triesch, J., Shinoda, H., Aivar, P., & Sullivan, B. (2002). Vision in natural and virtual environments. 
Proceedings of the symposium on Eye tracking research & applications, 7-13. Hayhoe, M., & Ballard, D. (2005). Eye movements in natural behavior. Trends in Cognitive Sciences, 9(4), 188-194. Heinze, H. J., Mangun, G. R., Burchert, W., Hinrichs, H., Scholz, M., Munte, T. F., et al. (1994). Combined spatial and temporal imaging of spatial selective attention in humans. Nature, 392(6506), 543-546. Henderson, J. M., & Hollingworth, A. (1999). High-level scene perception. Annual Review of Psychology, 50(1), 243-271. Henderson, J. M., Brockmole, J. R., Castelhano, M. S., & Mack, M. (2006). Visual saliency does not account for eye movements during visual search in real-world scenes. Eye movements research: insights into mind and brain, Elsevier. Henderson, J. M., Weeks, P. A., & Hollingworth, A. (1999). The effects of semantic consistency on eye movements during complex scene viewing. Journal of Experimental Psychology: Human Perception and Performance, 25(1), 210-228.  248 Hick, W. E. (1952). On the rate of gain of information. Quarterly Journal of Experimental Psychology, 4(1), 11-46. Hietanen, J. K., Nummenmaa, L., Nyman, M. J., Parkkola, R., & Hämäläinen, H. (2006). Automatic attention orienting by social and symbolic cues activates different neural networks: An fMRI study. NeuroImage, 33(1), 406-413. Hillstrom, A. P., & Yantis, S. (1994). Visual motion and attentional capture. Perception & Psychophysics, 55(4), 399-411. Hillyard, S. A., & Anllo-Vento, L. (1998). Event-related brain potentials in the study of visual selective attention. Proceedings of the National Academy of Sciences, 95(3), 781-787. Hillyard, S. A., & Picton, T. W. (1987). Electrophysiology of cognition. In V. Mountcastle (Ed.), Handbook Of Physiology: Section 1: The Nervous System (Vol. 5: Higher Brain Functions, pp. 519-584). Maryland: Betmesda. Hoffman, E. A., & Haxby, J. V. (2000). Distinct representations of eye gaze and identity in the distributed human neural system for face perception. Nature Neuroscience, 3(1), 80-84. Hoffman, J. E., & Subramaniam, B. (1995). The role of visual attention in saccadic eye movements. Perception & Psychophysics, 57(6), 787-795. Hoffman, J. E., Houck, M. R., MacMillan 3rd, F. W., Simons, R. F., & Oatman, L. C. (1985). Event-related potentials elicited by automatic targets: A dual-task analysis. Journal of Experimental Psychology: Human Perception and Performance, 11(1), 50-61.  249 Holender, D., & Bertelson, P. (1975). Selective preparation and time uncertainty. Acta Psychologica, 39(3), 193-203. Hollands, M. A., Marple-Horvat, D. E., Henkes, S., & Rowan, A. K. (1995). Human eye movements during visually guided stepping. Journal of Motor Behavior, 27(2), 155-163. Hollands, M. A., Patla, A. E., & Vickers, J. N. (2002). "Look where you're going!": Gaze behavior associated with maintaining and changing the direction of lococmotion. Experimental Brain Research, 143(2), 221-230. Hommel, B., Pratt, J., Colzato, L., & Godijn, R. (2001). Symbolic control of visual attention. Psychological Science, 12(5), 360-365. Hooker, C. I., Paller, K. A., Gitelman, D. R., Parrish, T. B., Mesulam, M. M., & Reber, P. J. (2003). Brain networks for analyzing eye gaze. Cognitive Brain Research, 17(2), 406-418. Hopfinger, J. B., & Mangun, G. R. (1998). Reflexive attention modulates processing of visual stimuli in human extrastriate cortex. Psychological Science, 9(6), 441-446. Hopfinger, J. B., & Mangun, G. R. (2001). Tracking the influence of reflexive attention on sensory and cognitive processing. 
Cognitive, Affective, & Behavioral Neuroscience, 1(1), 56-65. Hopfinger, J. B., & Ries, A. J. (2005). Automatic versus contingent mechanisms of sensory-driven neural biasing and reflexive attention. Journal of Cognitive Neuroscience, 17(8), 1341-1352. Hopfinger, J. B., & West, V. M. (2006). Interactions between endogenous and exogenous attention on cortical visual processing. NeuroImage, 31(2), 774-789.  250 Hopfinger, J. B., Buonocore, M. H., & Mangun, G. R. (2000). The neural mechanisms of top-down attentional control. Nature Neuroscience, 3(3), 284-291. Hubel, D. H., & Wiesel, T. N. (1977). Functional architecture of macaque monkey visual cortex. Proceedings of the Royal Society of London B, 198, 1-59. Hunt, A. R. (2007). The effect of emotional faces on eye movements and attention. Visual Cognition, 15(5), 513-531. Hyman, R. (1953). Stimulus information as a determinant of reaction time. Journal of Experimental Psychology, 45(3), 188-196. Irwin, D. E., & Zelinsky, G. J. (2002). Eye movements and scene perception: Memory for things observed. Perception & Psychophysics, 64(6), 882-895. Irwin, D. E., Colcombe, A. M., Kramer, A. F., & Hahn, S. (2000). Attentional and oculomotor capture by onset, luminance and color singletons. Vision Research, 40(10-12), 1443-1458. Itti, L., & Koch, C. (2000). A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research, 40(10-12), 1489-1506. James, W. (1890). The Principles of Psychology. Boston: Harvard University Press. Janzen, G., & van Turennout, M. (2004). Selective neural representation of objects relevant for navigation. Nature Neuroscience, 7(6), 673-677. Jeannerod, M. (1999). A dichotomous visual brain? Psyche, 5(25). Johnson Jr, R. (1993). On the neural generators of the P300 component of the eventrelated potential. Psychophysiology, 30(1), 90-97. Jonides, J. (1981). Voluntary versus automatic control over the mind's eye's movement. In J. Long & A. Baddely (Eds.), Attention & Performance (Vol. IX, pp. 187-203).  251 Jonides, J., & Yantis, S. (1988). Uniqueness of abrupt visual onset in capturing attention. Perception & Psychophysics, 43(4), 346-354. Josephson, S., & Holmes, M. E. (2002). Visual attention to repeated internet images: Testing the scanpath theory on the world wide web. Proceedings of the symposium on Eye tracking research & applications, 43-49. Judd, A., Sim, J., Cho, J., von Muhlenen, A., & Lleras, A. (2004). Motion perception, awareness and attention effects with looming motion. Journal of Vision, 4(8), 608a. Kavcic, V., & Duffy, C. J. (2003). Attentional dynamics and visual perception: Mechanisms of spatial disorientation in Alzheimer's disease. Brain, 126(5), 11731181. Kavcic, V., Fernandez, R., Logan, D., & Duffy, C. J. (2006). Neurophysiological and perceptual correlates of navigational impairment in Alzheimer's disease. Brain, 129(3), 736-746. Kern, R., Petereit, C., & Egelhaaf, M. (2001). Neural processing of naturalistic optic flow. The Journal of Neuroscience, 21(8), RC139-RC143. Khoe, W., Mitchell, J. F., Reynolds, J. H., & Hillyard, S. A. (2005). Exogenous attentional selection of transparent superimposed surfaces modulates early eventrelated potentials. Vision Research, 45(24), 3004-3014. Kingstone, A. (1992). Combining expectancies. The Quarterly Journal of Experimental Psychology, 44(1), 69. Kingstone, A., & Pratt, J. (1999). Inhibition of return is composed of attentional and occulomotor processes. Perception & Psychophysics, 61(6), 1046-1054.  252 Kingstone, A., Friesen, C. 
K., & Gazzaniga, M. (2000). Reflexive joint attention depends on lateralized cortical connections. Psychological Science, 11(2), 159-166. Kingstone, A., Tipper, C., Ristic, J., & Ngan, E. (2004). The eyes have it!: An fMRI investigation. Brain and Cognition, 55, 269-271. Kirschen, M. P., Kahana, M. J., Sekuler, R., & Burack, B. (2000). Optic flow helps humans learn to navigate through synthetic environments. Perception, 29(7), 801818. Klein, R. M. (1988). Inhibitory tagging system facilitates visual search. Nature, 334(6181), 430-431. Klein, R. M. (1994). Perceptual-motor expectancies interact with covert visual orienting under conditions of endogenous but not exogenous control. Canadian Journal of Experimental Psychology, 48(2), 167-181. Klein, R. M. (2000). Inhibition of return. Trends in Cognitive Sciences, 4(4), 138-147. Klein, R. M., & Kerr, B. (1974). Visual signal detection and the locus of foreperiod effects. Memory & Cognition, 2(3), 431-435. Klein, R. M., & Pontefract, A. (1994). Does oculomotor readiness mediate cognitive control of visual attention - revisited! In C. Umilta & M. Moscovitch (Eds.), Attention And Performance (Vol. XV, pp. 333-350). Cambridge, MA.: MIT Press. Klein, R. M., Kingstone, A., & Pontefract, A. (1993). Orienting of visual attention. In K. Rayner (Ed.), Eye Movements And Visual Cognition: Scene Perception And Reading (pp. . 46-65). New York: Springer-Verlag. Koenderink, J. J. (1986). Optic flow. Vision Research, 26(1), 161-180.  253 Kramer, A. F., & Hahn, S. (1995). Splitting the beam: Distribution of attention over noncontiguous regions of the visual field. Psychological Science, 6(6), 381-387. Lambert, A., & Hockey, R. (1991). Peripheral visual changes and spatial attention. Acta Psychologica, 76(2), 149-163. Lamme, V. A. F. (2000). Neural mechanisms of visual awareness: A linking proposition. Brain and Mind, 1(3), 385-406. Land, M. F. (1997). The knowledge base of the oculomotor system. Philosophical Transactions: Biological Sciences, 352(1358), 1231-1239. Land, M. F., & Lee, D. N. (1994). Where we look when we steer. Nature, 369(6483), 742-744. Land, M. F., & McLeod, P. (2000). From eye movements to actions: How batsmen hit the ball. Nature Neuroscience, 3, 1340-1345. Langdon, R., & Smith, P. (2005). Spatial cueing by social versus nonsocial directional signals. Visual Cognition, 12(8), 1497-1527. Langton, S. R. H., & Bruce, V. (1999). Reflexive visual orienting in response to the social attention of others. Visual Cognition, 6(5), 541-568. Langton, S. R. H., Watt, R. J., & Bruce, V. (2000). Do the eyes have it? Cues to the direction of social attention. Trends in Cognitive Sciences, 4(2), 50-58. Lapointe, J. F., & Vinson, N. G. (2002). Effects of joystick mapping and field-of-view on human performance in virtual walkthroughs. The First International Symposium on 3D Data Processing Visualization and Transmission, 490–493. Lappe, M., Bremmer, F., & van ben Berg, A. V. (1999). Perception of self-motion from visual flow. Trends in Cognitive Sciences, 3(9), 329-336.  254 Lappe, M., Bremmer, F., Pekel, M., Thiele, A., & Hoffmann, K. (1996). Optic flow processing in monkey STS: A theoretical and experimental approach. The Journal of Neuroscience, 16(19), 6265-6285. Lappe, M., Pekel, M., & Hoffmann, K. P. (1998). Optokinetic eye movements elicited by radial optic flow in the macaque monkey. Journal of Neurophysiology, 79(3), 1461-1480. Laurent, M., & Thomson, J. A. (1991). Anticipation and control in visually-guided locomotion. 
International Journal of Sport Psychology, 22(3), 4. Law, M. B., Pratt, J., & Abrams, R. A. (1995). Color-based inhibition of return. Perception & Psychophysics, 57(3), 402-408. Lee, D. N. (1976). A theory of visual control of breaking based on information about time-to-collision. Perception, 5(4), 437-459. Lee, D. N., Georgopoulos, A. P., Clark, M. J. O., Craig, C. M., & Port, N. L. (2001). Guiding contact by coupling the taus of gaps. Experimental Brain Research, 139(2), 151-159. Lee, D. N., Lishman, J., & Thompson, J. A. (1982). Regulation of gait in long jumping. Journal of Experimental Psychology: Human Perception and Performance, 8(3), 448-459. Lee, D. N., Young, D. S., Reddish, P. E., Lough, S., & Clayton, T. M. H. (1983). Visual timing in hitting an accelerating ball. Quarterly Journal of Experimental Psychology, 35A(2), 333-346.  255 Leghari, M. A., Tipper, C. M., & Handy, T. C. (in preparation). Retinotopic variance in implicit visuomotor responses: A bias for fixation and the lower right visual quadrant. Lessels, S., & Ruddle, R. A. (2004). Changes in navigational behaviour produced by a wide field of view and a high fidelity visual scene. Proceedings of the 10th Eurographics Symposium on Virtual Environments (EGVE’04, 71–78. Lewis, C. F., & McBeath, M. K. (2004). Bias to experience approaching motion in a three-dimensional virtual environment. Perception, 33(3), 259-276. Lewis, J. W., Beauchamp, M. S., & DeYoe, E. A. (2000). A comparison of visual and auditory motion processing in human cortex. Cerebral Cortex, 10(9), 873-888. Liu, T., Pestilli, F., & Carrasco, M. (2005). Transient attention enhances perceptual performance and fMRI response in human visual cortex. Neuron, 45(3), 469-477. Livingstone, M., & Hubel, D. (1988). Segregation of form, color, movement, and depth: Anatomy, physiology, and perception. Science, 240(4853), 740-749. Loftus, G. R., & Mackworth, N. H. (1978). Cognitive determinants of fixation location during picture viewing. Journal of Experimental Psychology: Human Perception and Performance, 4(4), 565-572. Lohse, G. L. (1997). Consumer eye movement patterns on yellow pages advertising. Journal of Advertising, 26(1), 61. Longuet-Higgins, H. C., & Prazdny, K. (1980). The interpretation of moving retinal images. Proceedings of the Royal Society of London, B, 208(1173), 385-397.  256 Loomis, J. M., & Beall, A. C. (1998). Visually-controlled locomotion: Its dependence on optic flow, three-dimensional space perception, and cognition. Ecological Psychology, 10(3-4), 271-285. Loomis, J. M., Blascovich, J. J., & Beall, A. C. (1999). Immersive virtual environment technology as a basic research tool in psychology. Behavior Research Methods, Instruments, & Computers, 31(4), 557-564. Loomis, J. M., Da Silva, J. A., Fujita, N., & Fukusima, S. S. (1992). Visual space perception and visually directed action. Journal of Experimental Psychology: Human Perception and Performance, 18(4), 906-921. Luck, S. J. (1995). Multiple mechanisms of visual-spatial attention: Recent evidence from human electrophysiology. Behavioral Brain Research, 71(1-2), 113-123. Luck, S. J., & Ford, M. A. (1998). On the role of selective attention in visual perception. Proceedings of the National Academy of Sciences USA, 95(3), 825-830. Luck, S. J., Chelazzi, L., Hillyard, S. A., & Desimone, R. (1997). Neural mechanisms of spatial selective attention in areas V1, v2, and V4 of macaque visual cortex. Journal of Neurophysiology, 77(1), 24-42. Luck, S. J., Hillyard, S. A., Mouloua, M., Woldorff, M. 
G., Clark, V. P., & Hawkins, H. L. (1994). Effects of spatial cuing on luminance detectability: Psychophysical and electrophysiological evidence for early selection. Journal of Experimental Psychology: Human Perception and Performance, 20(4), 887-904. Ludwig, C. J., & Gilchrist, I. D. (2002). Measuring saccade curvature: A curve-fitting approach. Behavior Research Methods, Instruments & Computers, 34(4), 618624.  257 Lupianez, J., & Milliken, B. (1999). Inhibition of return and the attentional set for integrating vs. Differentiating information. Journal of General Psychology, Theme Issue on Visual Attention, Part 2, 126, 392-418. Lupianez, J., Milliken, B., Solano, C., Weaver, B., & Tipper, S. P. (2001). On the strategic modulation of the time course of facilitation and inhibition of return. Quarterly Journal of Experimental Psychology, 54A(3), 753-773. Mack, A., & Rock, I. (1998). Inattentional Blindness. Cambridge, MA: MIT Press. Mangun, G. R., & Hillyard, S. A. (1991). Modulation of sensory-evoked brain potentials provide evidence for changes in perceptual processing during visual-spatial priming. Journal of Experimental Psychology: Human Perception and Performance, 17(4), 1057-1074. Mangun, G. R., Hillyard, S. A., & Luck, S. J. (1993). Electrocortical substrates of visual selective attention. In Attention And Performance (Vol. 14, pp. 219-243). Mangun, G. R., Hopfinger, J. B., Kussmaul, C. L., Fletcher, E. M., & Heinze, H. J. (1997). Covariations in ERP and PET measures of spatial selective attention in human extrastriate visual cortex. Human Brain Mapping, 5(4), 273-279. Mannan, S., Ruddock, K. H., & Wooding, D. S. (1995). Automatic control of saccadic eye movements made in visual inspection of briefly presented 2-D images. Spatial Vision, 9(3), 363-386. Marcar, V. L., Zihl, J., & Cowey, A. (1997). Comparing the visual deficits of a motion blind patient with the visual deficits of monkeys with area MT removed. Neuropsychologia, 35(11), 1459-1465.  258 Marrocco, R. T. (1978). Saccades induced by stimulation of the frontal eye fields: Interaction with voluntary and reflexive eye movements. Brain Research, 146(1), 23-34. Martínez, A., Anllo-Vento, L., Sereno, M. I., Frank, L. R., Buxton, R. B., Dubowitz, D. J., et al. (1999). Involvement of striate and extrastriate visual cortical areas in spatial attention. Nature Neuroscience, 2, 364-369. Masson, M. E. J. (1991). Constraints on the interaction between context and stimulus information, 13th Conference Of The Cognitive Science Society (pp. 540–545). Chicago. Mayer, A. R., Dorflinger, J. M., Rao, S. M., & Seidenberg, M. (2004). Neural networks underlying endogenous and exogenous visual-spatial orienting. NeuroImage, 23(2), 534-541. Maylor, E. A., & Hockey, R. (1985). Inhibitory component of externally controlled covert orienting in visual space. Journal of Experimental Psychology: Human Perception and Performance, 11(6), 777-787. McAuliffe, J., Pratt, J., & O'Donnell. (2001). Examining location-based and object-based components of inhibition of return in static displays. Perception & Psychophysics, 63(6), 1072-1082. Merkel, J. (1885). Die zeitlichen verhältnisse der willensthätigkeit (the temporal relations of the actions of will, or the timing of voluntary action). Philosophische Studien (Philosophical Studies), 2, 73–127.  259 Mikami, A., Newsome, W. T., & Wurtz, R. H. (1986). Motion selectivity in macaque visual cortex: II. Spatio-temporal range of directional interactions in MT and V1. Journal of Neurophysiology, 55(6), 1328-1339. 
Mistlin, A. J., Chitty, A. J., Head, A. S., Potter, D. D., Broennimann, R., Milner, A. D., et al. (1985). Visual analysis of body movements by neurones in the temporal cortex of the macaque monkey: A preliminary report. Behavioral Brain Research, 16(23), 153-170. Miura, T., Shinohara, K., & Kanda, K. (2002). Shift of attention in depth in a semirealistic setting. Japanese Psychological Research, 44(3), 124. Mondor, T. A. (1999). Predictability of the cue-target relation and the time-course of auditory inhibition of return. Perception & Psychophysics, 61(8), 1501-1509. Moran, J., & Desimone, R. (1985). Selective attention gates visual processing in the extrastriate cortex. Science, 229(4715), 782. Morrone, M. C., Tosetti, M., Montanaro, D., Fiorentini, A., Cioni, G., & Burr, D. C. (2000). A cortical area that responds specifically to optic flow, revealed by fMRI. Nature Neuroscience, 3, 1322-1328. Motter, B. C. (1993). Focal attention produces spatially selective processing in visual cortical areas V1, v2, and V4 in the presence of competing stimuli. Journal of Neurophysiology, 70(3), 909-919. Mulert, C., Jager, L., Schmitt, R., Bussfeld, P., Pogarell, O., Moller, H. J., et al. (2004). Integration of fMRI and simultaneous EEG: Towards a comprehensive understanding of localization and time-course of brain activity in target detection. NeuroImage, 22(1), 83–94.  260 Muller, H. J., & Findlay, J. M. (1988). The effect of visual attention on peripheral discrimination thresholds in single and multiple element displays. Acta Psychologica, 69(2), 129-155. Muller, H. J., & Rabbit, P. M. A. (1989). Reflexive and voluntary orienting of visual attention: Time course of activation and resistance to interruption. Journal of Experimental Psychology: Human Perception and Performance, 15(2), 315-330. Muller, H. J., & von Muhlenen, A. (1996). Attentional tracking and inhibition of return in dynamic displays. Perception & Psychophysics, 58(2), 224-249. Muller, J. R., Philiastides, M. G., & Newsome, W. T. (2005). Microstimulation of the superior colliculus focuses attention without moving the eyes. Proceedings of the National Academy of Sciences, 102(3), 524-529. Naatanen, R. (1970). The diminishing time-uncertainty with the lapse of time after the warning signal in reaction time experiments with varying foreperiods. Acta Psychologica, 34(4), 399-419. Naatanen, R. (1992). Attention and Brain Function: Erlbaum Hillsdale, NJ. Nickerson, R. S., Collins, A. M., & Markowitz, J. (1969). Effects of uncertain warning signals on reaction time. Perception & Psychophysics, 5(2), 107-112. Nielsen, M., & Olsen, O. F. (1998). The structure of the optic flow field. In H. Burkhardt & B. Neumann (Eds.), Computer Vision (Vol. 2, pp. 271-287.). Niemann, T., Lappe, M., Buscher, A., & Hoffman, K. P. (1999). Ocular responses to radial optic flow and single accelerated targets in humans. Vision Research, 39(7), 1359-1371.  261 O'Brien, H. L., Tetewsky, S. J., Avery, L. M., Cushman, L. A., Makous, W., & Duffy, C. J. (2001). Visual mechanisms of spatial disorientation in Alzheimer's disease. Cerebral Cortex, 11(11), 1085-1092. O'Craven, K. M., Rosen, B. R., Kwong, K. K., Treisman, A., & Savoy, R. L. (1997). Voluntary attention modulates activity in human MT-MST. Neuron, 18(4), 591598. Ohman, A. (1979). The orienting response, attention, and learning: An informationprocessing perspective. The orienting reflex in humans, 443–471. Ohman, A., Flykt, A., & Esteves, F. (2001). Emotion drives attention: Detecting the snake in the grass. 
Journal of Experimental Psychology: General, 130(3), 466478. O'Keefe, J., & Nadel, L. (1978). The Hippocampus as a Cognitive Map. New York: Oxford University Press. Orban, G. A., Dupont, P., De Bruyn, B., Vogels, R., Vandenberghe, R., & Mortelmans, L. (1995). A motion area in human visual cortex. Proceedings of the National Academy of Science USA, 92(4), 993-997. Osberger, W., & Maeder, A. J. (1998). Automatic Identification Of Perceptually Important Regions In Animage. Paper presented at the Fourteenth International Conference on Pattern Recognition. Pan, B., Hembrooke, H. A., Gay, G. K., Granka, L. A., Feusner, M. K., & Newman, J. K. (2004). The determinants of web page viewing behavior: An eye-tracking study. Proceedings of the 2004 symposium on Eye tracking research & applications, 147-154.  262 Pashler, H. E. (1998). The Psychology Of Attention. Cambridge, MA: MIT Press. Patla, A. E., & Vickers, J. N. (1997). Where and when do we look as we approach and step over an obstacle in the travel path? NeuroReport, 8(17), 3661-3665. Pelli, D. G. (1997). The videotoolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision, 10(44), 437-442. Pelphrey, K. A., & Morris, J. P. (2006). Brain mechanisms for interpreting the actions of others from biological-motion cues. Current Directions in Psychological Science, 15(3), 136 - 140. Pelphrey, K. A., Morris, J. P., Michelich, C. R., Allison, T., & McCarthy, G. (2005). Functional anatomy of biological motion perception in posterior temporal cortex: An fMRI study of eye, mouth, and hand movements. Cerebral Cortex, 15(12), 1866 - 1876. Perrett, D. I., Smith, P. A. J., Potter, D. D., Mistlin, A. J., Head, A. S., Milner, A. D., et al. (1985). Visual cells in the temporal cortex sensitive to face view and gaze direction. Proceedings of the Royal Society of London, B223(1232), 293-317. Pessoa, L., Kastner, S., & Ungerleider, L. G. (2003). Neuroimaging Studies Of Attention: From Modulation Of Sensory Processing To Top-Down Control (Vol. 23): Society for Neuroscience. Petersen, M. S., Kramer, A. F., & Irwin, D. E. (2004). Covert shifts of attention precede involuntary eye movements. Perception & Psychophysics, 66(3), 398-405. Pieters, R., & Wedel, M. (2004). Attention capture and transfer in advertising: Brand, pictorial, and text-size effects. Journal of Marketing, 68(2), 36-50.  263 Pieters, R., Rosbergen, E., & Wedel, M. (1999). Visual attention to repeated print advertising: A test of scanpath theory. Journal of Marketing Research, 36(4), 424438. Pinel, J. P. J. (1999). Biopsychology, 4th Edition: Allyn & Bacon. Posner, M. I. (1978). Chronometric Explorations Of Mind. Hillsdale, NJ: Lawrence Erlbaum Associates. Posner, M. I. (1980). Orienting of attention. Quarterly Journal of Experimental Psychology, 32(1), 3-25. Posner, M. I. (1988). Structures and functions of selective attention. In T. Boll & B. Bryant (Eds.), Master Lectures In Clinical Neuropsychology And Brain Function: Research, Measurement, And Practice (pp. 171-202). Washington, D.C.: American Psychological Association. Posner, M. I. (1990). Hierarchical distributed networks in the neuropsychology of selective attention. In A. Carmazza (Ed.), Cognitive Neuropsychology And Neurolinguistics: Advances In Models Of Cognitive Function And Impairment (pp. 187-210). Hillsdale, NJ: Erlbaum Associates Inc. Posner, M. I. (1992). Attention as a cognitive and neural system. Current Directions in Psychological Science, 1(1), 11-14. Posner, M. I. (1995). 
Attention in cognitive neuroscience: An overview. In M. S. Gazzaniga (Ed.), The Cognitive Neurosciences (pp. 615-624). Cambridge, MA: MIT Press.  264 Posner, M. I., & Cohen, Y. (1984). Components of visual orienting. In H. Bouma & D. G. Bouwhuis (Eds.), Attention & Performance (Vol. X, pp. 531-556). Hillsdale, NJ: Lawrence Erlbaum Associates Inc. Posner, M. I., & Petersen, S. E. (1990). The attention system of the human brain. Annual Review of Neuroscience, 13, 25-42. Posner, M. I., & Rothbart, M. K. (1991). Inhibition of return: Neural basis and function. In A. D. Milner & D. R. Michael (Eds.), The Neuropsychology Of Consciousness (pp. San Diego, CA): Academic Press Ltd. Posner, M. I., & Snyder, C. R. R. (1975). Attention and cognitive control. Information processing and cognition: The Loyola symposium, 55-85. Posner, M. I., Cohen, Y., & Rafal, R. D. (1981). Neural Systems Control Of Spatial Orienting. Paper presented at the Philosophical Transactions of the Royal Society of London, London, UK. Posner, M. I., Crippin, P. J., Cohen, A., & Rafal, R. (1986). Speed of covert orienting of attention and express saccades, Psychonomics Society. New Orleans. Posner, M. I., Nissen, M. J., & Ogden, W. C. (1978). Attended and unattended processing modes: The role of set for spatial location. In H. L. Pick & E. Saltzman (Eds.), Modes Of Perceiving And Processing Information (pp. 137-158). Hillsdale, NJ: Lawrence Erlbaum Associates Inc. Posner, M. I., Rafal, R. D., Choate, L. S., & Vaughan, J. (1985). Inhibition of return: Neural basis and function. Cognitive Neuropsychology, 2(3), 211-228. Posner, M. I., Snyder, C. R. R., & Davidson, B. J. (1980). Attention and the detection of signals. Journal of Experimental Psychology: General, 109(2), 160-174.  265 Posner, M. I., Walker, J. A., & Friedrich, F. J. (1987). How do the parietal lobes direct covert attention? Neuropsychologia (Special Issue: Selective Visual Attention), 25(1A), 135-145. Pratt, J., & Abrams, R. A. (1995). Inhibition of return to successively cued spatial locations. Journal of Experimental Psychology: Human Perception and Performance, 21(6), 1343-1353. Pratt, J., O'Donnell, C., & Morgan, A. (2000). The role of fixation location in inhibition of return. Canadian Journal of Experimental Psychology, 54(3), 186-195. Pratt, J., Sekuler, A. B., & McAuliffe, J. (2001). The role of attentional set on attentional cueing and inhibition of return. Visual Cognition, 8(1), 33-46. Priest, H. F., Cutting, J. E., Torrey, C. C., & Regan, D. (1985). Visual flow and direction of locomotion. Science, 227(4690), 1063-1065. Prime, D. J., & Ward, L. M. (2004). Inhibition of return from stimulus to response. Psychological Science, 15(4), 272-276. Prime, D. J., & Ward, L. M. (2004). Research report inhibition of return from stimulus to response. Psychological Science, 15(4), 272. Pritchett, A. R., Vandor, B., & Edwards, K. (2002). Testing and implementing cockpit alerting systems. Reliability Engineering and System Safety, 75(2), 193-206. Prokop, T., Schubert, M., & Berger, W. (1997). Visual influence on human locomotion. Experimental Brain Research, 114(1), 63-70. Puce, A., Allison, T., Bentin, S., Gore, J. C., & McCarthy, G. (1998). Temporal cortex activation in humans viewing eye and mouth movements. Journal of Neuroscience, 18(6), 2188-2199.  266 Quadflieg, S., Mason, M. F., & Macrae, C. N. (2004). The owl and the pussycat: Gaze cues and visuospatial orienting. Psychonomic Bulletin and Review, 11(5), 826831. Rafal, R. (1996). 
Visual attention: Converging operations from neurology and psychology. In Converging Operations In The Study Of Visual Selective Attention (pp. 139–192). Rafal, R. D., Posner, M. I., Friedman, J. H., Inhoff, A. W., & Bernstein, E. (1988). Orienting of visual attention in progressive supranuclear palsy. Brain, 111(2), 267–280. Rafal, R., & Egly, R. (1994). Effects of inhibition of return on voluntary and visually guided saccades. Canadian Journal of Experimental Psychology, 48(2), 384-300. Rafal, R., & Henik, A. (1994). The neurology of inhibition: Integrating controlled and automatic processes. Inhibitory Processes in Attention, Memory, and Language, 1-52. Rafal, R., & Robertson, L. (1995). The neurology of visual attention. In M. S. Gazzaniga (Ed.), (Vol. The Cognitive Neurosciences, pp. 625-648). Cambridge, MA: MIT Press. Rafal, R., & Smith, J. (1990). Extrageniculate vision in hemianopic humans: Saccade inhibition by signals in the blind field. Science, 250(4977), 118-121. Rafal, R., Calabresi, P., Brennan, C., & Sciolto, T. (1989). Saccade preparation inhibits reorienting to recently attended locations. Journal of Experimental Psychology: Human Perception and Performance, 15(4), 673-685.  267 Rajashekar, U., Bovik, A. C., & Cormack, L. K. (2006). Visual search in noise: Revealing the influence of structural cues by gaze-contingent classification image analysis. Journal of Vision, 6(4), 379-386. Rajashekar, U., Cormack, L. K., & Bovik, A. C. (2002). Visual search: Structure from noise. Proceedings of the symposium on Eye tracking research & applications, 119-123. Raymond, J. E. (1994). Directional anisotropy of motion sensitivity across the visual field. Vision Research, 34(8), 1029-1037. Regan, D., & Gray, R. (2001). Hitting what one wants to hit and missing what one wants to miss. Vision Research, 41(25-26), 3321-3329. Remington, R. (1978). Visual attention, detection, and the control of saccadic eye movements. University of Oregon. Remington, R., Johnston, J., & Yantis, S. (1992). Involuntary attentional capture by abrupt onsets. Perception & Psychophysics, 51(3), 279-290. Restle, F. (1980). The seer of ithica. (review of the ecological approach to visual perception). Contemporary Psychology, 25, 291. Reuter, L., P, A., Jha, A. P., & Rosenquist, J. N. (1996). What is inhibited in inhibition of return?  Journal  of  Experimental  Psychology:  Human  Perception  and  Performance, 22(2), 367-378. Riecke, B. E., Schulte-Pelkum, J., Avraamides, M. N., & Bülthoff, H. H. (2004). Enhancing the visually induced self-motion illusion (vection) under natural viewing conditions in virtual reality. Proceedings of Seventh Annual Workshop Presence, 125–132.  268 Riecke, B. E., Schulte-Pelkum, J., Caniard, F., & Bülthoff, H. H. (2005). Towards lean and elegant self-motion simulation in virtual reality. Proceedings of IEEE VR2005, 131–138. Riecke, B. E., Västfjäll, D., Larsson, P., & Schulte-Pelkum, J. (2005). Top-down and multi-modal influences on self-motion perception in virtual reality. HCI international. Ripoll, H., Fleurance, P., & Cazeneuve, D. (1987). Analysis of visual patterns of table tennis players. Eye movements: from physiology to cognition (ed JK O'Regan & A Lëvy-Schoen), 616-617. Ristic, J., & Kingstone, A. (2006). Attention to arrows: Pointing to a new direction. The Quarterly Journal of Experimental Psychology, 59(11), 1921-1930. Ristic, J., Friesen, C. K., & Kingstone, A. (2002). Are eyes special? It depends on how you look at it. Psychonomic Bulletin & Review, 9(3), 507-513. 
Rizzo, M., & Nawrot, M. (1998). Perception of movement and shape in Alzheimer's disease. Brain, 121(12), 2259-2270. Rizzolatti, G., Camarda, R., Fogassi, L., Gentilucci, M., Luppino, G., & Matelli, M. (1988). Functional organization of inferior area 6 in the macaque monkey. Experimental Brain Research, 71(3), 491-507. Rizzolatti, G., Fadiga, L., Matelli, M., Bettinardi, V., Paulesu, E., Perani, D., et al. (1996). Localization of grasp representations in humans by PET: 1. Observation versus execution. Experimental Brain Research, 111(2), 246-252.  269 Rizzolatti, G., Riggio, L., Dascola, I., & Umilta, C. (1987). Reorienting attention across the horizontal and vertical meridians: Evidence in favor of a premotor theory of attention. Neuropsycologia, 25(1A), 31-40. Rogers, S. (2000). The emerging concept of information. Ecological Psychology, 12(4), 335-343. Rogers, S. D., Kadar, E. E., & Costall, A. (2005). Gaze patterns in the visual control of straight-road driving and braking as a function of speed and expertise. Ecological Psychology, 17(1), 19-38. Royden, C. S., & Hildreth, E. C. (1996). Human heading judgments in the presence of moving objects. Perception & Psychophysics, 58(6), 836-856. Royden, C. S., & Hildreth, E. C. (1999). Differential effects of shared attention on perception of heading and 3-D object motion. Perception & Psychophysics, 61(1), 120-133. Royden, C. S., Banks, M. S., & Crowell, J. A. (1992). The perception of heading during eye movements. Nature, 360(6404), 583-585. Rushton, S. K., Harris, J. M., Lloyd, M. R., & Wann, J. P. (1998). Guidance of locomotion on foot uses perceived target location rather than optic flow. Current Biology, 8(21), 1191–1194. Sakata, H., Taira, M., Kusunoki, M., Murata, A., Tsutsui, K., Tanaka, Y., et al. (1999). Neural representation of three-dimensional features of manipulation objects with stereopsis. Experimental Brain Research, 128(1), 160-169. Sakata, H., Taira, M., Murata, A., Gallese, V., Tanaka, Y., Shikata, E., et al. (1997). Parietal visual neurons coding three-dimensional characteristics of objects and  270 their relation to hand action. In P. Thier & H. O. Karnath (Eds.), Parietal Lobe Contributions To Orientation In 3D Space (pp. 237-254). New York: Springer. Sanders, A. F. (1972). Foreperiod duration and the timecourse of preparation. Acta Psychologica, 36(1), 60-71. Sanders, A. F. (1975). The foreperiod effect revisited. Quarterly Journal of Experimental Psychology, 27(4), 591-598. Sanders, A. F., & Wertheim, A. H. (1973). The relation between physical and stimulus properties and the effect of foreperiod duration on reaction time. Quarterly Journal of Experimental Psychology, 25(2), 201-206. Sapir, A., Soroker, N., Berger, A., & Henik, A. (1999). Inhibition of return in spatial attention: Direct evidence for collicular generation. Nature Neuroscience, 2(12), 1053-1054. Sato, N., Sakata, H., Tanaka, Y. L., & Taira, M. (2006). Navigation-associated medial parietal neurons in monkeys. Proceedings of the National Academy of Sciences, 103(45), 17001. Schaafsma, S. J., & Duysens, J. (1996). Neurons in the ventral intraparietal area of awake macaque monkey closely resemble neurons in the dorsal part of the medial superior temporal area in their responses to optic flow patterns. Journal of Neurophysiology, 76(6), 4056-4068. Schneider, W., & Shiffrin, R. M. (1977). Controlled and automatic human information processing: 1. Detection, search, and attention. Psychological Review, 84(1), 1-66.  271 Schuller, A. M., & Rossion, B. (2005). 
Spatial attention triggered by eye gaze enhances and speeds up visual processing in upper and lower visual fields beyond early striate visual processing. Clinical Neurophysiology, 116(11), 2565-2576. Sereno, M. I., Pitzalis, S., & Martinez, A. (2001). Mapping of contralateral space in retinotopic coordinates by a parietal cortical area in humans. Science, 294(5545), 1350-1354. Shadlen, M. N., & Newsome, W. T. (1996). Motion perception: Seeing and deciding. Proceedings of the National Academy of Sciences USA, 93(2), 628-633. Shepard, M., Findlay, J. M., & Hockey, R. J. (1986). The relationship between eye movements and spatial attention. Quarterly Journal of Experimental Psychology, 38A(3), 475-491. Shiffrin, R. M., & Schneider, W. (1977). Controlled and automatic human information processing: II. Perceptual learning, automatic attending, and a general theory. Psychological Review, 84(2), 127-190. Shinoda, H., Hayhoe, M. M., & Shrivastava, A. (2001). What controls attention in natural environments? Vision Research, 41(25-26), 3535-3545. Shirai, N., Kanazawa, S., & Yamaguchi, M. K. (2006). Anisotropic motion coherence sensitivities to expansion/contraction motion in early infancy. Infant Behavior and Development, 29(2), 204-209. Sidaway, B., Fairweather, M., Sekiya, H., & McNitt-Ray, J. (1996). Time-to-collision estimation in a simulated driving task. Human Factors, 38(1). Siegel, R. M., & Read, H. L. (1997). Analysis of optic flow in the monkey parietal area 7a. Cerebral Cortex, 7, 327-346. Sinica, A. (1995). Identification of early visual evoked potential generators by retinotopic and topographic analyses. Human Brain Mapping, 2(3), 170-187. Smilek, D., Birmingham, E., Cameron, D., Bischof, W., & Kingstone, A. (2006). Cognitive ethology and exploring attention in real-world scenes. Brain Research, 1080(1), 101-119. Smith, A. T., & Snowden, R. J. (1994). Visual Detection of Motion. Academic Press. Snyder, J., & Kingstone, A. (2000). Inhibition of return and visual search: How many separate loci are inhibited? Perception & Psychophysics, 62(3), 452-458. Snyder, J., & Kingstone, A. (2001). Inhibition of return at multiple locations in visual search: When you see it and when you don't. Quarterly Journal of Experimental Psychology, 54A(4), 1221-1237. Snyder, L. H., Batista, A. P., & Andersen, R. A. (1997). Coding of intention in the posterior parietal cortex. Nature, 386(6621), 167-170. Spires, & Maguire. (2004). A landmark study on the neural basis of navigation. Nature Neuroscience, 7(6), 572-574. Spiro, J. E. (2001). Going with the (virtual) flow. Nature Neuroscience, 4(2), 213-216. Spitzer, H., Desimone, R., & Moran, J. (1988). Increased attention enhances both behavioral and neuronal performance. Science, 240(4850), 338. Squires, N. K., Squires, K. C., & Hillyard, S. A. (1975). Two varieties of long-latency positive waves evoked by unpredictable auditory stimuli in man. Electroencephalography and Clinical Neurophysiology, 38(4), 387-401. Strelow, E. R., & Brabyn, J. A. (1981). Use of foreground and background information in visually guided locomotion. Perception, 10(2), 191-198. Sun, H. J., Campos, J. L., & Chan, G. S. W. (2004). Multisensory integration in the estimation of relative path length. Experimental Brain Research, 154(2), 246-254. Sunaert, S., Van Hecke, P., Marchal, G., & Orban, G. A. (1999). Motion-responsive regions of the human brain. Experimental Brain Research, 127(4), 355-370. Tanaka, K., & Saito, H. (1989).
Analysis of motion of the visual field by direction, expansion/contraction, and rotation cells clustered in the dorsal part of the medial superior temporal area of the macaque monkey. Journal of Neurophysiology, 62(3), 626-641. Taylor, T. L., & Klein, R. M. (1998). On the causes and effects of inhibition of return. Psychonomic Bulletin and Review, 5(4), 625-643. Taylor, T. L., & Klein, R. M. (2000). Visual and motor effects in inhibition of return. Journal of Experimental Psychology: Human Perception and Performance, 26(5), 1639-1656. Tetewsky, S. J., & Duffy, C. J. (1999). Visual loss and getting lost in Alzheimer's disease. Neurology, 52(5), 958-958. Tetewsky, S., & Duffy, C. J. (1999). Visual loss and getting lost in Alzheimer's disease. Neurology, 52(5), 958-965. Theeuwes, J. (1991). Exogenous and endogenous control of attention: The effect of visual onsets and offsets. Perception & Psychophysics, 49(1), 83-90. Theeuwes, J. (1992). Perceptual selectivity for color and form. Perception & Psychophysics, 51(6), 599-606. Theeuwes, J. (1993). Visual selective attention: A theoretical analysis. Acta Psychologica, 83(2), 93-154.  274 Theeuwes, J. (1995). Abrupt luminance change pops out; abrupt color change does not. Perception & Psychophysics, 57(5), 637-644. Theeuwes, J., & Godijn, R. (2002). Irrelevant singletons capture attention: Evidence from inhibition of return. Perception & Psychophysics, 64(5), 764-770. Theeuwes, J., Kramer, A. F., & Atchley, P. (1998). Visual marking of old objects. Psychonomic Bulletin & Review, 5(1), 130-134. Theeuwes, J., Kramer, A. F., Hahn, S., & Irwin, D. E. (1998). Our eyes do not always go where we want them to go: Capture of the eyes by new objects. Psychological Science, 9(5), 379-385. Thomas, E. A. C. (1974). The selectivity of preparation. Psychological Review, 81(5), 442-464. Tipper, C., & Kingstone, A. (2005). Is inhibition of return a reflexive effect? Cognition, 97(3), B55-B62. Tipper, S. P., Driver, J., & Weaver, B. (1991). Object-centered inhibition of return of visual attention. Quarterly Journal of Experimental Psychology: Human Experimental Psychology, 43A(2), 289-298. Tipper, S. P., Jordan, H., & Weaver, B. (1999). Scene-based and object-centered inhibition of return: Evidence for dual orienting mechanisms. Perception & Psychophysics, 61(1), 50-60. Tipper, S. P., Lortie, C., & Baylis, G. C. (1992). Selective reaching: Evidence for actioncentered attention. Journal of Experimental Psychology: Human Perception and Performance, 18(4), 891-905.  275 Tipples, J. (2002). Eye gaze is not unique: Automatic orienting in response to uninformative arrows. Psychonomic Bulletin and Review, 9(2), 314-318. Tootell, R. B. H., Mendola, J. D., Hadjikhani, N. K., Ledden, P. J., Liu, A. K., Reppas, J. B., et al. (1997). Functional analysis of v3a and related areas in human visual cortex. Journal of Neuroscience, 17(18), 7060-7078. Treisman, A. M. (1988). Features and objects: The fourteenth bartlett memorial lecture. The Quarterly Journal of Experimental Psychology, 40(2), 201-237. Treisman, A., & Gelade, G. (1980). A feature-integration theory of vision. Cognitive Psychology, 12, 97-136. Treue, S. (2001). Neural correlates of attention in primate visual cortex. Trends in Neurosciences, 24(5), 295-300. Treue, S., & Maunsell, J. H. R. (1996). Attentional modulation of visual motion processing in cortical areas MT and MST. Nature, 382(6591), 539-541. Turano, K. A., Geruschat, D. R., & Baker, F. H. (2003). 
Oculomotor strategies for the direction of gaze tested with a real-world activity. Vision Research, 43(3), 333346. Underwood, G., Chapman, P., Brocklehurst, N., Underwood, J., & Crundall, D. (2003). Visual attention while driving: Sequences of eye fixations made by experienced and novice drivers. Ergonomics, 46(6), 629-646. Ungerleider, L. G., & Mishkin, M. (1982). Two cortical visual systems. In Analysis Of Visual Behavior (pp. 549–586). Vaina, L. M., & Rushton, S. K. (2000). What neurological patients tell us about the use of optic flow. International Review of Neurobiology, 44, 293-313.  276 Van Voorhis, S., & Hillyard, S. A. (1977). Visual evoked potentials and selective attention to points in space. Perception & Psychophysics, 22, 54–62. Vecera, S. P., & Rizzo, M. (2006). Eye gaze does not produce reflexive shifts of attention: Evidence from frontal lobe damage. Neuropsychologia, 44(1), 150-159. Vertegaal, R. (2002). Designing attentive interfaces. Proceedings of the symposium on Eye tracking research & applications, 23-30. Vogel, E. K., & Luck, S. J. (2000). The visual N1 component as an index of a discrimination process. Psychophysiology, 37(02), 190-203. von Mühlenen, A., Rempel, M. I., & Enns, J. T. (2005). Research article unique temporal change is the key to attentional capture. Psychological Science, 16(12), 979. Vuilleumier, P. (2002). Facial expression and selective attention. Current Opinions in Psychiatry, 15(3), 291-300. Walker, R., Walker, D., Husain, M., & Kennard, C. (2000). Control of voluntary and reflexive saccades. Experimental Brain Research, 130(4), 540-544. Waller, D., Hunt, E., & Knapp, D. (1998). The transfer of spatial knowledge in virtual environment training. Presence: Teleoperators & Virtual Environments, 7(2), 129-143. Walter, W. G., Cooper, R., Aldridge, V. J., McCallum, W. C., & Winter, A. L. (1964). Contingent negative variation: An electrical sign of sensorimotor association and expectancy in the human brain. Nature, 203, 380-384. Wann, J., & Land, M. (2000). Steering with or without the flow: Is the retrieval of heading necessary? Trends in Cognitive Sciences, 4(8), 319-324.  277 Warren Jr, W. H., & Saunders, J. A. (1995). Perceiving heading in the presence of moving objects. Perception, 24(3), 315-331. Warren Jr, W. H., Kay, B. A., Zosh, W. D., Duchon, A. P., & Sahuc, S. (2001). Optic flow is used to control human walking. Nature Neuroscience, 4(2), 213-216. Warren, W. H., & Hannon, D. J. (1988). Direction of self-motion is perceived from optical flow. Nature, 336(6195), 162-163. Watamaniuk, S. N. J. (1993). Ideal observer for discrimination of the global direction of dynamic random-dot stimuli. Journal of the Optical Society of America A, 10, 1628. Weinberg, H. (1972). The contingent negative variation: Its relation to feedback and expectant attention. Neuropsycologia, 10, 299-306. Woldorff, M. G., Fox, P. T., Matzke, M., Lancaster, J. L., Veeraswamy, S., Zamarripa, F., et al. (1997). Retinotopic organization of early visual spatial attention effects as revealed by PET and erps. Human Brain Mapping, 5(4), 280-286. Wright, M. J., Geffen, G. M., & Geffen, L. B. (1995). Event related potentials during covert orientation of visual attention: Effects of cue validity and directionality. Biological Psychology, 41(2), 183-202. Yantis, S. (1993). Stimulus-driven attentional capture and attentional control settings. Journal of Experimental Psychology: Human Perception and Performance, 19(3), 676-681. Yantis, S. (1993). Stimulus-driven attentional capture. 
Current Directions in Psychological Science, 2(5), 156-161. Yantis, S., & Hillstrom, A. P. (1994). Stimulus-driven attentional capture: Evidence from equiluminant visual objects. Journal of Experimental Psychology: Human Perception and Performance, 20(1), 95-107. Yantis, S., & Johnston, J. C. (1990). On the locus of visual selection: Evidence from focused attention tasks. Journal of Experimental Psychology: Human Perception and Performance, 16(1), 135-149. Yantis, S., & Jonides, J. (1984). Abrupt visual onsets and selective attention: Evidence from visual search. Journal of Experimental Psychology: Human Perception and Performance, 10(5), 601-621. Yantis, S., & Jonides, J. (1990). Abrupt visual onsets and selective attention: Voluntary versus automatic allocation. Journal of Experimental Psychology: Human Perception and Performance, 16(1), 121-134. Yarbus, A. L. (1967). Eye Movements and Vision. New York: Plenum Press. Yee, H., Pattanaik, S., & Greenberg, D. P. (2001). Spatiotemporal sensitivity and visual attention for efficient rendering of dynamic environments. ACM Transactions on Graphics (TOG), 20(1), 39-65. Zeki, S. (1993). A Vision of the Brain. Oxford: Blackwell Scientific Publications. Zeki, S., Watson, J. D., Lueck, C. J., Friston, K. J., Kennard, C., & Frackowiak, R. S. (1991). A direct demonstration of functional specialization in human visual cortex. Journal of Neuroscience, 11(3), 641-649.

Appendix A: Neural Processing of Optic Flow

The perception of heading from the information provided by the optic flow field, including the FOE, depends upon the integration of activity at multiple levels of the brain's complex, hierarchically organized motion processing system (Culham et al., 2001; Zeki, 1993). The necessary involvement of a large network of brain regions in the perception of optic flow underscores the complexity of optic flow in general, and of the FOE in particular, as visual stimuli. Higher-level associative processing in the visual system is divided into two anatomically distinct yet interacting dorsal and ventral processing streams. An abundance of neuropsychological (Goodale & Milner, 1992; Jeannerod, 1999; Ungerleider & Mishkin, 1982), neurophysiological (Rizzolatti et al., 1988; Sakata et al., 1997), and neuroimaging (Decety, 1999) research suggests that the ventral and dorsal streams are functionally distinct, each specialized to process different aspects of visual information. The ventral pathway, often referred to as the “what” pathway, projects to inferotemporal cortex, which is critical to color and form discrimination, object recognition, and possibly perceptual awareness. The dorsal pathway, called the “how” pathway, projects to posterior parietal cortex and medial temporal regions, and is thought to be critical to the visuomotor processes essential for visually guided action.

Motion processing is one domain of the dorsal visual stream. Neural sensitivity to motion begins in the magnocellular layers of the lateral geniculate nucleus (LGN) of the thalamus, which have response properties conducive to motion detection, such as faster, more transient firing in response to changing luminance (Culham et al., 2001). Directionally selective cells are found in areas of V1 (the primary visual cortex) receiving magnocellular LGN inputs (Hubel & Wiesel, 1977).
These signals are spatially integrated such that the complexity of directional selectivity increases as motion information makes its way through the system from V1 to V2 and V3, to V3A, and on to V5/MT+ (Tootell et al., 1995; Tootell et al., 1997; Cavanagh & Mather, 1989; Culham, 2001). The integration of increasingly specific motion signals enables sensitivity to the complex motion information especially relevant to the perception of optic flow. While MT is responsive to coherent patterns of translational motion, neurophysiological recordings in the macaque monkey suggest that MT does not select well for radial motion (e.g. Mikami et al., 1986), a significant component of an optic flow field generated by locomotion. MT inputs converging in macaque MST, however, give this region large and complex receptive fields that are responsive to the complex motion that comprises optic flow, including radial expansion, rotation, and spiral motion (Duffy & Wurtz, 1991; Lappe et al., 1996; Tanaka & Saito, 1989). This sensitivity to radial flow, combined with a topographical mapping of the visual field, enables the coding of heading direction in a population of MST cells, as the heading point would be localized to the region of the visual field represented by the cells exhibiting maximal firing rates (Andersen, 1997). MST neurons also receive extra-retinal inputs, which allow them to account for eye movement-related changes in retinal stimulation by re-tuning their response properties (Bradley et al., 1996).

The differential sensitivity to translational motion and radial flow seen in macaque MT and MST is paralleled in the human system, with different regions of the human MT+ complex exhibiting selectivity to different components of motion. Morrone and colleagues (2000), for example, demonstrated differential responses to global translational motion and to components of optic flow (radial and rotational motion) in two separate regions within the human MT+ complex. While both types of stimuli induced a robust activation of MT+, the region of cortex activated by radial/rotational motion was more ventral than the region activated by translational motion.

Higher cortical areas in the dorsal visual stream that receive input from the MT+ complex also demonstrate sensitivity to the motion components associated with optic flow. Neurons in the ventral intraparietal area (VIP) exhibit response properties similar to those of MST/MT+ (Schaafsma & Duysens, 1996). Unlike MST neurons, however, these cells are responsive to spiraling and radial expansion but not to translation. The cells of the anterior portion of the superior temporal gyrus (STG) are particularly responsive to the radial expansion associated with forward movement (Andersen & Siegel, 1999). Andersen and Siegel found that a smaller proportion of neurons in this area are also responsive to rotation, spiraling, and, to a lesser extent, translation. The lateral intraparietal region (LIP) also demonstrates sensitivity to complex flow stimuli (Shadlen & Newsome, 1996). These authors found that LIP neurons display responsiveness to both sensory and motor signals, and conclude that LIP carries out a critical role in the translation between visual perception and visually guided action, particularly with respect to intentional eye movements. The LIP region projects to Area 7a, in which some cells are selective for a particular type of flow (e.g. radial vs. rotational), some cells are selective for particular flow directions (e.g. radial expansion vs.
compression), and some cells have a general response to different kinds and directions of flow stimuli (Siegel & Read, 1997). Siegel and Read argue that Area 7a contributes a motion representation that is distinct from that of MST or LIP, and that may be critical to representing flow specific to particular egocentric movements.

In summary, our sensitivity to the complex motion stimuli present in an optic flow field requires the integration of signals throughout a hierarchy of cortical visual processing regions. The heading point in an optic flow field is therefore a complex, behaviorally relevant visual stimulus represented at a high level in the dorsal stream of the visual system.
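To make the relation between heading and the optic flow field concrete, the sketch below generates a synthetic radial flow field and recovers its FOE from the most responsive unit in a bank of expansion templates, loosely in the spirit of the MST population coding described above. It is purely an illustrative sketch and not an analysis used in this dissertation: it assumes pure observer translation with no eye rotation toward points at constant depth (so flow magnitude grows linearly with distance from the FOE, following the image-velocity equations of Longuet-Higgins & Prazdny, 1980), and the grid of "expansion-tuned" units, their Gaussian receptive-field weighting, and all parameter values are hypothetical placeholders.

```python
# Illustrative sketch only: under pure observer translation (Tx, Ty, Tz) with no
# eye rotation, a point at image position (x, y) and depth Z moves with velocity
#     u = (x*Tz - f*Tx) / Z,    v = (y*Tz - f*Ty) / Z,
# so every flow vector radiates from the focus of expansion (FOE) located at
# (f*Tx/Tz, f*Ty/Tz). A coarse grid of hypothetical "expansion-tuned" units then
# recovers heading as the location of the maximally responding unit, loosely
# analogous to reading out a population code over MST-like detectors.

import numpy as np


def radial_flow(foe, n=400, extent=10.0, seed=0):
    """Sample image positions and the expansion flow vectors they would carry."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-extent, extent, size=(n, 2))
    vec = pts - np.asarray(foe, dtype=float)  # expansion: motion points away from the FOE
    return pts, vec


def decode_heading(pts, vec, extent=10.0, grid=21, sigma=2.0):
    """Return the grid location whose local expansion template best matches the flow."""
    flow_dir = vec / (np.linalg.norm(vec, axis=1, keepdims=True) + 1e-9)
    centers = np.linspace(-extent, extent, grid)
    best, best_score = None, -np.inf
    for cx in centers:
        for cy in centers:
            offset = pts - np.array([cx, cy])
            template = offset / (np.linalg.norm(offset, axis=1, keepdims=True) + 1e-9)
            weight = np.exp(-np.sum(offset**2, axis=1) / (2 * sigma**2))   # local receptive field
            score = np.sum(weight * np.sum(flow_dir * template, axis=1))   # alignment with expansion
            if score > best_score:
                best, best_score = (cx, cy), score
    return best


if __name__ == "__main__":
    true_foe = (3.0, -2.0)                                   # simulated heading point
    pts, vec = radial_flow(true_foe)
    print("decoded heading:", decode_heading(pts, vec))      # expected to land near (3.0, -2.0)
```

The design choice mirrors the appendix's description: each unit responds most strongly when the local flow is radially expanding away from its preferred location, so the unit whose preferred location coincides with the FOE fires maximally and the heading point falls in the part of the visual field that unit represents.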
