DO ACTION-RELEVANT PROPERTIES OF OBJECTS CAPTURE ATTENTION AND PRIME ACTION?

by

MELANIE YAH-WAI LAM
B.A., Simon Fraser University, 2003

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE in THE FACULTY OF GRADUATE STUDIES (Human Kinetics)

THE UNIVERSITY OF BRITISH COLUMBIA
October 2006
© Melanie Yah-Wai Lam, 2006

ABSTRACT

In a recent series of studies, Tucker and Ellis (Ellis & Tucker, 2000; Tucker & Ellis, 1998) have proposed that objects automatically prime components of the potential actions that they afford. Recent work, however, suggests that some form of cognitive coding or attentional mechanism may instead be responsible for the orientation effect described by Tucker and Ellis (1998) (Lyons et al., 2001; Anderson et al., 2002; Phillips & Ward, 2002). The primary purpose of the experiments reported here was to examine the orientation effect further and to investigate whether attention is captured by the action-relevant properties of objects. As a means of investigating whether attention was indeed directed to the action-relevant feature of an object, we assessed eye movement behaviour during the perception of the object. In Experiment 1, we sought to perform a conceptual replication of Tucker and Ellis's (1998) original experiment and to reproduce the orientation effect. Participants made speeded judgements of the vertical orientation of a common household object that was presented in varying vertical and horizontal orientations. The results revealed an absence of eye movements, suggesting that attention may not be overtly captured by the action-relevant property (the handle) of the presented object. In addition, our reaction time (RT) results did not reveal an interaction between horizontal orientation and response. In Experiment 2, we asked participants to judge the horizontal, instead of the vertical, orientation of the presented object, to examine the orientation effect when horizontal orientation was actually relevant to the task. The pattern of eye movements largely replicated that of Experiment 1. In contrast to the RT results of Experiment 1, there was a trend toward an orientation effect such that participants responded more quickly when the horizontal orientation overlapped spatially with the response set. Taken together, the results of Experiments 1 and 2 suggest a potential influence of intention and of the stimulus dimension relevant to task performance. In Experiment 3, we examined how participants' intentional set modulates the influence of an object's action-relevant features by varying the relevant stimulus dimension and the stimulus-response mapping instructions. The results showed that when the horizontal orientation was the relevant dimension to identify, the handle orientation influenced the responding hand, but not when the vertical orientation was the relevant dimension. This suggests that RT was unaffected by the orientation of the object along the task-irrelevant dimension.

TABLE OF CONTENTS

Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgments
1. Introduction
1.1 Graspable Objects Automatically Prime Motor Responses
1.2 Affordance or Cognitive Coding?
1.3 Do Action-Relevant Properties of Objects Capture Attention?
2. Experiment 1
2.1 Methods
2.1.1 Participants
2.1.2 Apparatus and Stimuli
2.1.3 Design and Procedure
2.1.4 Data Reduction
2.2 Results and Discussion
2.2.1 Reaction Time
2.2.2 Eye Movements
2.2.3 Summary
3. Experiment 2
3.1 Methods
3.1.1 Participants
3.1.2 Apparatus and Stimuli
3.1.3 Design and Procedure
3.2 Results and Discussion
3.2.1 Reaction Time
3.2.2 Eye Movements
3.2.3 Summary
4. Experiment 3
4.1 Methods
4.1.1 Participants
4.1.2 Apparatus and Stimuli
4.1.3 Design and Procedure
4.2 Results and Discussion
4.2.1 Reaction Time
4.2.2 Summary
5. General Discussion
References
Appendix A
Appendix B

LIST OF TABLES

Table 1.1 Stimulus set used in Experiments 1 and 2

LIST OF FIGURES

Figure 1.1 Examples of the type of stimuli used by Tucker and Ellis (1998)
Figure 1.2 Mean RTs as a function of left-right object orientation and response (adapted from Tucker & Ellis, 1998)
Figure 1.3 Examples of stimuli used by Lyons et al. (2000): (a) and (b) one-handled objects; (c) neutral or two-handled object (from Lyons et al., 2000)
Figure 1.4 Mean RTs as a function of handle orientation and response (adapted from Lyons et al., 2000)
Figure 2.1 Schematic of experiment setup
Figure 2.2 Schematic of display apparatus (a) from participant's viewpoint, (b) from experimenter's viewpoint
Figure 2.3 Mean RT as a function of horizontal orientation and hand of response
Figure 2.4 Examples of eye movement recordings in which a saccade was observed. Top and middle panels: saccades directed to the right of fixation. Bottom panel: saccade directed to the left of fixation
Figure 2.5 Typical examples of eye movement recordings during the experimental block of trials. In contrast to the post-experiment block, no saccades were observed during these trials
Figure 3.1 Mean RT as a function of left-right handle orientation and hand of response
Figure 3.2 Examples of eye movement recordings in which no saccade was observed. These trials were typical of the experimental block
Figure 3.3 Examples of eye movement recordings in which a saccade was made: (a) saccade directed to the left-oriented handle of the stimulus during an experimental trial; (b) saccade directed to the left-oriented handle during a control trial; (c) saccade directed to the right-oriented handle during an experimental trial; (d) saccade directed to the right-oriented handle during a control trial
Figure 4.1 (a) Experiment setup and apparatus; (b) setup from the participant's point of view
Figure 4.2 Stimulus set used in Experiment 3
Figure 4.3 Mean RT as a function of horizontal orientation and response hand
Figure 4.4 Three-way interaction between relevant dimension, horizontal orientation, and hand of response. The graph on the left illustrates when the horizontal dimension is relevant to the task; the graph on the right, when the vertical orientation is relevant
Figure 4.5 Influence of the irrelevant dimension on the speed of response: (a) the influence of horizontal object orientation when identifying the vertical orientation; (b) the influence of vertical object orientation when identifying the horizontal orientation
Figure 4.6 Influence of the relevant dimension on the speed of response: (a) the influence of horizontal object orientation when identifying the horizontal orientation; (b) the influence of vertical object orientation when identifying the vertical orientation
Figure B1.1 Example of drift in eye movement recording. The arrows denote the start and end of the deviation

ACKNOWLEDGMENTS

I can no other answer make, but, thanks, and thanks.
- William Shakespeare

First and foremost, I would like to take this opportunity to thank my supervisor, Dr. Chua. Romeo, the support and feedback that you have offered me throughout my two years were above and beyond anything I could have asked for. You are selfless when it comes to the time you take to answer our questions and to ensure that we are all doing well. This will undoubtedly shape and influence the kind of teacher that I aspire to be. I am also greatly indebted to the individual who encouraged me to apply to graduate school and piqued my interest in human motor control, Dr. Weeks. Dan, if you had not asked me to consider doing my Master's, I would have never found myself in the position I am in now, and for that I am sincerely thankful. I would also like to extend my gratitude to Dr. Franks for the insight he has imparted throughout this project and the wealth of knowledge he has shared in his graduate course. To my lab mates, Erin and Brendan, I could not have asked for more thoughtful individuals to share this experience with. For the time you both took in answering my questions and for the advice that you offered, I am truly grateful. To my parents and sister, Aimee, I am blessed to have a family made up of people as loving as the three of you. And finally, to my partner, Dave, thank you for your encouragement, patience, and love.

1. INTRODUCTION

One needs only to glance around to experience the wealth of information that our visual system provides. For example, when you decide to reach for a cup of coffee on the kitchen table, your visual system will extract information about the cup's properties (i.e., location, shape, and orientation), the table, and the relationship of the cup relative to yourself and surrounding objects. Intuitively, it makes sense for the visual system to extract and integrate all of this information so that one is aware of the surroundings before beginning to plan an appropriate reaching movement to the cup of coffee. Once the information has been processed and the appropriate action selected, the planned action can then be executed. From a review of the relevant literature, however, it is apparent that visual processing culminating in action does not have to occur in the serial manner suggested above. Indeed, it has been suggested that seeing an object can automatically "afford" (Gibson, 1966)[1] an action even in the absence of a movement. For example, recent evidence has shown that the simple viewing of a functional object or "tool"[2] (e.g., scissors, hammer) can prime or activate motor-related areas of the brain (e.g., Creem-Regehr & Lee, 2005; Grafton, Fadiga, Arbib, & Rizzolatti, 1997), or even draw attention to its location once its affordance properties have been realized (Handy, Grafton, Shroff, Ketay, & Gazzaniga, 2003).[3]

[1] The term "affordance" was first introduced by James Gibson (1966) to encapsulate the notion that features of our environment, or the properties of an object, can indicate and influence future action. The handle on a door, for instance, affords a pulling action, while a lever, suitable to grasp, implies a turning action. Gibson's (1979) theory of affordances suggests that objects afford all possible actions, and thus actions afforded by an object are unconstrained by one's capacity to perceive action possibilities.
[2] A tool can be described as an object that is functionally specific for a particular task. Consider a cooking spatula consisting of an elongated handle with a flat, flexible metal head on the end. If we grasp the spatula by the handle, it is likely to afford a scooping, lifting, or flipping type of action; however, there are less conventional ways to manipulate the same tool. Should something fall down the side of the stove out of our reach, that same spatula could be grasped by the flat end in order to slide its thin handle down a narrow opening. Despite the number of different ways an object can be manipulated, there is generally one explicit function with which it is associated.

[3] For a brief review, see Appendix A.

A number of recent studies by Tucker, Ellis, and colleagues (e.g., Ellis & Tucker, 2000; Derbyshire, Ellis, & Tucker, 2006; Grezes, Tucker, Armony, Ellis, & Passingham, 2003; Symes, Tucker, & Ellis, 2005; Tucker & Ellis, 2001) have argued that the action-relevant, or affordance, properties of objects potentiate the action possibilities that might be associated with them. The impetus for these investigations was an earlier study by Tucker and Ellis (1998) in which the authors argued that the perception of an object can automatically prime the actions afforded by that object, irrespective of a person's intention.

1.1 Graspable Objects Automatically Prime Motor Responses

Tucker and Ellis (1998) presented photographs of household objects (e.g., frying pan, teapot, and hammer) to participants (see Figure 1.1). These objects were selected because they had a specific action-relevant feature (i.e., their graspable surface, or handle). Participants were asked to judge the vertical orientation of the presented object using left and right key-press responses. The objects were presented either right-side up or upside-down, and with the graspable surface oriented either leftward or rightward. The basic premise of this paradigm (modeled after stimulus-response compatibility protocols; see Proctor & Reeve, 1990, for a review) was that if the action-relevant feature (i.e., the handle) of the object was represented automatically and primed the action associated with it, then there should be preferential facilitation of the response hand most suited to perform the action. This, in turn, should facilitate speeded responses by that hand.

Figure 1.1: Examples of the type of stimuli used by Tucker and Ellis (1998).

The basic pattern of results is shown in Figure 1.2. Although the horizontal orientation of the graspable surface was irrelevant to the task, reaction times were nevertheless influenced by the object's horizontal orientation.

Figure 1.2: Mean RTs and error rates as a function of left-right object orientation and response (adapted from Tucker & Ellis, 1998).

As shown in Figure 1.2, responses based on the object's vertical orientation were faster if the graspable surface of the object was also oriented to the side corresponding to the appropriate response. Based on their findings, Tucker and Ellis (1998) concluded that "certain action-related information—in this case the hand most suited to grasp the object—is represented automatically, regardless of intention, when the object is viewed in the peripersonal space" (p. 836). Tucker and Ellis argued that the facilitative influence of the irrelevant handle orientation on responding supported the proposal that perceived objects automatically afford an appropriate action (cf. Michaels, 1988).
The orientation of the graspable surface facilitated the motor response associated with it and therefore preferentially activated the hand most suited to perform that response. This "orientation effect" is reminiscent of the well-known Simon effect (e.g., Simon & Rudell, 1967), in which properties of the stimulus that are irrelevant to the task (in the case of the Simon effect, the object's spatial location) remain a determining factor in response latencies. Explanations of the Simon effect have typically been based upon variations of either "cognitive coding" or "attention-orienting" hypotheses. Coding-based accounts (e.g., Wallace, 1971) propose that the correspondence between stimulus codes and response codes is the critical factor in determining response times. Attention-based accounts argue for the influence of attentional biases on responding (e.g., Simon, 1990; Verfaille, Bowers, & Heilman, 1988). These two accounts have also been unified by proposals suggesting that the stimulus code may arise from a reorienting of attention (e.g., Umilta & Nicoletti, 1992).

1.2 Affordance or Cognitive Coding?

In the present series of experiments, we revisited the orientation effect observed by Tucker and Ellis (1998). One of our interests was in determining whether the action-relevant property (e.g., a graspable handle) of an object can capture overt visual attention. There may be reason to suspect that Tucker and Ellis's (1998) action priming effects were brought about by a mechanism involving attention orienting or cognitive coding. This suspicion comes from studies that followed Tucker and Ellis (1998). Lyons and colleagues (Lyons, 2001; Lyons, Weeks, & Chua, 2000) performed a variation of Tucker and Ellis's experimental protocol in which they included photographs of "neutral" objects that had symmetrical graspable surfaces. Figure 1.3 shows an example of a vase with two handles that could be grasped with either the left or right hand, on either the left or right side.

Figure 1.3: Examples of stimuli used by Lyons et al. (2000). (a) and (b) one-handled objects; (c) neutral or two-handled object. (From Lyons et al., 2000.)

Lyons et al. found that the orientation effect originally identified by Tucker and Ellis was eliminated across all stimuli when neutral objects were included in the stimulus set (see Figure 1.4).

Figure 1.4: Mean RTs as a function of handle orientation and response (adapted from Lyons et al., 2000).

That is, the graspable surface (i.e., the handle) no longer had an influence on responding. Lyons et al. argued that these results called into question Tucker and Ellis's assertion that the objects' graspable surfaces automatically afforded and primed the potential action. Lyons et al. proposed that the sensitivity of these object-based compatibility effects to task context may imply some form of cognitive coding mechanism, as opposed to an affordance-based mechanism. They proposed, for example, that the response priming that Tucker and Ellis observed may be due to the orienting of attention to a salient object feature, or to a salient perceptual asymmetry (e.g., the protruding handle) in the object. A role for attention has also recently been suggested by other investigators (e.g., Anderson, Yamagishi, & Karavia, 2002; Phillips & Ward, 2002). Anderson and colleagues (2002) argued for an attentional account of the orientation effect.
They presented line drawings of two types of stimuli: a pair of scissors with handles oriented either left or right, and a clock face indicating a time of about 3:15 or 8:45. Participants judged whether the stimuli were oriented clockwise or counter-clockwise. Participants were faster at judging orientations when the required response (left or right hand) was congruent with the side of the scissors' handles, or with the hands of the clock face. While the results for the scissors could be taken in support of Tucker and Ellis's (1998) action potentiation, Anderson et al. argued that the results for the clock-face stimuli instead suggested that stimulus asymmetries produced an attentional bias toward the asymmetry. A similar result was found when they used non-object stimuli that had a perceptually salient asymmetry. Anderson et al. argued that the latter stimuli (clock face, non-object shape) would not be expected to afford a particular action, and thus the orientation effects observed could not have been due to an automatic potentiation of a motor response.

1.3 Do Action-Relevant Properties of Objects Capture Attention?

Motivated by the studies of Lyons (2001; Lyons et al., 2000) and Anderson et al. (2002), we reconsidered the orientation effect observed by Tucker and Ellis (1998) from a perspective that further investigates a potential role for an attention-orienting mechanism. Based on the work by Handy et al. (2003), Tucker and Ellis (1998, 2000), and others (e.g., Anderson et al., 2002), the basic issue addressed centred on the following question: when an object - a tool - is presented to an observer, is attention oriented initially to a feature that informs the observer of the object's affordance (i.e., the object's action-relevant feature)? Following the logic suggested by Lyons (2001), this orienting of attention could result in the generation of a stimulus code (e.g., Umilta & Nicoletti, 1990, 1992) that subsequently influences the selection of a response to the object. If the direction in which attention is shifted is spatially compatible with the side of response, then responses would be expected to be facilitated. In addition, once the affordance of the object has been recognized as a result of attention to its action-relevant feature, attention may remain engaged at the location of the object (and perhaps at the specific action-relevant feature). This latter point would be consistent with the recent observations by Handy and colleagues (2003) on how graspable objects appear to "grab" attention. The primary purpose of the experiments reported here was to further examine the orientation effect (Tucker & Ellis, 1998) and to investigate whether attention is captured by the action-relevant properties of objects. As a starting point for these investigations, we opted to examine whether the presentation of a functional object (e.g., a "tool" with a graspable handle) resulted in the overt orienting of attention toward its action-relevant feature (i.e., the graspable handle).[4] As a means of investigating whether attention was indeed directed to the action-relevant feature of an object, we assessed eye movement behaviour immediately following presentation. We expected a significant proportion of saccades to be directed initially toward the action-relevant feature of the object when it was presented.
We also chose to use real objects, rather than photographs (e.g., Lyons et al., 2000; Phillips & Ward, 2002; Symes et al., 2005; Tucker & Ellis, 1998, 2004) or line drawings (e.g., Anderson et al., 2002). The basic rationale for this decision was the expectation that any affordance-based effects should be more robust when an actual object, rather than a visual representation of an object, is presented.

[4] It is well established that attention can be oriented covertly, i.e., in the absence of eye movements (e.g., Posner, 1980). However, evidence has also supported a tight coupling between attention shifts and eye movements (e.g., Deubel & Schneider, 1996; Rizzolatti, Riggio, Dascola, & Umilta, 1987; Sheliga, Riggio, & Rizzolatti, 1994). Evidence for a relation between attention and eye movements has been found in studies that have examined whether attentional capture and oculomotor capture measure the same underlying attentional processes. For example, Ludwig and Gilchrist (2002) have shown that oculomotor and attentional capture may have similar underlying processes when the response requires directional localization. Based on the line of reasoning that attentional capture and oculomotor capture may both be a means of measuring the locus of attention (particularly when the eyes are free to move), we used oculomotor capture as a tool to try to identify where attention was being directed.

In Experiment 1, we sought to perform a conceptual replication of Tucker and Ellis's (1998) original experiment and to reproduce the orientation effect. Participants made speeded judgements of the vertical orientation of a common household object that was presented in varying vertical and horizontal orientations. In Experiment 2, we asked participants to judge the horizontal, instead of the vertical, orientation of the presented object, to examine the orientation effect when horizontal orientation was actually relevant to the task. In both experiments, we assessed eye movement behaviour concurrently with manual response performance. In Experiment 3, we examined how participants' intentional set modulates the influence of the objects' action-relevant features by varying the relevant stimulus dimension and the stimulus-response mapping instructions.

2. EXPERIMENT 1

Adopting the stimulus-response compatibility protocol used by Tucker and Ellis (1998), we presented participants with common functional objects ("tools", e.g., a cup, a teapot, a small pitcher) that were oriented either right-side up or upside-down, and with their handle oriented either to the left or to the right. Participants made a left or right key-press response according to the vertical orientation of the object; horizontal orientation was irrelevant to the task. Eye movements were recorded as a means of inferring where attention was directed when the object was presented. If the action-relevant handle of the objects primes the associated response, then we expected responses to the vertical orientation of the objects to be facilitated when the handle was oriented congruently with the responding hand (e.g., Tucker & Ellis, 1998). If the same action-relevant handle also captured overt visual attention, then we expected small-amplitude saccades to be directed toward the handle after object presentation and before the manual response.
2.1 Methods

2.1.1 Participants

Twelve (seven females and five males; all right-handed; M age 24.8 years) members of the student population of the University of British Columbia participated in this experiment. All had normal or corrected-to-normal vision. The study was conducted in accordance with the ethical guidelines set by the University of British Columbia. Each participant provided informed consent and was compensated $10 for their time. All but one were naive as to the purpose of the study.

2.1.2 Apparatus and Stimuli

Participants were seated in a height-adjustable chair directly in front of the display apparatus, which rested on a table (see Figure 2.1 for a schematic of the setup). A chin rest was attached to the table at a height of 28 cm and was used to stabilize the viewing position at a distance of 58 cm. Two custom-built response boxes were used to record button-press responses made with the index finger of each hand (the response buttons were separated by 42 cm). Participants wore a headband to provide comfort when fitted with the head-mounted eye tracker.

Figure 2.1: Schematic of the experiment setup. Participants were fitted with the ASL eye tracker. A chin rest was placed at the front of the table and the stimulus display apparatus was positioned at the opposite end of the table. The response apparatus was placed on the table and participants sat in a height-adjustable chair.

The stimulus set for this experiment consisted of six common household objects with handles (teacup, milk jug, gravy boat, watering can, coffee mug, and teapot) that could be associated naturally with a grasping action. These objects differed in size and shape; colour was consistent across all objects (white). All stimuli were presented in each of two horizontal (handle left, handle right) and vertical (right side up, upside down) orientations (see Table 1.1 for details of the stimulus set).

Table 1.1: Stimulus set used in Experiments 1 and 2.

Object         Height    Width
Teacup         8.7 cm    11.6 cm
Milk jug       10.2 cm   12.2 cm
Gravy boat     8.8 cm    15.5 cm
Watering can   9.9 cm    15.6 cm
Coffee mug     11.3 cm   11.1 cm
Teapot         8.8 cm    14.5 cm

Note. In Experiment 3, only the coffee mug was presented.

Stimuli were placed inside a wooden box (40 cm high × 30 cm wide × 25 cm deep) with a one-way mirror fitted on the front. The one-way mirror allowed the experimenter to place the stimulus inside the display box hidden from the participant's view. The mirror became transparent when a set of four light-emitting diode (LED) strips affixed inside the top of the box was illuminated, allowing participants to view the object inside. A piece of black felt affixed to the back of the display apparatus allowed the experimenter to remove and replace the object between trials without being detected (see Figure 2.2a and b). A custom-written E-Prime script (Psychology Software Tools, Pittsburgh, PA) was used to indicate the stimulus for each trial so that the experimenter could place the appropriate object, to control the onset of the lights in the display apparatus, and to record the button-press responses. An Applied Science Laboratories (ASL) Model H6 eye tracker (Applied Science Laboratories, Bedford, MA) was used to record eye movements. The eye position signal was sampled at 120 Hz. This is a monocular system, measuring the left eye only. The ASL unit tracks the line of gaze by measuring the position of the pupil in relation to the position of the corneal reflection (CR). Since the position of the CR remains constant, any rotation of the eye can be measured by the amount of separation between the CR and the pupil, which reflects a change in the point of gaze. A careful calibration was conducted at the beginning of each session. During calibration, participants were instructed to look at nine points (arranged in a 3 × 3 matrix) whose positions in the scene image were known. While the participant fixated each point, eye position was measured by the ASL system. A mapping between the two sets of points was then generated, allowing the system to estimate the participant's point of gaze in the scene for any frame. The calibration was evaluated up to three times during a single session to ensure that eye movements were being measured with a high level of accuracy, and re-calibrations were carried out whenever the fixation cross no longer accurately reflected the participant's point of gaze.
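For illustration only, the calibration mapping described above can be approximated with a simple least-squares fit from pupil-CR offsets to known scene positions. This sketch is an assumption for expository purposes: it is not the ASL system's actual algorithm, and the grid coordinates, map coefficients, and function names are invented.

```python
import numpy as np

# Known scene positions of the nine calibration points (degrees, 3 x 3 grid).
scene_pts = np.array([(x, y) for y in (-10.0, 0.0, 10.0)
                             for x in (-10.0, 0.0, 10.0)])

# Simulated pupil-minus-CR offsets recorded while fixating each point
# (synthetic stand-ins; in practice these come from the tracker).
rng = np.random.default_rng(0)
true_map = np.array([[2.0, 0.1], [-0.1, 2.1], [1.0, -2.0]])  # arbitrary
offsets = (scene_pts - true_map[2]) @ np.linalg.inv(true_map[:2])
offsets += rng.normal(scale=0.02, size=offsets.shape)

def fit_gaze_map(offsets, scene_pts):
    """Fit scene = [dx, dy, 1] @ A by least squares (A is 3 x 2)."""
    design = np.column_stack([offsets, np.ones(len(offsets))])
    A, *_ = np.linalg.lstsq(design, scene_pts, rcond=None)
    return A

A = fit_gaze_map(offsets, scene_pts)

def point_of_gaze(A, offset):
    """Map a new pupil-CR offset sample to an estimated scene position."""
    return np.append(offset, 1.0) @ A
```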
2.1.3 Design and Procedure

The experiment consisted of ten practice trials followed by one block of 144 experimental trials in which each of the six objects was presented twenty-four times. The order of stimulus presentation was randomized, and the factors of vertical orientation (right side up, upside down) and handle orientation (left, right) were fully balanced (36 presentations × 2 vertical orientations × 2 handle orientations = 144 trials).[5]

[5] Participants also completed a block of 48 trials after the experimental trials in which they were asked to purposefully make an eye movement to the object's handle prior to executing a response. The rationale for including such a block was to ensure that saccades directed to the handle could be distinguished from trials in the experimental condition in which no eye movement was made. It also ensured that the visual angle from the object's centre of mass to the handle was large enough that an eye movement made to the handle could be identified.

Participants were instructed to identify the vertical orientation of the stimulus (i.e., right side up or upside down) by making either a left or a right button press with the appropriate hand according to a distinct mapping condition. Mapping was counterbalanced across participants. The horizontal orientation of the object was irrelevant to the task; however, we did not draw the participants' attention to this. Emphasis was placed on responding as quickly as possible while maintaining accuracy. A trial began with an inter-trial interval of approximately four seconds, during which a red fixation point was centrally displayed on the mirror. Once the stimulus was placed inside the display box, the experimenter would say "Ready", at which time participants were required to stabilize their gaze by fixating on the red point. Participants were not given any information with respect to eye movement behaviour during the trial. They were also asked to refrain from blinking during a trial, since a blink would produce missing eye data. Once the experimenter triggered the start of the trial with a key press, a constant foreperiod of 1000 ms followed, after which the lights inside the display box were illuminated, allowing the participant to view the object inside. The stimulus remained in view until a response was made. Participants were not given feedback on response latencies or response errors.
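The fully balanced, randomized trial list lends itself to a short sketch. The Python snippet below (not the original E-Prime script; the labels and function names are our own) shows one way to generate the 144-trial design described above.

```python
import random

OBJECTS = ["teacup", "milk jug", "gravy boat",
           "watering can", "coffee mug", "teapot"]
VERTICAL = ["upright", "inverted"]
HORIZONTAL = ["handle left", "handle right"]

def make_trial_list(reps_per_cell=6, seed=None):
    """Every object x vertical x horizontal cell repeated equally often:
    6 objects x 2 x 2 x 6 reps = 144 trials, presented in random order,
    so each object appears 24 times."""
    trials = [
        {"object": o, "vertical": v, "horizontal": h}
        for o in OBJECTS for v in VERTICAL for h in HORIZONTAL
        for _ in range(reps_per_cell)
    ]
    random.Random(seed).shuffle(trials)
    return trials

trials = make_trial_list(seed=1)
assert len(trials) == 144
```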
2.1.4 Data Reduction

Gaze position data from the ASL eye tracker were saved during each trial and later imported into custom-written software for further analyses. Each trial was examined for the presence of a saccadic eye movement. When a saccade was present, markers were placed at its beginning and end, providing measures such as saccadic RT, saccade duration (ms), and the magnitude of the visual angle (degrees) traversed by the saccade. Because we were assessing potential eye movements to the object handles, horizontal saccades were of primary interest. The block of trials completed after the experimental block served as a control to ensure that when participants shifted their eyes voluntarily to the handle of the object, a distinct change in the horizontal coordinates could be identified. An assessment of the control data revealed that a saccade made to the handle of the object was readily identifiable according to a pre-defined change in amplitude and duration: a saccadic eye movement was characterized as a change in amplitude of at least 1.5 degrees completed in less than 50 milliseconds. Saccade duration was recorded as the time in milliseconds from the initiation of the eye movement to its termination (identified by a plateau in the amplitude). Saccadic eye movements were identified in all participants during the control block. In some trials, saccades were directed to the side opposite the handle, or to the spout of objects such as the watering can, gravy boat, and teapot. In addition, some saccades were made after the manual reaction time and may have resulted from participants having just completed a block of trials in which responses were made as quickly and accurately as possible without instructed eye movements. Reaction time data were recorded with the E-Prime script used to control the experimental apparatus and monitor the response buttons. Reaction times were recorded in milliseconds.
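The saccade-identification rule stated above (a horizontal gaze change of at least 1.5 degrees within 50 ms, on 120 Hz samples) can be sketched as a simple automated scan. The original markers were placed with custom software, so treat the following as a rough approximation, not the thesis's actual procedure; the reported onset precedes the jump by a few samples because a fixed window is scanned.

```python
SAMPLE_RATE_HZ = 120
MIN_AMPLITUDE_DEG = 1.5
MAX_DURATION_MS = 50

def find_horizontal_saccade(x_deg):
    """Scan horizontal gaze samples; return (onset, offset, amplitude) of
    the first window meeting the amplitude/duration criteria, else None."""
    max_samples = int(MAX_DURATION_MS / 1000 * SAMPLE_RATE_HZ)  # ~6 samples
    for start in range(len(x_deg) - max_samples):
        for end in range(start + 1, start + max_samples + 1):
            amp = x_deg[end] - x_deg[start]
            if abs(amp) >= MIN_AMPLITUDE_DEG:
                return start, end, amp
    return None

# Example: steady fixation, then a rightward jump of ~3 degrees.
trace = [0.0] * 20 + [1.5, 3.0, 3.1] + [3.1] * 20
print(find_horizontal_saccade(trace))  # -> (14, 20, 1.5)
```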
2.2 Results and Discussion

2.2.1 Reaction Time

Two participants were removed from the analysis because the bulk of their reaction times (RTs) exceeded 1000 ms. Errors and RTs more than two standard deviations from each participant's condition means were excluded from analysis: 3.1% of the experimental trials were removed as errors and 3.9% as outliers, leaving 93.0% of the raw data as correct-response trials. Condition means for correct-response RTs were calculated across all objects for each participant. These data were subjected to a mixed analysis of variance (ANOVA) with the between-subjects factor of Response Mapping (right-hand-upright/left-hand-inverted or left-hand-upright/right-hand-inverted [UR-DL or UL-DR, respectively]) and the within-subjects factors of Horizontal Orientation (left, right) and Response (left hand, right hand). The analysis revealed only a significant main effect of Horizontal Orientation, F(1, 8) = 6.613, p = .0331: responses when the handle was to the left (M = 634 ms) were faster than when the handle was to the right (M = 644 ms). No other main effects or interactions reached significance. Of particular interest, the interaction between Horizontal Orientation and Response (i.e., Tucker and Ellis's orientation effect) failed to reach significance, F(1, 8) < 1. As shown in Figure 2.3, there was no benefit of the handle orientation being congruent with the side of response.

Figure 2.3: Mean RT as a function of horizontal orientation and hand of response.

2.2.2 Eye Movements

Figure 2.4 shows examples of voluntary saccades to the object handle. These traces were obtained from trials performed during the post-experiment control block, in which participants made volitional saccades to the handle of the presented object. The grand average magnitude of eye movements in the horizontal direction was approximately 6-7 degrees across all conditions and participants, and the grand average saccade duration was approximately 42 ms. During these voluntary saccade trials, the grand mean saccadic RT across all conditions and participants was approximately 410 ms.

Figure 2.4: Examples of eye movement recordings in which a saccade was observed. Top and middle panels: saccades were directed to the right of fixation. Bottom panel: saccade was directed to the left of fixation.

Figure 2.5: Typical examples of eye movement recordings during the experimental block of trials. In contrast to the post-experiment block, no saccades were observed during these trials.

Figure 2.5 shows examples of eye movement recordings during the experimental block of trials. In stark contrast to the traces shown in Figure 2.4, no saccades were detectable during these experimental trials. An assessment of the eye movement recordings revealed no overt orienting eye movements during stimulus presentation. In a typical trial, participants began by fixating on the fixation light in the centre of the display; upon stimulus presentation, horizontal eye position remained stationary until the time a response was made. The absence of any eye movement in the horizontal direction was reflected in the recordings as a relatively stable flat line (see Figure 2.5). We inferred from this observation that the action-relevant feature of the objects (i.e., the handle) did not induce overt shifts of attention.[6]

2.2.3 Summary

The purpose of this first experiment was to examine whether the task-irrelevant handle orientation facilitated spatially compatible responses, and to identify whether overt orienting of the eyes to the graspable feature of the object occurred. While covert attention cannot be discerned from the eye recordings, what can be established is whether overt attention was captured by some element of the stimulus. Tucker and Ellis (1998) proposed that "certain action-related information—in this case the hand most suited to grasp the object—is represented automatically when the object is viewed in the peripersonal space" (p. 836). Eye movements made during the course of a trial were of particular interest because they are an observable indicator of where overt attention was being allocated.

[6] Three participants did shift their horizontal gaze to the graspable surface on a few trials. However, this accounted for only 2.4% of all trials in the experimental block. A comparison of the mean saccade duration on the few trials with an eye movement against the control block revealed a duration of 34 ms, a difference of about 8 ms. In addition, saccadic reaction times on these trials were longer (~517 ms) than those in the control block (~410 ms).
The absence of eye movements during the experimental block, in contrast to the control block, suggested that attention was not being overtly captured by the graspable surface of the presented object. Furthermore, our reaction time results did not support Tucker and Ellis's (1998) argument: there was no facilitation of reaction time when the irrelevant horizontal orientation was congruent with the response hand (see Figure 2.3).

3. EXPERIMENT 2

The orientation effect (Tucker & Ellis, 1998) was not reproduced in Experiment 1. We expected to find the orientation effect if, as Tucker and Ellis argued, the action-relevant graspable feature of the objects had an automatic influence on the motor response. This was despite the task instructions, which made the graspable feature (and its orientation) irrelevant to the task. However, one possible explanation for the findings of Experiment 1 is that participants were told that the goal of the task was to identify the vertical orientation of the presented stimuli. Intention may influence response selection such that it can override information in the environment that is detected by the actor. As a result, the horizontal orientation was rendered irrelevant to the required task, since it did not provide information necessary for identifying the vertical orientation of the stimulus. Taking this into consideration, if the horizontal orientation were made the relevant dimension to identify, it might then be coded as part of the stimulus. The expected result would be an orientation effect in which the left-right codes of the horizontal orientation are spatially mapped onto the left-right response set. Consequently, responding should be faster when the horizontal orientation and the response hand are compatible (left-left, right-right) than when they are incompatible (left-right, right-left). Experiment 2 was designed to determine whether the horizontal dimension, when made relevant to the task, could facilitate speeded responses when it was congruent with the side of response.

3.1 Methods

3.1.1 Participants

Twelve (nine females and three males; all right-handed; M age 24.5 years) new participants from the student population of the University of British Columbia took part in this experiment. All had normal or corrected-to-normal vision. The study was conducted in accordance with the ethical guidelines set by the University of British Columbia. Each participant provided informed consent and was compensated $10 for their time. All were naive to the purpose of the study.

3.1.2 Apparatus and Stimuli

The apparatus and stimulus set were identical to those of Experiment 1. All stimuli were presented in both horizontal (handle left, handle right) and vertical (right side up, upside down) orientations.

3.1.3 Design and Procedure

The design was similar to the preceding experiment, differing only in the instructions given to participants. Rather than identifying the vertical orientation, participants were asked to judge the horizontal orientation of the object using left and right button-press responses (right-hand-right-handle/left-hand-left-handle or right-hand-left-handle/left-hand-right-handle [RR-LL or RL-LR, respectively]). The response mapping conditions can be described as either "compatible" or "incompatible".
In the compatible mapping, participants made a right button press when the handle was oriented to the right and a left button press when the handle was to the left. In the incompatible mapping, a left response was made when the handle pointed to the right and a right response when the handle was to the left. Mapping instructions were counterbalanced across participants. Ten practice trials were followed by 144 experimental trials. The order of stimulus presentation was randomized and the factors of vertical orientation (right side up, upside down) and handle orientation (left, right) were fully balanced (36 presentations × 2 vertical orientations × 2 handle orientations = 144 trials).

3.2 Results and Discussion

3.2.1 Reaction Time

Errors and reaction times (RTs) more than two standard deviations from each participant's condition means were excluded from the analysis: 4.3% of the experimental trials were removed as errors and 3.3% as outliers, leaving 92.4% of the raw data as correct-response trials. Condition means for correct-response RTs were calculated across objects for each participant. These data were subjected to a mixed analysis of variance (ANOVA) with the between-subjects factor of Response Mapping (compatible or incompatible) and the within-subjects factors of Vertical Orientation (upright, inverted) and Response (left hand, right hand). Participants in the compatible (RR-LL) mapping group (M = 523 ms) appeared to make faster responses than those in the incompatible (RL-LR) mapping group (M = 572 ms); however, this between-subjects effect was not statistically significant, F(1, 8) = 1.22, p = .2955. No other effects or interactions reached significance (all Fs ≤ 1.0). Figure 3.1 shows the mean reaction times as a function of the horizontal orientation of the stimulus and the hand of response. The pattern is typical of a prototypical spatial compatibility effect: when the stimulus was horizontally oriented to the right, right-hand responses (M = 520 ms) were faster than left-hand responses (M = 581 ms), and when the stimulus was oriented to the left, left-hand responses (M = 526 ms) were faster than right-hand responses (M = 562 ms). An advantage for the "compatible" mapping was expected, which would have indicated that responses under the RR-LL (compatible) mapping were faster than responses under the RL-LR (incompatible) mapping. However, the pattern of results shown in Figure 3.1 is typically obtained from a repeated-measures design, in which participants perform under both mappings; in the present study, mapping was a between-subjects factor. One reason the mapping effect did not reach significance may be large between-subjects variability, since there were only six participants in each group.

Figure 3.1: Mean RT as a function of left-right handle orientation and hand of response.
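The trimming rule used in both experiments (drop error trials, then drop correct-trial RTs more than two standard deviations from each participant's condition mean) can be sketched as follows. The column names are assumptions, not the thesis's own variables.

```python
import pandas as pd

def trim_rts(df: pd.DataFrame) -> pd.DataFrame:
    """Assumed columns: participant, horizontal, response, rt, correct.
    Keep correct trials whose RT lies within +/- 2 SD of that
    participant's mean for the same condition cell."""
    correct = df[df["correct"]]
    grp = correct.groupby(["participant", "horizontal", "response"])["rt"]
    mean = grp.transform("mean")
    sd = grp.transform("std")
    keep = (correct["rt"] - mean).abs() <= 2 * sd
    return correct[keep]
```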
3.2.2 Eye Movements

The pattern of eye movements largely replicated that of Experiment 1: there was a general absence of eye movements during stimulus presentation. The same criterion used in Experiment 1 to define a saccade was applied to the current experiment. No orienting eye movements were detected, which suggests that the action-relevant property of the stimuli was not capturing overt attention. When responding to the horizontal orientation of the presented stimulus, participants maintained a relatively stable horizontal eye position (see Figure 3.2), with the exception of two participants, who made saccadic eye movements to the handle location on 56.6% of their combined experimental trials. A comparison of the eye movements made by these two participants with their eye movements in the control block (in which participants were asked to look at the handle prior to making a response) revealed no difference in saccadic RT: the mean onset of a saccade made during the experimental block was 331 ms across all conditions, while the saccadic RT during the control block was 336 ms. There was thus little difference between oculomotor behaviour in the control and experimental conditions (see Figure 3.3).

Figure 3.2: Examples of eye movement recordings in which no saccade was observed. These trials were typical of the experimental block.

Figure 3.3: Examples of eye movement recordings in which a saccade was made. (a) Saccade directed to the left-oriented handle of the stimulus during an experimental trial; (b) saccade directed to the left-oriented handle during a control trial; (c) saccade directed to the right-oriented handle during an experimental trial; (d) saccade directed to the right-oriented handle during a control trial.

3.2.3 Summary

The purpose of this experiment was to determine whether the horizontal (left-right) orientation can influence the execution of a response with a particular hand when the object's horizontal orientation is being identified. The premise was that the orientation effect found by Tucker and Ellis (1998) could be elicited when the horizontal orientation was made relevant to the task at hand. One reason for the failure to replicate the orientation effect in Experiment 1 may be that the handle was not being coded as part of the stimulus and that the goal of the task (identifying the vertical orientation of the stimulus) had a stronger influence on response selection. By making the horizontal orientation relevant to the task, it could now be coded as part of the stimulus and the effect might be revealed. Although there appeared to be a trend toward the orientation effect, the results were not statistically significant. As in Experiment 1, we monitored eye movements, and the results again showed no overt horizontal orienting movements, despite the change in instructions emphasizing the horizontal orientation of the objects. Taken together, the results of Experiments 1 and 2 suggest a potential influence of intention and of the relevant stimulus dimension on task performance. In the next study, we adopted a fully repeated-measures design to further examine the possible role of intention. Participants completed tasks similar to those outlined in the two previous experiments; however, they identified not only the vertical orientation of the stimuli within a session but also the horizontal orientation. Eye movement data were not collected in Experiment 3, since Experiments 1 and 2 had already established that saccadic eye movements do not appear necessary to carry out the task.

4. EXPERIMENT 3

Tucker and Ellis (1998) claimed that "information (actual action possibilities) must be present if the intentions one forms are to relate to the world" (p. 833).
This implies that irrespective of how one intends to act upon an object, that object still affords a particular set of actions that can be made toward it. It is thus surprising that Tucker and Ellis examined the influence of only one dimension (vertical orientation). One way that intention can be manipulated is to instruct participants to attend to different dimensions of the stimuli in separate blocks of trials. In the present experiment, participants identified the vertical object orientation, as in Tucker and Ellis's experiment, in one set of trials, and the horizontal object orientation in another. Rather than assigning participants to separate mapping conditions, as in Experiments 1 and 2, participants performed both mappings in separate trial blocks.

4.1 Methods

4.1.1 Participants

Nine (five females and four males; all right-handed; M age 23.1 years) members of the student population of the University of British Columbia participated in this experiment. All had normal or corrected-to-normal vision. The study was conducted in accordance with the ethical guidelines set by the University of British Columbia. Each participant provided informed consent prior to participation. Participants were naive as to the purpose of the experiment.

4.1.2 Apparatus and Stimuli

Participants sat in a height-adjustable chair at a table facing a white cardboard screen (see Figure 4.1a and b for an illustration of the setup). The screen had a cutout approximately 20.1 cm high by 17.8 cm wide. A chin rest was attached to the table at a height of 28 cm and was used to stabilize the viewing position at a distance of 58 cm. We used liquid-crystal goggles to limit the visual information available to participants during and between trials. Participants placed their left and right index fingers on two telegraph keys approximately 42 cm apart. Eye movements were not monitored in the present experiment. The display setup allowed participants to actually reach out and touch the cup (although they were not required to do so; see Future Directions).

Figure 4.1: (a) Experiment setup and apparatus; (b) setup from the participant's point of view.

The stimulus set for this experiment differed from that of Experiments 1 and 2. Rather than a variety of common household objects, the stimuli were simplified to six different types of cups, all white and varying in shape and size (see Figure 4.2). All stimuli were presented in each of two horizontal (handle left, handle right) and vertical (right side up, upside down) orientations.

Figure 4.2: Stimulus set used in Experiment 3.

4.1.3 Design and Procedure

Participants performed two blocks of trials for each Relevant Dimension condition. In the Vertical Relevant condition, participants judged the vertical orientation of the object, irrespective of its horizontal orientation. In the Horizontal Relevant condition, participants judged the horizontal orientation of the object, irrespective of its vertical orientation. For each relevant dimension, participants performed according to one of two mapping rules. When the vertical orientation was relevant, participants performed one block of trials under the UR-DL mapping and one block under the UL-DR mapping. When the horizontal orientation was relevant, participants performed one block under the RR-LL mapping and one under the RL-LR mapping. The order of presentation of the relevant dimension and mapping was alternated between subjects, as sketched below.
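A minimal sketch of this block structure follows, assuming a simple rotation scheme; the thesis states only that order was alternated between subjects, so the exact rotation is an assumption.

```python
# The four 120-trial blocks: each relevant dimension paired with its two
# mapping rules (UR-DL / UL-DR for vertical; RR-LL / RL-LR for horizontal).
BLOCKS = [
    ("vertical", "UR-DL"), ("vertical", "UL-DR"),
    ("horizontal", "RR-LL"), ("horizontal", "RL-LR"),
]

def block_order(participant_index):
    """Rotate the block sequence so consecutive participants start at
    different points in the cycle (illustrative assumption)."""
    k = participant_index % len(BLOCKS)
    return BLOCKS[k:] + BLOCKS[:k]
```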
Trials began with a constant foreperiod of 1250 ms, during which vision was occluded (goggles translucent). The liquid-crystal goggles then became transparent, allowing participants to view the presented stimulus. Once the participant made a response, the goggles closed, occluding vision and allowing the experimenter to swap the stimuli. Participants were given a block of 10 practice trials for each mapping condition prior to the experimental blocks. The order of stimulus presentation was randomized and the factors of vertical orientation (right side up, upside down) and handle orientation (left, right) were fully balanced (30 presentations × 2 vertical orientations × 2 handle orientations = 120 trials). Participants completed one block of 120 trials for each mapping by relevant dimension combination, for a total of 480 trials.

4.2 Results and Discussion

4.2.1 Reaction Time

Errors and reaction times (RTs) more than two standard deviations from each participant's condition means were excluded from the analysis: 1.6% of the experimental trials were removed as errors and 4.3% as outliers, leaving 94.1% of the raw data as correct-response trials. Condition means for correct-response RTs were calculated across all objects for each participant. These data were subjected to a repeated-measures ANOVA with the factors of Relevant Dimension (vertical, horizontal), Horizontal Orientation (left, right), Vertical Orientation (upright, inverted), and Response (left hand, right hand). The analysis revealed a main effect of Relevant Dimension, F(1, 8) = 75.215, p < .0001: participants were faster to identify the horizontal orientation of an object (M = 376 ms) than its vertical orientation (M = 466 ms). A significant two-way interaction between Horizontal Orientation and Response Hand was also evident, F(1, 8) = 11.687, p = .0091 (see Figure 4.3). Responses were facilitated whenever the handle orientation and response side were congruent: right-hand responses were significantly faster when the handle was oriented to the right (M = 405 ms) than to the left (M = 434 ms); conversely, left-hand responses were faster when the object was horizontally oriented to the left (M = 411 ms) than to the right (M = 434 ms).

Figure 4.3: Mean RT as a function of horizontal orientation and hand of response.

In addition, Relevant Dimension reliably interacted with both Horizontal Orientation and Response Hand, yielding a significant three-way interaction, F(1, 8) = 13.369, p = .0064. As shown in Figure 4.4a, when the horizontal orientation was the relevant dimension, the handle orientation influenced the hand executing the response. However, when the goal of the task was to identify the vertical orientation, the interaction between the orientation of the handle and the responding hand was not apparent (see Figure 4.4b). In other words, the horizontal orientation of the stimulus influenced the response only when it was the dimension relevant to the task.

Figure 4.4: Three-way interaction between relevant dimension, horizontal orientation, and hand of response. (a) When the horizontal dimension is relevant to the task; (b) when the vertical orientation is relevant.
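For readers who want to reproduce this style of analysis, the sketch below runs a fully repeated-measures ANOVA on synthetic per-cell mean RTs using statsmodels. The column names and data are stand-ins; the thesis does not specify the software used for its analysis.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Build one mean RT per participant per condition cell (9 x 2 x 2 x 2 x 2).
rng = np.random.default_rng(0)
rows = []
for subj in range(1, 10):                         # nine participants
    for rel in ("vertical", "horizontal"):        # relevant dimension
        for horiz in ("left", "right"):           # horizontal orientation
            for vert in ("upright", "inverted"):  # vertical orientation
                for resp in ("left", "right"):    # responding hand
                    rt = 421 + (45 if rel == "vertical" else -45)
                    rt += rng.normal(scale=20)    # noise stand-in
                    rows.append((subj, rel, horiz, vert, resp, rt))

cell_means = pd.DataFrame(
    rows, columns=["participant", "relevant_dim", "h_orient",
                   "v_orient", "response", "rt"])

res = AnovaRM(cell_means, depvar="rt", subject="participant",
              within=["relevant_dim", "h_orient",
                      "v_orient", "response"]).fit()
print(res.anova_table)                            # F and p per effect
```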
4.2.2 Summary

By having all participants complete each of the four response mapping conditions, a more robust and powerful test of the orientation effect was performed. In Experiments 1 and 2, participants were assigned to only one mapping condition, and it is possible that an orientation effect was not found because of between-subjects variability; one way to reduce this variability was to run a fully repeated-measures design. Experiment 3 also aimed to resolve the impact of intention on response selection. To manipulate intention, participants were instructed to identify either the vertical or the horizontal orientation of the stimulus in distinct blocks of trials. Such a design allowed a thorough assessment of the influence that the relevant and irrelevant dimensions had on RT. When participants attended to the vertical orientation of an object, Tucker and Ellis (1998) found strong effects of the irrelevant horizontal object orientation (specifically, the orientation of the action-relevant object handle) on the speed of response. Our results were not consistent with their finding. Despite a Horizontal Orientation × Response Hand interaction, the three-way interaction of Relevant Dimension × Horizontal Orientation × Response Hand indicated that the dimension being identified had an important influence on this interaction. When participants were asked to identify the horizontal orientation of the stimulus, the orientation of the stimulus's handle offered pertinent information with respect to the selection of the correct response; the handle was presumably coded as a salient stimulus feature, and its correspondence with the code for the response determined response performance. As Figure 4.5a and b show, reaction time was unaffected by the orientation of the object along the task-irrelevant dimension. When the horizontal orientation was relevant to the goal of the task (see Figure 4.6a), the congruency (orientation) effect between handle orientation and side of response was clearly evident. Vertical object orientation did not have the same impact even when it was the relevant dimension, but this is perhaps not surprising, as the stimulus and response dimensions were orthogonal to each other (i.e., a vertical stimulus dimension versus a horizontal response dimension).

Figure 4.5: Influence of the irrelevant dimension on the speed of response. (a) The influence of horizontal object orientation when identifying the vertical orientation; (b) the influence of vertical object orientation when identifying the horizontal orientation.

Figure 4.6: Influence of the relevant dimension on the speed of response. (a) The influence of horizontal object orientation when identifying the horizontal orientation; (b) the influence of vertical object orientation when identifying the vertical orientation.

5. GENERAL DISCUSSION

The present studies were motivated primarily by the work of Tucker and Ellis (1998), who argued that the action-relevant properties of objects automatically potentiate the action possibilities afforded by those properties.
We used protocols typical of stimulus-response compatibility paradigms, in which we examined whether the spatial congruence between the orientation of the action-relevant feature of an object and the hand most suited to perform the associated action influenced the speed of simple motor responses. Following the lead of Tucker and Ellis (1998), we presented participants with objects that had graspable handles (i.e., the "action-relevant feature"), and examined whether simple manual responses (discrete key presses) were facilitated when the handle was on the same side of space as the responding hand. We used actual objects, rather than photographs (cf. Tucker & Ellis, 1998), based on our intuition that if objects did indeed prime the actions they afforded, these effects would be more robust with real objects rather than their facsimiles.

A second source of motivation for these studies came from recent suggestions (e.g., Anderson et al., 2002; Lyons, 2001) that object asymmetries leading to attentional orienting or attentional biases, rather than object affordances, may have been the critical factor underlying Tucker and Ellis's (1998) observations of "action potentiation by seen objects." The possibility that attention orienting may play a role in Tucker and Ellis's orientation effect was based on earlier suggestions that spatial compatibility effects (or variants such as the Simon effect) could be brought about by the spatial correspondence between attention shifts and manual responses (e.g., Nicoletti & Umilta, 1989; Umilta & Nicoletti, 1990, 1992). Since the objects presented in the current studies (and particularly in Tucker and Ellis, 1998) were asymmetrical in nature (with a protruding handle), the asymmetrical feature, independent of whether or not it was action-relevant (i.e., graspable), could have resulted in attention orienting toward it. In this instance, spatial codes would be generated for both the stimulus and response sets, and response selection would be affected according to the degree of correspondence between the two. The outcome would be either a benefit or a detriment in the time to make a response.
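The logic of this coding account can be made concrete with a toy computation. The sketch below is purely illustrative: the baseline and effect magnitudes are arbitrary assumed values rather than fitted parameters, and the function name is our own.

    # Toy rendering (our construal; arbitrary assumed values) of the spatial
    # coding account: stimulus and response each receive a left/right code,
    # and response selection is sped by correspondence, slowed by conflict.
    BASELINE_MS = 420.0   # assumed neutral response time
    EFFECT_MS = 15.0      # assumed size of the correspondence benefit/cost

    def predicted_rt(stimulus_code: str, response_code: str) -> float:
        """Predict RT (ms) from the correspondence of the two spatial codes."""
        if stimulus_code == response_code:
            return BASELINE_MS - EFFECT_MS  # corresponding codes: benefit
        return BASELINE_MS + EFFECT_MS      # conflicting codes: detriment

    # A handle protruding to the right, with right- vs. left-hand key presses:
    print(predicted_rt("right", "right"))  # 405.0
    print(predicted_rt("right", "left"))   # 435.0

Crucially, nothing in this account refers to graspability: any salient asymmetry that generates a lateralized stimulus code would be expected to produce the same pattern.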
To that end, we monitored oculomotor behaviour during task performance in the present experiments to examine whether object identification was associated with overt attentional capture by the action-relevant feature of the object. (We recognize that this examination of overt orienting was only a starting point, as it is possible that attention could be oriented covertly, in the absence of eye movements.)

Experiment 1 served as a conceptual replication of Tucker and Ellis (1998). Participants identified the vertical orientation of the presented objects with left and right manual responses. The horizontal orientation of the objects (and their handles) was task irrelevant. The findings did not corroborate the orientation effect observed by Tucker and Ellis (1998). Reaction times were not influenced by the irrelevant horizontal object orientation. Furthermore, no overt orienting eye movements were detected during stimulus presentation.

Experiment 2 was intended to examine whether the horizontal orientation of the graspable surface of the objects could have an impact on responding if it were the task-relevant dimension. Participants identified the horizontal orientation of the stimulus object, with vertical orientation being task-irrelevant. The results did not show the expected reaction time advantage for the congruent spatial mapping. Although group differences in reaction time were in the expected direction, there may have been insufficient power in the between-subject design to yield a significant effect. Nevertheless, the results for our measure of overt orienting again failed to reveal consistent capture of the eyes. This was the case despite task instructions that now deemed the orientation of the object handles task relevant.

Taken together, the eye movement results (or lack thereof) of Experiments 1 and 2 indicate that overt orienting of attention to the action-relevant feature of the stimulus was not required to accomplish the tasks. From an object affordance perspective (e.g., Ellis & Tucker, 2000; Tucker & Ellis, 1998), this suggests that the action-relevant feature of an object does not result in overt attention orienting. From a perspective of stimulus feature coding (e.g., Anderson et al., 2002), this suggests that salient spatial asymmetries in stimulus objects also do not necessarily result in overt orienting. What we are not able to conclude at this point is whether covert shifts of attention play a role. This remains an issue for future work.

In Experiment 3, we shifted our focus toward a closer examination of the potential role of intention and the relevant stimulus dimension in task performance. Experiment 3 combined the task protocols of Experiments 1 and 2 into a single design. What was clearly demonstrated was that intention can modulate the purported priming effect described by Tucker and Ellis (1998). When the horizontal orientation was relevant to the selection of a response, the vertical orientation did not interfere. Moreover, an orientation effect emerged such that responses benefited from a correspondence between the horizontal orientation and the hand of response. Of primary interest, however, was the influence that the horizontal orientation would have when participants were instructed to identify the vertical orientation of the stimulus. No orientation effect was elicited. Taken as a whole, these findings highlight a strong role for goal representations and indicate that the automatic potentiation of actions associated with a particular object feature is not as robust as has been suggested.

Together, the present experiments provide evidence that calls into question the argument that motor responses are generated according to the action most suited (e.g., the hand most suited to grasp an object) for the visually perceived object. What these experiments also show is that visually asymmetrical stimuli do not necessarily induce an attentional bias that can be observed through oculomotor capture, as proposed by Lyons et al. (2000). Our speculation is that the orientation effect (Ellis & Tucker, 2000; Symes et al., 2005; Tucker & Ellis, 1998) is due to perceptual biases (perhaps an attentional bias involving covert, not overt, attention) brought about by the asymmetrical geometric shapes of the stimuli (Anderson et al., 2002), and not due to the affordance properties of the stimuli. When these asymmetrical properties are the relevant dimension of the task (Experiment 3), they are coded during the translation of the stimulus to the appropriate response. When they are not task-relevant, they do not form a necessary part of the stimulus code, which may explain the lack of robustness of the orientation effect. We should note that a recent study by Tucker and Ellis themselves (Symes et al., 2005) did not show a strong replication of the orientation effect.
Given our results, how can we reconcile our initial conclusions with findings from the neuroimaging literature (see Appendix A) showing that viewing a functional object (e.g., scissors, a hammer) can activate motor-related areas of the brain (e.g., Creem-Regehr & Lee, 2005; Grafton, Fadiga, Arbib, & Rizzolatti, 1997; Handy, Grafton, Shroff, Ketay, & Gazzaniga, 2003)? Based on our work, we do not and cannot dispute existing evidence suggesting the priming of neural networks by action-relevant objects (e.g., tools). In these studies, the action-relevant objects were typically presented or compared with other non-action-relevant objects, and the results indicate that the tools generated motor-related cortical activity. Based on our experiments, we are limiting our conclusion to the task context investigated by Tucker and Ellis (1998). Specifically, we are unable to support the proposal that an action-relevant feature within an action-relevant object (i.e., the handle of a tool) automatically primes afforded actions. The tool itself as a whole may prime a general motor response (e.g., activate premotor cortical areas, as suggested by Grafton et al., 1997) or draw attention away from a non-tool object (e.g., Handy et al., 2003), but we were not able to find evidence of a specific action-priming effect at the level suggested by Tucker and Ellis (1998).

There is one last caveat that we must also consider with our present studies. At this point, we have examined only a very constrained response context: simple key presses. However, an important factor that warrants investigation is how action intention can modify the means with which action-relevant properties are processed. For example, Bekkering and Neggers (2002) have demonstrated that the type of action intended (i.e., pointing or grasping) can alter the selective visual processing of action-relevant properties during a visual search task. In their study, participants were instructed to saccade to, and then point at or grasp, a target object embedded in an array of distractors; the target object was defined by orientation or colour. Eye movements were analyzed to indicate where spatial attention was allocated. Erroneous initial saccades were fewer in number when the target object was defined by orientation and the intended action was a grasp as opposed to a pointing movement. In contrast, erroneous initial saccades were equally likely when directed to a target object defined by orientation or colour. Bekkering and Neggers's (2002) findings demonstrate that different types of actions (e.g., key presses, pointing, grasping) can modify where attention is allocated and which object features capture attention. Converging support for the influence of action intention on selective attention is provided by Weir et al. (2003). These authors showed that when participants performed reaching and grasping actions toward different types of objects (e.g., a knob that required turning, or a switch that required pulling), other distractor objects interfered only when the actions they afforded were incongruent with what the participant was intending to do (see also Pavese & Buxbaum, 2002). Therefore, consideration of how the type of action intended (e.g., key press versus grasp) can influence the degree to which attention might be captured by a tool's action-relevant feature is warranted.
Based on Bekkering and Neggers's (2002) findings, we would predict that a grasping action toward an object should provide a more favourable condition for the object handle's affordance than a key press action. Since the graspable feature of an object supplies information necessary to physically interact with it, visual attention should be captured by those features. Our current efforts are now directed toward this issue.

REFERENCES

Anderson, S.J., Yamagishi, N., & Karavia, V. (2002). Attentional processes link perception and action. Proceedings of the Royal Society B: Biological Sciences, 269, 1225-1232.

Arbib, M.A. (1997). From visual affordances in monkey parietal cortex to hippocampo-parietal interactions underlying rat navigation. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 352, 1429-1436.

Bekkering, H., & Neggers, S.F. (2002). Visual search is modulated by action intentions. Psychological Science, 13, 4, 370-374.

Creem-Regehr, S.H., & Lee, J.N. (2005). Neural representations of graspable objects: are tools special? Cognitive Brain Research, 22, 457-469.

Derbyshire, N., Ellis, R., & Tucker, M. (2006). The potentiation of two components of the reach-to-grasp action during object categorisation in visual memory. Acta Psychologica, 122, 74-98.

Deubel, H., & Schneider, W.X. (1996). Saccade target selection and object recognition: Evidence for a common attentional mechanism. Vision Research, 36, 1827-1837.

Ellis, R., & Tucker, M. (2000). Micro-affordance: the potentiation of components of action by seen objects. British Journal of Psychology, 91, 451-471.

Gibson, J.J. (1966). The Senses Considered as Perceptual Systems. Houghton-Mifflin, Boston, MA.

Gibson, J.J. (1979). The Ecological Approach to Visual Perception. Houghton-Mifflin, Boston, MA.

Grafton, S.T., Fadiga, L., Arbib, M.A., & Rizzolatti, G. (1997). Premotor cortex activation during observation and naming of familiar tools. Neuroimage, 6, 231-236.

Grezes, J., Tucker, M., Armony, J., Ellis, R., & Passingham, R.E. (2003). Objects automatically potentiate action: an fMRI study of implicit processing. European Journal of Neuroscience, 17, 12, 2735-2740.

Handy, T.C., Grafton, S.T., Shroff, N.M., Ketay, S., & Gazzaniga, M.S. (2003). Graspable objects grab attention when the potential for action is recognized. Nature Neuroscience, 6, 4, 421-427.

Handy, T.C., Schaich Borg, J., Turk, D.J., Tipper, C.M., Grafton, S.T., & Gazzaniga, M.S. (2005). Placing a tool in the spotlight: spatial attention modulates visuomotor responses in cortex. Neuroimage, 26, 1, 266-276.

Jeannerod, M. (1994). The representing brain: neural correlates of motor intention and imagery. Behavioral and Brain Sciences, 17, 187-246.

Ludwig, C.J., & Gilchrist, I.D. (2002). Stimulus-driven and goal-driven control over visual selection. Journal of Experimental Psychology: Human Perception and Performance, 28, 4, 902-912.

Lyons, J., Weeks, D.J., & Chua, R. (2000). The influence of object orientation on speed of object identification: Affordance facilitation or cognitive coding? Journal of Sport and Exercise Psychology, 22 (Suppl.), S72.

Lyons, J.L. (2001). The influence of object orientation on speed of object identification: Affordance facilitation or cognitive coding? Unpublished doctoral dissertation, Simon Fraser University, British Columbia, Canada.

Michaels, C.F. (1988). S-R compatibility between response position and destination of apparent motion: Evidence of the detection of affordances. Journal of Experimental Psychology: Human Perception and Performance, 14, 231-240.
Murata, A., Fadiga, L., Fogassi, L., Gallese, V., Raos, V., & Rizzolatti, G. (1997). Object representation in the ventral premotor cortex (area F5) of the monkey. Journal of Neurophysiology, 78, 2226-2230.

Nicoletti, R., & Umilta, C. (1989). Splitting visual space with attention. Journal of Experimental Psychology: Human Perception and Performance, 15, 1, 164-169.

Pavese, A., & Buxbaum, L.J. (2002). Action matters: the role of action plans and object affordances in selection for action. Visual Cognition, 9, 4/5, 559-590.

Phillips, J.C., & Ward, R. (2002). S-R correspondence effects of irrelevant visual affordance: Time course and specificity of response activation. Visual Cognition, 9, 4/5, 540-558.

Posner, M. (1980). Orienting of attention. The Quarterly Journal of Experimental Psychology, 32, 3-25.

Proctor, R.W., & Reeve, T.G. (Eds.) (1990). Stimulus-response compatibility: An integrated perspective. Amsterdam: North-Holland.

Rizzolatti, G., Riggio, L., Dascola, I., & Umilta, C. (1987). Reorienting attention across the horizontal and vertical meridians: evidence in favor of a premotor theory of attention. Neuropsychologia, 25, 1A, 31-40.

Sheliga, B.M., Riggio, L., & Rizzolatti, G. (1994). Orienting of attention and eye movements. Experimental Brain Research, 98, 507-522.

Simon, J.R. (1990). The effects of an irrelevant directional cue on human information processing. In R.W. Proctor & T.G. Reeve (Eds.), Stimulus-response compatibility: An integrated perspective (pp. 89-116). Amsterdam: North-Holland.

Simon, J.R., & Rudell, A.P. (1967). Auditory S-R compatibility: The effect of an irrelevant cue on information processing. Journal of Applied Psychology, 51, 300-304.

Symes, E., Ellis, R., & Tucker, M. (2005). Dissociating object-based and space-based affordances. Visual Cognition, 12, 1337-1361.

Tucker, M., & Ellis, R. (1998). On the relations between seen objects and components of potential actions. Journal of Experimental Psychology: Human Perception and Performance, 24, 830-846.

Tucker, M., & Ellis, R. (2001). The potentiation of grasp types during visual object categorization. Visual Cognition, 8, 769-800.

Umilta, C., & Nicoletti, R. (1990). Spatial stimulus-response compatibility. In R.W. Proctor & T.G. Reeve (Eds.), Stimulus-response compatibility: An integrated perspective (pp. 89-116). Amsterdam: North-Holland.

Umilta, C., & Nicoletti, R. (1992). An integrated model of the Simon effect. In J. Alegria, D. Holender, J. Junca de Morais, & M. Radeau (Eds.), Analytic approaches to human cognition (pp. 331-350). Amsterdam: North-Holland.

Verfaellie, M., Bowers, D., & Heilman, K.M. (1988). Attentional factors in the occurrence of stimulus-response compatibility effects. Neuropsychologia, 26, 435-444.

Wallace, R.J. (1971). S-R compatibility and the idea of a response code. Journal of Experimental Psychology, 88, 354-360.

Weir, P.L., Weeks, D.J., Welsh, T.N., Elliott, D., Chua, R., Roy, E.A., & Lyons, J. (2003). Influence of terminal action requirements on action-centered distractor effects. Experimental Brain Research, 149, 207-213.

APPENDIX A

A. WHAT MAKES TOOLS SPECIAL?

A tool can be described as an object that is functionally specific for a particular task. Consider a cooking spatula consisting of an elongated handle with a flat, flexible metal head on the end of it.
If we grasp the spatula by the handle, it is likely to afford a scooping, lifting, or flipping type of action; however, there are less conventional ways to manipulate the same tool. Should something fall down the side of the stove out of our reach, that same spatula could be grasped by the flat end in order to slide its thin handle down a narrow opening. Despite the number of different ways an object can be manipulated, there is generally one explicit function with which it is associated. Creem-Regehr and Lee (2005) best describe the dissociation between tools and other sorts of objects, stating that even though a rock, for example, can be grasped, it does not "have a semantic identity tied to an action representation" as does, say, a hammer (p. 457). Neuroimaging studies support the idea that tool properties convey a particular set of actions.

A.1 Neuroimaging Research

A.1.1 Tool Viewing Activates Premotor Areas

It has been suggested that object potentiation requires the automatic activation of motor patterns/schemas (Arbib, 1997; Jeannerod, 1994). PET (positron emission tomography) studies have supported this assertion. Activation of F5 neurons in the premotor area has been observed in monkeys during the passive viewing of different shaped objects such as a plate, a ring, a cube, a cylinder, a cone, or a sphere (Murata, Fadiga, Fogassi, Gallese, Raos, & Rizzolatti, 1997). Grafton, Fadiga, Arbib, and Rizzolatti (1997) investigated whether a similar response could be observed in humans. Using PET to localize cortical activity, three different viewing conditions were tested: passive observation of a tool (e.g., scissors, hammer), silent identification of the tool, or silent identification of the action associated with the tool. Each of these conditions resulted in the activation of the left precentral sulcus, which is a component of the premotor cortex and, more specifically, a sector where arm/hand movements are represented. Furthermore, activation in this area increased when participants silently identified the action related to the tool. Grafton et al. (1997) suggested that "man-made tools form a special category of objects that are strongly associated with specific movements" (p. 234) and that the exclusive response for a particular tool is stored as a motor representation in the dorsal premotor cortex for future retrieval. Their statement is supported by Murata et al.'s (1997) findings in the passive observation condition.

In these previous studies, the objects presented as stimuli were familiar to the participants and their functions were recognizable (Murata et al., 1997; Grafton et al., 1997). One issue that has yet to be considered is whether a physically manipulable object that has no recognized function activates similar motor processing brain regions. Is it the graspable nature of a tool from which its motor representation evolves, or the semantic knowledge about the tool's function that triggers motor processes? Perhaps it is some combination of the two. Creem-Regehr and Lee (2005) aimed to resolve this issue by gauging neural activity using fMRI during the presentation of an image of a 3-D tool or a 3-D shape (see Creem-Regehr & Lee, 2005, for examples of stimuli). The objects in both categories were graspable but differed in their recognized function; a tool was characterized by its graspable surface (i.e., handle), while a shape did not have a specific graspable region but could still be clutched (e.g., a cylinder).
Participants were required to complete both a passive viewing and an imagined grasping task in an attempt to determine whether passive visual perception resulted in action potentiation and whether the intention to act on the object modified neural activation. Consistent with Grafton et al.'s (1997) finding, premotor regions were active during the passive viewing of tools. The posterior parietal area, which had not previously been identified as an area of activation, was also stimulated. These areas of activation were not, however, elicited during shape presentation. Creem-Regehr and Lee (2005) reasoned that if no overt action is necessary to activate areas of the brain that select the appropriate movement toward an object, then that object must strongly afford a particular action when merely viewed. Evidently, this is the case for tools, which have a recognized function, as opposed to shapes, which do not. When instructed to imagine grasping the tools or shapes, a temporal-parietal-frontal network became active. However, the site and magnitude of premotor and parietal cortex activation differed for the two stimulus arrays, and the middle temporal gyrus and fusiform gyrus were activated when viewing tool stimuli only. These findings imply that recognizing the specific function of a tool has an impact on its representation for action, which could be attributed to prior awareness of potential motor responses.

To date, the neuroimaging studies reviewed have explored the distinctive properties of tools that automatically prime specific motor representations. However, these studies have been limited to the simple act of viewing an object, naming the object or action, or imagining the action. But does the pattern of cortical activation vary when making a motor response? And how does the congruency between the action afforded by an object and the required response influence cortical activity? These were questions that Grezes, Tucker, Armony, Ellis, and Passingham (2003) addressed following a behavioural study in which participants made either a precision or a power grip response depending on whether the image of a real graspable object was natural (e.g., a cucumber) or manufactured (e.g., a screw) (Tucker & Ellis, 2001). They anticipated faster reaction times (RTs) when the type of grip afforded by the object matched the response type and slower RTs when the two did not correspond, despite the irrelevance of object size to response selection. Neural activity was measured by means of event-related fMRI, and the same stimulus set and procedures from the aforementioned study were used. As predicted, the behavioural results reflected faster motor responses for congruent trials in contrast to incongruent trials. Once more, activation was noted in the dorsal premotor cortex as well as the anterior parietal and inferior frontal cortex. When correlations between behavioural responses and cortical activation were assessed, Grezes et al. (2003) remarked that those participants with a larger RT difference between congruent and incongruent trials also showed an increase in activation in the aforementioned areas. They attributed this to a sort of competition between executing the desired response (intention) and the automatic responses that are primed by the visual input (implicit processing).

A.1.2 Tools Capture Visual Attention

In keeping with a competition model, Handy et al.
(2003, 2005) asked whether "the implicit recognition of action-related object attributes lead to an orienting of visual spatial attention to the locations of graspable objects" (p. 421). Their study design was based on the argument that objects "compete" for access to what is a limited cortical processing capacity, such that one's selective attention (and neural processing) is biased toward the most competitive objects within a given visual scene. Both of Handy et al.'s (2003, 2005) studies provide evidence that when visual attention is grabbed by an object, not only does the cortical response to that object increase, but there is a reduction in the magnitude of the cortical response to other, non-attended objects. In Handy et al.'s (2003) experiment, participants were asked to maintain fixation while one of four different object pairings was presented on either side of fixation: tool-right, tool-left, no tool, or both tools (see Handy et al., 2003, for the timing and sequence of events and the different trial types). The different categories of objects (tools vs. non-tools) were not disclosed to participants, and they were reminded that the objects had no bearing on the task at hand. Participants were instructed to base their button press response on the location of the target, which was a "square wave grating" superimposed on one of the two objects (see Fig. 2a). Event-related potentials elicited by the target were evaluated as a function of displaying the tool in the left or right visual field and the upper or lower visual field. Interactions were identified between the visual field in which the target was displayed and trial type. Spatial attention was oriented toward tools, but only when the tool was presented in the right visual hemifield or the upper visual field, although the latter reflected a stronger bias rather than an absence of orienting in the opposite visual field. Handy et al. (2003) reasoned that a right visual field advantage may be due to the recognition of action-related properties in the bilateral display. And given that there is a lower hemifield advantage in directing visually guided actions relative to the upper hemifield, this offered supportive evidence that visual field asymmetries exist with respect to visuomotor processing. Their argument that the visual field asymmetry can be explained in relation to a right visual field advantage in visuomotor processing was substantiated in an experiment using event-related fMRI, in which dorsal regions of the premotor and prefrontal cortices showed significantly greater activity during tool-right relative to tool-left trials. Handy et al.'s study thus demonstrates that tools can capture visual attention even when they are irrelevant to the task. This finding demonstrates that graspable tools can modulate where visual attention is directed.

APPENDIX B

There were also deviations in the recording larger than 1.5 degrees and lasting longer than 50 ms. These deviations can be attributed to drift, which was the result of the pupil contracting in response to the onset of the display lights. The possibility that these deviations were merely saccades can be countered by the fact that the duration of these drifts was longer than 100 ms and, as mentioned previously, saccades were defined as being less than 50 ms in duration. Drifts were apparent in the data of three participants.

Figure B1.1: Example of drift in eye movement recording. The arrows denote the start and end of the deviation.
Note: the duration over which the eye recording deviates is approximately 350 ms. 
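As an illustration of the classification criteria just described, the fragment below sketches how a detected deviation in the eye record could be labelled. It is a schematic reconstruction of the stated thresholds, not the software actually used for the analysis, and the function name is hypothetical:

    # Schematic sketch (not the thesis's analysis software) of the stated
    # criteria: deviations of the eye record larger than 1.5 degrees are
    # flagged, and a flagged deviation is treated as a saccade if shorter
    # than 50 ms, otherwise as drift.
    AMPLITUDE_THRESHOLD_DEG = 1.5
    SACCADE_MAX_DURATION_MS = 50.0

    def classify_deviation(amplitude_deg: float, duration_ms: float) -> str:
        """Label one deviation in the eye-position trace."""
        if amplitude_deg <= AMPLITUDE_THRESHOLD_DEG:
            return "below threshold"
        if duration_ms < SACCADE_MAX_DURATION_MS:
            return "saccade"   # brief, high-velocity shift
        return "drift"         # slow deviation (e.g., pupil contraction)

    # The deviation in Figure B1.1 (~350 ms) would accordingly be labelled drift:
    print(classify_deviation(2.0, 350.0))  # -> "drift"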
