"Arts, Faculty of"@en . "Psychology, Department of"@en . "DSpace"@en . "UBCV"@en . "Brennan, Allison Anne"@en . "2010-08-19T14:11:16Z"@en . "2010"@en . "Master of Arts - MA"@en . "University of British Columbia"@en . "Does person perception \u00E2\u0080\u0093 the impressions we form from watching others\u00E2\u0080\u0099 behavior \u00E2\u0080\u0093 hold clues to the mental states of people engaged in cognitive tasks? We investigate this with a two-phase method: in Phase 1 participants search on a computer screen (Experiment 1) or in an office (Experiment 2); in Phase 2 other participants rate their video-recorded behavior. We find ratings are sensitive to stable traits (search ability), temporary states (cognitive strategy), and environment (task difficulty). We also find that the visible behaviors critical to success vary between settings (e.g., eye movements are important in search on computer screens; head movements for search in an office). Positive emotions are linked to search success in both settings. These findings demonstrate that person perception can inform cognition beyond traditional measures of performance, and as such, offer great potential for studying cognition in natural settings with measures that are both rich and relatively unobtrusive."@en . "https://circle.library.ubc.ca/rest/handle/2429/27534?expand=metadata"@en . " PERSON PERCEPTION INFORMS UNDERSTANDING OF COGNITION DURING VISUAL SEARCH by Allison Anne Brennan B.A., Highest Distinction, The University of Virginia, 2008 A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF ARTS in The Faculty of Graduate Studies (Psychology) THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver) August 2010 \u00C2\u00A9 Allison Anne Brennan, 2010 \u00E2\u0080\u00A9 ii\u00E2\u0080\u00A9 ABSTRACT Does person perception \u00E2\u0080\u0093 the impressions we form from watching others\u00E2\u0080\u0099 behavior \u00E2\u0080\u0093 hold clues to the mental states of people engaged in cognitive tasks? We investigate this with a two-phase method: in Phase 1 participants search on a computer screen (Experiment 1) or in an office (Experiment 2); in Phase 2 other participants rate their video-recorded behavior. We find ratings are sensitive to stable traits (search ability), temporary states (cognitive strategy), and environment (task difficulty). We also find that the visible behaviors critical to success vary between settings (e.g., eye movements are important in search on computer screens; head movements for search in an office). Positive emotions are linked to search success in both settings. These findings demonstrate that person perception can inform cognition beyond traditional measures of performance, and as such, offer great potential for studying cognition in natural settings with measures that are both rich and relatively unobtrusive. \u00E2\u0080\u00A9 iii\u00E2\u0080\u00A9 PREFACE This research was part of collaboration between Allison A. Brennan, Dr. James T. Enns, Marcus R. Watson, and Dr. Alan Kingstone. All research was approved by the Behavioral Research Ethics Board (BREB) at the University of British Columbia (UBC BREB Number H09-01732). \u00E2\u0080\u00A9 iv\u00E2\u0080\u00A9 TABLE OF CONTENTS \u00E2\u0080\u00A9\u00E2\u0080\u00A9 Abstract......................................................................................................................... 
Preface
Table of Contents
List of Tables
List of Figures
Acknowledgements
Dedication
Introduction
Experiment 1: Computer Display Visual Search
    Method
        Phase 1: Visual Search
        Phase 2: Person Perception of Search
    Results
        Phase 1: Visual Search
        Phase 2: Person Perception of Search
    Discussion
Experiment 2: Natural Setting Visual Search
    Method
        Phase 1: Visual Search
        Phase 2: Person Perception of Search
    Results
        Phase 1: Visual Search
        Phase 2: Person Perception of Search
    Discussion
General Discussion
References
Appendices
    Appendix A: Mood Grid
    Appendix B: Rating Scales
    Appendix C: UBC BREB Certificate of Approval
    Appendix D: HSP Consent Form – Phase 1
    Appendix E: HSP Consent Form – Phase 2
    Appendix F: HSP Debriefing Form

LIST OF TABLES

Table 1: State, trait, and difficulty effect sizes
Table 2: Person perception and performance comparison (Experiment 1)
Table 3: Person perception rating correlations (Experiment 1)
Table 4: Person perception and performance comparison (Experiment 2)
Table 5: Person perception rating correlations (Experiment 2)

LIST OF FIGURES

Figure 1: Search display and targets in Experiment 1
Figure 2: Experiment 1 searcher photographs
Figure 3: Search performance (Experiment 1)
Figure 4: Person perception ratings (Experiment 1)
Figure 5: Search targets and display in Experiment 2
Figure 6: Experiment 2 searcher photographs
Figure 7: Search performance (Experiment 2)
Figure 8: Person perception ratings (Experiment 2)

ACKNOWLEDGEMENTS

I sincerely thank my supervisor, Dr. James T. Enns, for his continued assistance and sense of humor. I also owe many thanks to my committee members, Dr. Alan Kingstone and Dr. Todd C. Handy. I offer much appreciation to all members of the UBC Vision Lab, in particular to Nancy Deng, Maggie Dong, KL Ta, and Maya Whitehead for their excellent work on this project. Special thanks are owed to my parents Kevin and Anne, and sister Lesley for their unceasing love, interest, and encouragement. I also thank my partner, Stephan Nieweler, for teaching me that Vancouver has so much to offer. This research was supported with grants from the Natural Sciences and Engineering Research Council (Canada): a Discovery Research Grant to J. Enns and an Alexander Graham Bell Canada Graduate Scholarship (CGS) to A. Brennan.
DEDICATION

To my parents

INTRODUCTION

A typical experiment in cognitive psychology involves the presentation of a stimulus in a controlled laboratory setting, systematic variation of the conditions under which the stimulus is presented, and measurement of the participant's response in the form of button presses or brief vocal responses. Analysis of the relations between the stimulus input and behavioral output then usually leads to hypotheses about the inner workings of the mind. This approach has done much to further our understanding of the mind, leading to the wealth of knowledge cited in every cognition textbook. However, a growing dissatisfaction with the failure of these findings to generalize to everyday situations has led to an interest in studying "cognition in the wild" (Hutchins, 1995). Here we continued this trend by conducting two visual search experiments with an increased degree of naturalism, both in the stimuli presented as inputs and in the range of behaviors measured as outputs.

Much of the recent research aimed at greater naturalism has involved presenting stimulus inputs that more closely approximate the conditions of everyday life (Kingstone, Smilek, & Eastwood, 2008; Smilek, Weiheimer, Kwan, Reynolds, & Kingstone, 2009). For example, studies of attentional orienting have moved from arrows as directional cues, to eye gaze, pointing human hands, and oriented body postures (Nummenmaa & Calder, 2008); studies of face perception have moved from simple schematic line drawings to photos and video clips (Palermo & Rhodes, 2007); and studies of scene perception have moved from line drawings to photos of naturalistic scenes (Henderson, 2005). All of these studies are motivated by an interest in generalizing the results beyond the narrow stimulus conditions found in typical controlled laboratory settings.

In the realm of visual search, which is the focus of the present study, experiments have begun to move away from stimuli consisting of abstract geometric shapes and high-probability targets toward the more realistic situations confronting people who must search in the workplace, such as the search of displays used for baggage screening and medical imaging, where the probability of being presented with a target is much lower (Wolfe, Horowitz, Van Wert, Kenner, & Kibbi, 2007). We note, however, that these searches, although realistic in their own contexts, are still limited in comparison to the kinds of search that people undertake every day. In the typical day of most people, search occurs many times, whether it involves looking for car keys or locating a book on a shelf. In these most naturalistic visual search tasks we can all be considered lifetime experts (Levin & Beck, 2004; Beck, Levin, & Angelone, 2007). The first goal of the present study was therefore to extend the study of naturalistic visual search to these everyday conditions. We began in Experiment 1 with a study of search in a typical cognition laboratory setting, but our participants were asked to search for common objects depicted in photographs of a realistically messy office.
Anchored in these data, we then ventured out in Experiment 2 to the completely uncharted territory of visual search in the real-life setting of the messy office that was only depicted in Experiment 1.

Our study of everyday visual search is well grounded in relevant background, simply because visual search is one of the most studied tasks of cognitive psychology (Wolfe, 1998). As such, we already know much about search when it occurs within the relatively sterile conditions of the cognitive laboratory. Specifically, we know search performance is influenced by (1) individual differences in search ability (stable traits; Boot, Becic, & Kramer, 2009), (2) the adoption of cognitive strategies (temporary states; Smilek, Enns, Eastwood, & Merikle, 2006; Smilek, Dixon, & Merikle, 2006), and (3) environmental factors that make search easy or difficult, such as clutter (Smilek, Weiheimer, Kwan, Reynolds, & Kingstone, 2009) and visual eccentricity (Wolfe, 1998). Thus, a first goal of the present study was to establish that these three classes of factors are still relevant when participants are searching through depicted natural scenes (Experiment 1), before asking whether they are also relevant when participants are fully immersed in a real-world setting (Experiment 2). One important practical consideration addressed by Experiment 2 was whether a real-world experiment could be expected to generate data that were comparable in statistical power and effect magnitude, all within the 1-hour time span of a typical laboratory experiment.

A second goal of the present study was to extend the naturalism of visual search on the dimension of behavioral output. In our view, it is just as important to increase the richness of the behavioral output measured in a study as it is to increase the richness and variety of the stimuli and contexts that are the input to the mind. We were inspired in this emphasis by the work of Tunnell (1977), who described naturalistic experimentation as consisting of three distinct components: research in natural settings, the systematic manipulation of naturalistic variables, and the unobtrusive measurement of naturally occurring behaviors. In Tunnell's (1977) view, each of these three components must be present if the results of a cognitive experiment are to generalize to the complex conditions of everyday life.

As background to this emphasis on naturalistic output, we note that the measurement of behavior can be ordered, at least as a first approximation, along a continuum from first-person reports made by study participants, to second-person measurements made by experimenters, to third-person judgments made by unbiased observers of the original study participants. Modern-day psychophysicists, as well as personality and social psychologists, all share with the early introspectionist tradition in both philosophy and psychology a strong reliance on self-report as the primary measure of output (Baumeister, Vohs, & Funder, 2007). This is despite a large body of research demonstrating that people often err when reporting their internal states (e.g., attitudes, beliefs, emotions, traits, motives) and when introspecting about the reasons for their actions (Wilson, 2009).
In reaction to the unreliability of first-person reports made by even highly trained introspectionists, cognitive psychologists during the past century swung almost entirely to the measurement of objective performance. Here the second-person experimenter set up the conditions and equipment for the measurement of a response in a well-defined task, but was careful not to otherwise intervene in the recording. At present, the vast majority of research in cognitive psychology still utilizes these performance measures, recording the response time and accuracy of simple manual key presses or vocal responses. These measures are sometimes complemented and compared with the measurement of eye movements (Watson, Brennan, Kingstone, & Enns, in press; Hoffman & Subramaniam, 1995; Khurana & Kowler, 1987) and/or limb movements (Chapman & Goodale, 2008; Goodale, 1990) in order to further inform understanding of the mind's hidden mechanisms.

In this study we asked whether and how third-person perception – the impressions we form from watching others' behavior – could also be used to advance the measurement of the mind's output and enhance our understanding of inner cognition. The guiding assumption was that we are all experts in the art of person perception, with the usual expectation of individual variation in this ability that depends on both genetic makeup and life experience. Much research has now shown that ordinary people, as a group, are quite capable of accurately inferring the inner mental processes of others through their outward expression, including on such dimensions as transient emotions (Ekman, 1972) and more enduring personality traits (Naumann, Vazire, Rentfrow, & Gosling, 2009).

To investigate this, we developed a new two-phase research methodology, which we applied to naturalistic visual search. In Phase 1 of each experiment, participants engaged in a standard cognitive task while their performance was measured and they were videotaped. Phase 1 participants were aware that their behavior was being recorded, but in order to elicit behavior that was as natural as possible, we did not reveal to them the true purpose of the recordings until their behavior had been measured. In Phase 2 of each experiment, a second sample of participants from the same population used their everyday person perception skills to observe and interpret the behavior of Phase 1 participants. This allowed us to compare third-person measures (Phase 2) directly with traditional second-person measures (Phase 1). Specifically, we asked: (1) Are person perception ratings sensitive to the same factors as performance measures, i.e., traits, states, and environmental factors? (2) How does the sensitivity of person perception and performance measures compare with one another? and (3) Which person perception ratings best predict search performance?

Our goal in pursuing these questions was to determine whether cognitive studies could harness the person perception expertise of ordinary people, making judgments in real time, to inform understanding of cognition. Directly comparing the sensitivity of person perception and traditional performance measures to these variables will provide an objective way to determine the validity of the person perception measures as indices of the cognitive processes involved in visual search. Finally, in all of these analyses we kept a lookout for the possibility that measures of person perception during search would provide information about relevant cognitive processes that is undetectable when relying solely on performance measures.
EXPERIMENT 1: COMPUTER DISPLAY VISUAL SEARCH

In this experiment we implemented our two-phase research methodology for the first time: in Phase 1 participants searched in photographs of a cluttered office presented on a computer screen; in Phase 2 other participants used their everyday person perception abilities to observe and interpret the video-recorded behavior of the searchers. We explored the influence of individual differences in search ability (stable traits), cognitive strategy (temporary states), and search difficulty (environmental factors) on searchers' efficiency of target detection in Phase 1, and the sensitivity of person perception ratings to each of these factors in Phase 2.

Method

Phase 1: Visual Search

Participants. Twenty-eight undergraduates (21 female) received course credit for participating in a half-hour session. One participant was excluded because of equipment failure; two participants were excluded because they had already participated in another experiment where "active" and "passive" search strategy was manipulated; and finally, one participant was excluded at random in order to achieve an equal number (12) of participants in each of the two strategy instruction groups. All participants gave written informed consent and were treated in accordance with APA standards.

Stimuli and Apparatus. Search displays consisted of 80 photos of the cluttered office shown in Figure 1A, presented on a 24" iMac computer. SuperLab 4 software was used to randomize the trials and to collect the keyboard responses of the participants. The scene encompassed almost 40 degrees of visual angle horizontally and 32 degrees vertically, and the target objects were each less than 1 degree of visual angle in size. The built-in iSight webcam of the iMac recorded video (1280 x 1024 pixel resolution) of the participants' upper body and head continuously through each testing session.

One of ten common target objects (keys, tea, pill bottle, milk, chalk eraser, mug, hole punch, box, staple remover, tape), depicted in Figure 1B, was present in each photo in one of eight different locations. The locations for each object were determined by the orthogonal combination of four possible quadrants of visual space (relative to the center of the image) and two eccentricities (near the image center, in the periphery).

Figure 1: Search display and targets in Experiment 1. (A) The cluttered office photograph in which Phase 1 participants (Experiment 1) searched, (B) the ten target objects for which participants searched.
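As an aside, visual-angle values such as these follow from standard small-source geometry; a minimal sketch of the computation is below. The display size and viewing distance used here are illustrative assumptions, not measurements reported for this setup.

```python
import math

def visual_angle_deg(size_cm: float, distance_cm: float) -> float:
    """Visual angle (in degrees) subtended by an object of a given size
    viewed from a given distance: 2 * atan(size / (2 * distance))."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

# Illustrative values only: a 50 cm-wide image viewed from 60 cm subtends
# roughly 45 degrees; a 0.9 cm target at that distance subtends under 1 degree.
print(visual_angle_deg(50.0, 60.0))   # ~45.2 degrees
print(visual_angle_deg(0.9, 60.0))    # ~0.86 degrees
```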
Procedure. Each participant was tested in a single session, consisting of three randomized blocks of the 80 unique photos, for a total of 240 trials. To familiarize participants with the photos of the office and the target objects, each session began with participants viewing a screen with images of all ten target objects for 30 sec. Participants then viewed a photo of the cluttered office without any of the target objects in it, but overlaid with a white grid indicating the numbered quadrants of visual space. While viewing this image, participants were asked to note key features of the cluttered scene, so that they would be able to retain these divisions of space once the grid lines were removed for the search test.

Participants then practiced on 10 trials randomly selected from the larger set. Their instructions were to indicate the location of the target on each trial, as quickly and accurately as possible, by pressing one of four keys labeled 1-4, spatially mapped to correspond to the four quadrants on the screen. Participants were also told that the webcam would be turned on throughout the session, so that we would be able to determine where they were looking while searching. On each trial, participants first saw a photo of one of the 10 target objects, displayed in the centre of the screen for 2 sec. The office photo containing the target in one of its eight possible locations was then shown until participants responded with a key press or until 15 sec had elapsed (also recorded as an error).

The only factor that was systematically varied between participants was the instruction concerning cognitive strategy (Smilek, Enns, Eastwood, & Merikle, 2006). A random half of the participants were assigned to the active group and told, "The best strategy for this task, and the one that we want you to use in this study, is to be as active as possible as you look at the screen. The idea is to deliberately direct your attention to determine your response. Sometimes people find it difficult or strange to direct their attention – but we would like you to try your best. Try to respond as quickly and accurately as you can while using this strategy. Remember, it is very critical for this experiment that you actively search for the target."

The other half of participants were assigned to the passive group and told, "The best strategy for this task, and the one that we want you to use in this study, is to be as receptive as possible and let the target pop into your mind as you look at the screen. The idea is to let the display and your intuition determine your response. Sometimes people find it difficult or strange to tune into their gut feelings – but we would like you to try your best. Try to respond as quickly and accurately as you can while using this strategy. Remember, it is very critical for this experiment that you let the target just 'pop' into your mind."

After the search task, all participants indicated their mood (extremely unpleasant to extremely pleasant) and arousal (extremely low energy to extremely high energy) during the experiment, both using 9-point scales (see Appendix A).

Phase 2: Person Perception of Search

Participants. Sixty-nine undergraduates (52 female) received course credit for participating in a half-hour session. These participants were instructed to rate the visible behavior of the Phase 1 searchers along the following dimensions:

Ratings of global attribution. Energy Level: one group of nine participants responded to each video clip with a 1-6 rating, indicating how much physical energy was displayed by the person in the clip. Search Ability: another group of 11 participants responded to each video clip with a 1-6 rating, indicating how efficient they believed the person in the clip was at the search task in general.
Search Activity: another group of 10 participants responded to each video clip with a 1-4 rating, indicating how likely it was that this person had been instructed by the experimenter to search actively or passively (1 = confident of passive, 2 = guess passive, 3 = guess active, 4 = confident of active).

Ratings of local behavior. Head Movement: one group of 10 participants responded to each video clip with a 1-6 rating, indicating the relative frequency of head movements. Eye Movement: another group of 10 participants rated the relative frequency of eye movements in the same way.

Ratings of mindset. Interest: one group of 10 participants responded to each video clip with a 1-6 rating, indicating how interested (vs. bored) the searcher appeared. Positive Emotion: another group of nine participants rated the expression of emotion (1 = negative, 6 = positive) displayed by the searcher upon finding the target. See Appendix B for the complete rating scales used in Phase 2.

Stimuli and Apparatus. Eight of the 240 trials for each Phase 1 participant were selected and edited into video clips that began when the search display appeared on the screen and ended when participants made their response indicating a target object location (see Figure 2 for representative still photographs). The eight clips were selected by sampling orthogonally across three dimensions: (1) task familiarity: half of the clips were from the beginning of the session (trials 0-32), half from the end (trials 214-240); (2) target eccentricity: half involved targets located near the centre of the scene, half involved targets located in the periphery; (3) task difficulty: half of the clips were from trials defined as easy, half from trials defined as hard (based on the average search time of all Phase 1 participants). The video clips of hard searches (originally M = 4864 msec, SD = 558.34 msec) were shortened to include only the last four sec of the search, including the discovery of the target. Easy search clips were unedited (M = 2353 msec, SD = 281.43 msec).

Figure 2: Experiment 1 searcher photographs. Representative still images of Phase 1 participants (Experiment 1). Photographs are of actors posing as participants.

Procedure. Each participant was tested in a single session where they viewed a total of 192 video clips of Phase 1 participants in a random order (24 Phase 1 participants x 2 levels of familiarity x 2 levels of eccentricity x 2 levels of search difficulty). The video clips were presented on a 24" iMac computer, using SuperLab 4. In order to familiarize participants with the range of behavior depicted in the videos, participants practiced prior to being tested by rating 10 video clips selected at random from the entire set.

Results

Phase 1: Visual Search

Figure 3 shows the mean correct response time (RT) in panel A and mean proportion correct (PC) in panel B for the 10 target objects and two strategy groups among the Phase 1 participants. Panel C combines these two measures into an overall efficiency score by dividing correct RT by PC. This score is a convenient way to combine search time and accuracy when they are strongly related, as they are here, because it corrects RT values that are underestimated when participants are willing to trade errors for response time (Townsend & Ashby, 1983; Watson et al., in press).
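For readers who want this computation spelled out, the sketch below shows one way to derive the efficiency score from trial-level data. The data frame layout and column names are our own illustrative assumptions, not the software actually used in the analysis.

```python
import pandas as pd

# Hypothetical trial-level records: response time in ms and response accuracy.
trials = pd.DataFrame({
    "participant": [1, 1, 1, 1, 2, 2, 2, 2],
    "rt_ms":       [2100, 2400, 5200, 4800, 1900, 2200, 6100, 15000],
    "correct":     [1, 1, 1, 0, 1, 1, 1, 0],
})

def efficiency_score(group: pd.DataFrame) -> float:
    """Mean correct RT divided by proportion correct (Townsend & Ashby, 1983).

    Dividing by PC inflates the score for participants who trade accuracy for
    speed, correcting correct-RT values that would otherwise be underestimated.
    """
    mean_correct_rt = group.loc[group["correct"] == 1, "rt_ms"].mean()
    return mean_correct_rt / group["correct"].mean()

# One efficiency score per participant; higher scores = less efficient search.
print(trials.groupby("participant").apply(efficiency_score))
```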
The data shown in Figure 3C indicate two main findings. First, some targets were generally more difficult to find than others, and second, participants in the active instruction group were generally able to search more efficiently than those in the passive group.

Search efficiency scores were examined in several other analyses in order to test the effects of target (1-10), block (1-3), quadrant (upper-left, upper-right, lower-right, lower-left), and eccentricity (near, far). Note that these factors could not all be examined simultaneously in a single analysis because response errors and the random selection of conditions across trials often left missing data in the lowest-level cells of the complete design. Nonetheless, each of these factors had a significant main effect on search efficiency, including significant differences among the 10 target objects (F(9, 26) = 87.75, p < .001), a significant improvement in efficiency over trial block (mean reduction of 1225 ms from block 1 to block 3, F(2, 52) = 82.22, p < .001), greater efficiency in the lower vs. the upper visual field (mean difference of 557 ms, F(1, 26) = 8.02, p < .01) with no difference in the right vs. left visual field (mean difference of 60 ms, F(1, 26) < 1.0), and greater efficiency when targets were near vs. far (mean difference of 842 ms, F(1, 26) = 27.41, p < .001).

In order to coalesce a sufficient amount of data in each cell so that factor interactions could be examined, we simplified the structure so that each variable was reduced to two levels. The 10 target objects were grouped into the two clusters readily apparent in Figure 3D, with seven of the objects grouped into an easy condition (tea, pill bottle, milk, chalk eraser, mug, box, tape; mean score = 2314) and three objects grouped into a hard condition (keys, hole punch, staple remover; mean score = 5628). The three levels of block were subdivided into two levels (first vs. second half of trials). Because there were no reliable left vs. right differences among the display quadrant locations, they were analyzed further only in terms of visual field (upper, lower). Finally, the eccentricity factor remained as it was (near vs. far).

These four repeated-measures factors were then combined with the between-group factor of strategy (active, passive) in a mixed-design analysis of variance (ANOVA) of the efficiency scores, as shown in Figure 3D. This analysis indicated a significant main effect of difficulty, F(1,26) = 220.24, p < .001, a main effect of strategy (active searchers were more efficient than passive searchers by 740 ms on average, F(1,26) = 5.69, p < .03), and a significant strategy x difficulty interaction (active searchers were especially more efficient than passive searchers on the harder-to-find targets, F(1,26) = 4.97, p < .04).

Figure 3: Search performance (Experiment 1). (A) Mean correct response time (RT) and (B) mean proportion correct (PC) for the 10 target objects and two strategy groups among the Phase 1 (Experiment 1) participants. (C) Overall efficiency score (correct RT divided by PC). (D) Mean overall efficiency scores in Phase 1 (Experiment 1) plotted as a function of strategy (active, passive) and difficulty (easy, hard). Error bars are +/- one standard error of the mean.
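To make the structure of this mixed-design analysis concrete, here is a simplified two-factor sketch (one within-subject factor, one between-group factor) using the pingouin library. The data are simulated for illustration only and do not reproduce the reported values.

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)

# Simulated long-format data: 24 searchers x 2 difficulty levels, with
# strategy as a between-subjects grouping (all values illustrative).
df = pd.DataFrame({
    "participant": np.repeat(np.arange(24), 2),
    "strategy":    np.repeat(["active"] * 12 + ["passive"] * 12, 2),
    "difficulty":  np.tile(["easy", "hard"], 24),
})
df["efficiency"] = (
    np.where(df["difficulty"] == "hard", 5600, 2300)   # difficulty effect
    + np.where(df["strategy"] == "passive", 400, 0)    # strategy effect
    + rng.normal(0, 300, len(df))                      # noise
)

# Mixed-design ANOVA: difficulty within subjects, strategy between groups.
aov = pg.mixed_anova(data=df, dv="efficiency", within="difficulty",
                     subject="participant", between="strategy")
print(aov[["Source", "F", "p-unc", "np2"]])
```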
No other interactions involving search strategy were significant (all ps > .10), although each of the main effects involving experimental factors was significant in this analysis, as described in the previous paragraph (p < .05 for block, target difficulty, visual field, eccentricity). The effect sizes for the factors of state, trait, and difficulty are shown in Table 1.

Table 1: State, trait, and difficulty effect sizes. Mean differences, standard errors, and two effect size measures (Cohen's d and partial eta-squared) for the effects of strategy, ability, and difficulty in Phase 1 of Experiments 1 and 2.

                                    Strategy   Ability   Difficulty
Experiment 1: Computer Screen
  Mean difference (milliseconds)    771        1146      3314
  Pooled standard error             296        286       172
  Cohen's d                         .373       .566      2.59
  Partial eta-sq                    .380       .575      .951
Experiment 2: Real Office
  Mean difference (seconds)         1.84       3.34      4.26
  Pooled standard error             0.746      0.699     .632
  Cohen's d                         .468       .906      1.25
  Partial eta-sq                    .234       .499      .643

Note. Cohen's d = (large mean - small mean) / pooled standard deviation; partial eta-sq = SS_effect / (SS_effect + SS_error).
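The two effect size measures defined in the note can be computed as in the generic sketch below; this is an illustration of the formulas, not the original analysis script, and the example values are made up.

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d: (larger mean - smaller mean) / pooled standard deviation."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    pooled_var = (((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                  / (len(a) + len(b) - 2))
    return abs(a.mean() - b.mean()) / np.sqrt(pooled_var)

def partial_eta_squared(ss_effect, ss_error):
    """Partial eta-squared: SS_effect / (SS_effect + SS_error)."""
    return ss_effect / (ss_effect + ss_error)

# Example with made-up efficiency scores (ms) for two groups:
print(cohens_d([2100, 2500, 2300], [3000, 3400, 3100]))
print(partial_eta_squared(ss_effect=12.5, ss_error=20.3))
```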
Phase 2: Person Perception of Search

The mean ratings of the video clips on each of the dimensions generally yielded high inter-rater agreement among Phase 2 participants (Cronbach's alpha for rated ability = .672, activity = .920, energy = .882, head = .981, eye = .913, emotion = .912, interest = .881). In what follows, we will first describe the experimental factors that each rating dimension was sensitive to, and compare their sensitivity to those of traditional performance measures. Next, we will present analyses examining which ratings best predict search performance, before turning to a comparison of the ability of first- and third-person measures to account for variance in the performance data of Phase 1.

Are person perception ratings sensitive to search traits, states, and difficulty?

The individual rating scales were examined with regard to whether they were sensitive to (1) individual differences in search ability (traits), (2) adopted search strategy (states), and (3) the environmental factor of target difficulty. In order to examine the individual differences variable independent of the strategy variable, participants in each strategy group were coded as high or low in ability, based on a median split of their search efficiency scores. This permitted repeated-measures ANOVAs to be conducted on each rating dimension examining the influence of ability (low, high), strategy (active, passive), and search difficulty (easy, hard), with each rater in Phase 2 contributing 24 scores to each cell in this design.

Figure 4 shows the mean ratings for each of the seven rated dimensions, as a function of search ability (low, high) and search strategy (active, passive). The task difficulty factor is not shown, to simplify the presentation. The dimensions are grouped for convenience into ratings that assess behavior at a relatively global level (ability, activity, energy), at a more local level (head movements, eye movements), and mindset (emotion, interest).

Figure 4: Person perception ratings (Experiment 1). Mean ratings for each of seven rated dimensions in Phase 2 (Experiment 1): Global Attribution (ability, activity, and energy), Local Behavior (head and eye movement), and Mindset (emotion and interest), as a function of search ability (low, high) and search strategy (active, passive). Error bars are +/- one standard error of the mean.

Ratings of global attribution. Phase 2 participants were most sensitive to the factors of ability, strategy, and difficulty when they were asked to rate how active searchers were. Ratings of overall activity were sensitive to all three factors (ability: F(1,9) = 59.89, p < .001; strategy: F(1,9) = 69.65, p < .001; and difficulty: F(1,9) = 5.86, p < .04), with greater levels of rated activity assigned to higher ability searchers, to active searchers, and to search in the more difficult conditions. Ratings of ability were less sensitive to the same three factors, showing only a marginally significant sensitivity to the strategy and difficulty factors (ability: F(1,10) = 1.72; strategy: F(1,10) = 4.68, p < .06; and difficulty: F(1,10) = 3.39, p < .10). Ratings of energy were sensitive only to the ability factor (ability: F(1,9) = 17.44, p < .01; strategy: F(1,9) = 1.80; and difficulty: F(1,9) < 1).

Ratings of local behavior. Ratings of head movement frequency were sensitive to all three factors (ability: F(1,8) = 214.74, p < .001; strategy: F(1,8) = 17.55, p < .01; and difficulty: F(1,8) = 132.67, p < .001). Ratings of eye movement frequency were sensitive to the factors of ability and strategy (ability: F(1,9) = 75.26, p < .001; strategy: F(1,9) = 17.25, p < .01; and difficulty: F(1,9) = 2.60).

Ratings of mindset. Ratings of searchers' emotion upon detection of the target were sensitive to all three factors (ability: F(1,9) = 60.60, p < .001; strategy: F(1,9) = 22.59, p < .01; and difficulty: F(1,9) = 16.12, p < .01). Ratings of interest were sensitive to the factors of ability and strategy (ability: F(1,9) = 20.22, p < .001; strategy: F(1,9) = 33.90, p < .01; and difficulty: F(1,9) < 1.0).

How do person perception ratings and performance measures compare in sensitivity?

Given that many of the person perception ratings were sensitive to search ability, strategy, and difficulty, it is important to address how person perception ratings compare in their sensitivity to traditional measures of response time and accuracy. The following analysis shows that some ratings approximated the sensitivity of response time and accuracy while others did not.
Table 2 compares the third-person ratings directly with the performance measure of efficiency (mean correct RT / PC) in their sensitivity to the influences of ability and strategy on visual search. These analyses were conducted as multiple regression/correlation analyses, where for each measure we examined the simple correlation coefficients for the orthogonal predictors of ability (high, low) and strategy (active, passive). The unit of prediction was the mean efficiency score for each of the 24 participants in the hard search conditions, where individual differences in the performance of the searchers were the greatest.

Table 2: Person perception and performance comparison (Experiment 1). Multiple regression analyses of individual differences in ability (based on a median split of search efficiency in each strategy group) and strategy (based on the random assignment of participants to either an active or a passive strategy condition) as predicted by performance (RT / PC) and by the seven third-person ratings in Phase 2 (Experiment 1). The data include all 24 searchers and are taken from trials in the hard search condition, where individual differences were the greatest.

Factor                      Correlation   t-value   p       R      R2
Phase 1 Performance: Response Time / Proportion Correct
  Ability                   .684          5.58      < .01   .827   .685
  Strategy                  .465          3.80      < .01
Phase 2 Global Behavior:
Ability Ratings
  Ability                   -.016         0.01      > .25   .027   .001
  Strategy                  .022          0.02      > .25
Activity Ratings
  Ability                   -.581         3.72      < .01   .698   .488
  Strategy                  -.387         2.48      < .05
Energy Ratings
  Ability                   -.464         2.40      < .03   .465   .216
  Strategy                  -.029         0.15      > .25
Phase 2 Local Behavior:
Head Movement Ratings
  Ability                   -.296         1.42      > .10   .300   .090
  Strategy                  -.050         0.24      > .25
Eye Movement Ratings
  Ability                   -.332         1.82      > .10   .548   .300
  Strategy                  -.436         2.39      < .03
Phase 2 Mindset Attributions:
Emotion Ratings
  Ability                   -.429         2.41      < .03   .576   .332
  Strategy                  -.384         2.15      < .04
Interest Ratings
  Ability                   -.498         2.78      < .05   .571   .326
  Strategy                  -.279         1.56      < .10

Table 2 shows that this analysis for the performance measure of search efficiency accounted for 68.5% of the individual differences in search performance, and that the efficiency measure was significantly sensitive to individual differences arising from both ability, r = .684, t(21) = 5.58, p < .001, and strategy, r = .465, t(21) = 3.80, p < .001. By way of comparison, among the global attribution ratings, activity was the most sensitive rating scale, accounting for 48.8% of the variance and also showing independent sensitivity to ability, r = .581, t(21) = 3.72, p < .01, and strategy, r = .387, t(21) = 2.48, p < .02. Ability ratings and energy ratings were both less sensitive, accounting for 27.7% and 21.8% of the variance, respectively, with neither showing a significant specific sensitivity to either ability or strategy. Among the local behavioral ratings, frequency of eye movements (30.0% of variance explained) showed greater sensitivity to individual differences in performance than frequency of head movements (9.0% of variance explained). Finally, the two ratings of mindset also showed some sensitivity, with variance shared between efficiency scores and interest ratings equal to 28.2% and between efficiency scores and emotion ratings equal to 33.2%.
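A sketch of the regression logic behind Table 2 follows. The simulated data and coding scheme are illustrative assumptions; the actual analyses used the measured ratings and efficiency scores.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)

# 24 hypothetical searchers with orthogonal binary codes for ability
# (median split: 0 = low, 1 = high) and strategy (0 = passive, 1 = active).
df = pd.DataFrame({
    "ability":  np.repeat([0, 1], 12),
    "strategy": np.tile([0, 1], 12),
})
# One outcome measure per searcher, e.g., a mean third-person rating.
df["measure"] = 3.0 + 0.8 * df["ability"] + 0.5 * df["strategy"] + rng.normal(0, 0.5, 24)

# Regress the measure on both orthogonal predictors at once; R-squared gives
# the variance jointly accounted for, and each t-value tests one factor's
# independent contribution (cf. the Correlation/t-value columns of Table 2).
fit = sm.OLS(df["measure"], sm.add_constant(df[["ability", "strategy"]])).fit()
print(fit.rsquared)
print(fit.tvalues, fit.pvalues)
```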
Which ratings best predict search performance?

Finally, it is important to address which of the rated dimensions, either alone or in combination, do the best job of accounting for variance in the performance data of Phase 1, which we do in the following analyses. Table 3 shows the cross-correlations for the seven different rating scales tested in this experiment. Clearly, there is much overlap in the information contained in these various rating scales, with, for example, activity and eye movement frequency having much in common (r = .773) and emotion and interest sharing a great deal of variance (r = .807). In contrast, eye movement and head movement frequency are relatively uncorrelated (r = .119).

Table 3: Person perception rating correlations (Experiment 1). Correlations among the seven third-person rating scales in Phase 2 of Experiment 1.

               Activity   Energy   Head Movement   Eye Movement   Emotion   Interest
Ability        .460       .643     .397            .161           .683      .694
Activity                  .653     .421            .773           .566      .667
Energy                             .509            .285           .680      .781
Head movement                                      .119           .790      .664
Eye movement                                                      .197      .320
Emotion                                                                     .843

In an effort to see which of these rating scales contributed uniquely to the individual differences in search efficiency, we entered all seven of the ratings as predictors in a simultaneous multiple regression model in which the efficiency scores of the 24 searchers represented the outcome variable. The full model involving all seven ratings yielded an R2 value of .636, F(7, 16) = 3.99, p < .01. By systematically removing ratings that contributed least to the total variance explained, as indicated by their partial coefficients, we found that a reduced model involving only two of the ratings, frequency of eye movements and emotion upon finding the target, still accounted for a similar amount of variance, R2 = .608, F(2, 21) = 16.31, p < .001. The partial coefficients for both of these ratings were significant (eyes: t(21) = 2.78, p < .01; emotion: t(21) = 4.35, p < .001), indicating that each contributed significantly as a predictor beyond the simple correlation involving only one of the predictors.
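This model-reduction step can be sketched as follows, again with simulated stand-ins for the seven mean ratings. Our loop stops at two predictors purely for illustration; the actual criterion was that the explained variance remained similar as predictors were removed.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Simulated mean ratings for 24 searchers on the seven scales (illustrative).
ratings = pd.DataFrame(
    rng.normal(3.5, 1.0, size=(24, 7)),
    columns=["ability", "activity", "energy", "head", "eye", "emotion", "interest"],
)
# Simulated efficiency scores driven mainly by eye movements and emotion.
efficiency = (6000 - 400 * ratings["eye"] - 350 * ratings["emotion"]
              + rng.normal(0, 300, 24))

# Backward elimination: fit the full seven-predictor model, then repeatedly
# drop the predictor with the smallest partial contribution (|t| value).
predictors = list(ratings.columns)
while len(predictors) > 2:
    fit = sm.OLS(efficiency, sm.add_constant(ratings[predictors])).fit()
    predictors.remove(fit.tvalues.drop("const").abs().idxmin())

reduced = sm.OLS(efficiency, sm.add_constant(ratings[predictors])).fit()
print(predictors, reduced.rsquared)
```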
First, with regard to search difficulty, some target objects were generally more difficult to find than other target objects, regardless of where in the display they were located and regardless of which cognitive strategy participants had been instructed to use. Second, the random assignment of participants to either an active or a passive cognitive strategy had a systematic influence on search efficiency, with active searchers being generally faster and more accurate. Third, there were sizable individual differences in search ability even within each strategy group, such that a median split of the participants in each group resulted in differences that were comparable in magnitude to the effects of adopting a cognitive strategy. These three findings in Phase 1 therefore set the stage for our analysis of person perception in Phase 2. Person perception is sensitive to the factors influencing search. The main finding in Phase 2 was that some third-person ratings of search behavior were almost as sensitive as the performance measures in Phase 1 to the performance factors of ability, strategy, and difficulty. Of the three global rating scales we tested, ratings of searcher \u00E2\u0080\u009Cactivity\u00E2\u0080\u009D were most acutely sensitive to all three factors. In comparison, ratings of searcher \u00E2\u0080\u009Cenergy\u00E2\u0080\u009D showed significant sensitivity only to the ability factor; ratings of \u00E2\u0080\u009Cability\u00E2\u0080\u009D showed marginal \u00E2\u0080\u00A9 24\u00E2\u0080\u00A9 sensitivity to strategy and difficulty. When it came to judgments of local behavior, ratings of head movement frequency were sensitive to all three factors; ratings of eye movement frequency were sensitive to ability and strategy. With regard to third-person judgments of mindset, rating of searchers\u00E2\u0080\u0099 emotional expressions were sensitive to all three factors and ratings of interest were sensitive to ability and strategy. These findings clearly demonstrate that naive third-person observers, meaning observers who are unaware of what the searchers are seeing and unaware of the experimental factors that are being manipulated, are influenced in their judgments of the visible behavior of searchers by the same factors that are influencing the searchers. Not all person perception ratings are equally sensitive to search performance. A head-to-head-comparison between our combined performance measure of search efficiency (correct RT / pc) and the seven different third-person rating scales (see Table 2) indicated that some scales were clearly more sensitive than others. In particular, ratings of searcher \u00E2\u0080\u009Cactivity\u00E2\u0080\u009D were the most sensitive, followed by ratings of eye movement frequency, and then by ratings of positive emotional expression and interest. Because some of these ratings scales were also highly interrelated (they correlated with one another) we used a simultaneous regression procedure to help determine which rating scales contained the greatest amount of unique information with regard to search performance. This analysis indicated that over 60% of all the variance in the performance of individual searchers could be accounted for by just two rated variables: the frequency of eye movements and expression of positive emotion in the faces of searchers. These findings add much to our understanding of why search performance varies between individuals, both as a stable trait and as a function of cognitive strategy. 
Take as a case in point, the relative contributions of head and eye movements to \u00E2\u0080\u00A9 25\u00E2\u0080\u00A9 successful search, which in this experiment involved looking for common objects in a naturalistic scene presented on a large 24\u00E2\u0080\u009D computer screen. A priori, it is not easy to predict whether head or eye movements will be more important to search performance under these conditions. Clearly, because of the limits of visual acuity under these conditions, some amount of eye and/or head movement will be required before the target can be located and identified. That is, the 2-3 degree foveal region of the eye must make contact with the target object, which is itself less than 1 degree of visual angle in size, in a scene that encompasses almost 40 degrees of visual angle horizontally and 32 degrees vertically. So some amount of gaze re-orienting is a necessity for successful search. But the mere observation that some combination of eye and head orienting is required, does not in itself indicate which combination will be optimal. The existing literature is also not helpful on the question of whether search success is generally linked more to head or eye movements, or even to the more detailed question of whether more or fewer eye movements during search are of greater benefit. The obvious reason for the lack of data on the question of head movements is that almost all previous laboratory studies of visual search have been conducted on screens that encompass a very small region of the visual field, and also because much of the research has been conducted with participants whose head movements are artificially restricted with use of a chin- rest. Thus, the question has never been posed. And, on the question of eye movement frequency and search success, the previous literature is best described as mixed. Some of the existing research indicates that efficient search (rapid and accurate target identification) is associated with fewer overall eye movements during search (Boot, Becic & Kramer, 2009; Schoonard, Gould & Miller, 1973; Shapiro & Raymond, 1989; Togami, 1984). Consistent with this idea, some reports even indicate that preventing searchers from making any eye movements at all can sometimes be beneficial (Klein & Farrell, 1989; Zelinsky & Sheinberg, 1997). \u00E2\u0080\u00A9 26\u00E2\u0080\u00A9 Yet, at the same time, other reports indicate that more frequent eye movements are advantageous in search, especially when targets viewed in peripheral vision are difficult to distinguish from distractors (Boot, Becic, & Kramer, 2009). A final complicating factor relevant to the present study is that there are important links between cognitive search strategy and eye movements, at least for searches conducted on computer screens. In a recent study of this question, Watson, Brennan, Kingstone & Enns (in press) reported that participants instructed with a passive strategy waited longer before beginning to move their eyes and then made fewer saccades overall during search than actively instructed participants. Moreover, in that study, as in a previous study (Smilek, Enns, Eastwood, & Merikle, 2006), a passive search strategy was associated with more efficient visual search. These authors concluded that cognitive strategies alter how oculomotor behaviors are deployed in the service of visual search. It is in this context that the present constellation of findings with regard to eye and head movements must be interpreted. 
To recap, (1) actively-instructed searchers made more eye movements and head movements than passively instructed searchers, (2) more frequent eye and head movements were each positively associated with efficient visual search and with the adoption of an active cognitive strategy, and (3) in direct comparison, frequent eye movements were a larger contributor to predicting search success than frequent head movements. We conclude from the stark contrast between these results and the previous results indicating a passive advantage in search, that a passive advantage may only occur when participants search through displays where head and eye movements are unnecessary. In much of the past research eye movements have been unnecessary either because the displays are so small that they can be apprehended with a fixed gaze or because the differences between targets and distractors have been so large that direct fixation of these objects is not required. Our working hypothesis is that active head and eye \u00E2\u0080\u00A9 27\u00E2\u0080\u00A9 movement will likely benefit search, as they do in the present study, when the search task involves displays that demand an active reorienting of gaze to differentiate the target from the background. We will have another opportunity to test this hypothesis in Experiment 2, where participants search using an even wider field of view. Indirect ratings of search performance are more sensitive than direct attributions of ability. One of the most interesting findings with regard to person perception was that ratings of searcher \u00E2\u0080\u009Cactivity\u00E2\u0080\u009D and of \u00E2\u0080\u009Ceye movement frequency\u00E2\u0080\u009D were more sensitive to the state and trait differences between searchers than ratings of searcher \u00E2\u0080\u009Cability.\u00E2\u0080\u009D This is of interest because ratings of \u00E2\u0080\u009Cactivity\u00E2\u0080\u009D and \u00E2\u0080\u009Ceye movement frequency\u00E2\u0080\u009D are relatively indirect measures of the psychological construct under investigation (i.e., the ability to search efficiently), whereas the rating of searcher \u00E2\u0080\u009Cability\u00E2\u0080\u009D is a relatively direct measure. Furthermore, folk wisdom often suggests that direct measures should trump indirect measures of psychological states. In other words, if you want to know something about what people are thinking or doing, just ask them. In the case of person perception, this translates to \u00E2\u0080\u009Cjust look at them and see.\u00E2\u0080\u009D However, many recent reports in the social psychology literature warn against following this folk wisdom, arguing instead that one should be cautious about this intuitively appealing advice. In many cases, the measurement of a person\u00E2\u0080\u0099s attitude that best predicts their behavior is done using an implicit test of the attitude. By way of contrast, an explicit test is more vulnerable to the influence of experimenter-demand (Orne, 1962), socially appropriate responding (Greenwald, Poehlman, Uhlmann, & Banaji, 2009; Maison, Greenwald, & Bruin, 2004) and to the limitations of consciously accessible decision processes (Dijksterhuis, 2004).\u00E2\u0080\u00A9\u00E2\u0080\u00A9 The greater sensitivity of the indirect over direct rating scales in the present study is therefore further support for the benefits of indirect (or implicit) measurement of internal cognitive states. 
At the same time, we believe it will be important in future research to determine exactly which behaviors are being used to make ratings of "ability." It will be important to do so if only because these ratings were made no less reliably than the other ratings, indicating that the rating scale taps into a communal belief about what is required for successful search. Yet the fact that direct ability ratings do not correlate as well with Phase 1 performance as the indirect scales do means that direct ability ratings potentially hold the key to understanding meta-myths (shared but inaccurate folk wisdom) about the behaviors or attributes that are linked to success in visual search.

The role of positive emotions in search performance. The one finding that perhaps best highlights the important role that person perception can play in studying cognition is the correlation between ratings of positive emotional expression and search efficiency. Participants who were judged as being most happy upon finding the target (or, on a highly correlated rating scale, as being the most "interested" in the task) were also those who tended to find the targets most efficiently. This finding is unique because it cannot, even in principle, be discovered by measuring only objective performance by way of time and accuracy. This emotion-performance link was also not evident in the first-person mood reports made by searchers upon completion of the task. This could imply that the critical emotional ingredient is somehow linked to the moment-to-moment events ongoing in the task, and is therefore visible to onlookers although not available to the searcher in a post-task assessment of mood. Or it could imply that onlookers are better able than even first-person observers to make relative judgments comparing one person's emotional states to those of others (Wilson, 2009). In any case, this is a question that is ripe for further research.

Third-person ratings outperform first-person reports in predicting successful search. As indicated, self-report ratings of mood at the conclusion of the search task showed no sign of being linked to search efficiency. The self-report ratings of arousal were also not correlated with either search performance or the third-person ratings. This is notable because intuition might suggest that something as fundamental as physiological arousal, visible in heightened muscle tension and other cardiovascular signs, should be easily accessible both to the self and to onlookers. If so, then third-person ratings of "activity" (and perhaps of head and eye movement frequency) should be correlated to some degree with first-person reports of arousal. Yet they were not. Indeed, the only sign of a link between first- and third-person measures came in the modest correlation between third-person ratings of interest and self-reported mood: searchers who rated themselves as being in a more positive mood were judged by third-party onlookers to be more interested in the task. On the whole, these results add to the growing concerns over the validity of self-reports of internal states.
As mentioned earlier, many social psychologists today still rely primarily on self-report measures of internal states, even when third-person measures have been demonstrated to be more reliable (Baumeister, Vohs, & Funder, 2007). Examples of this research include the finding that third-person assessments of internal states (e.g., attitudes, beliefs, emotions, traits, motives), as well as of the rationale behind behavioral actions, are more reliable than first-person assessments of the same states (John & Robins, 1994; Furnham & Stringfield, 1998; Wilson, 2009). Because this is the first study of its kind to compare first- and third-person measures of the role of emotions in the performance of a visual search task, it is premature to draw any strong conclusions. At best, we offer these findings as a call for further work in this area.

EXPERIMENT 2: NATURAL SETTING VISUAL SEARCH

In this experiment we implemented our two-phase research methodology in the actual cluttered office that was merely depicted in Experiment 1. After participants searched in Phase 1, other participants in Phase 2 once again rated the video-recorded behavior of searchers. By comparing these results to those of Experiment 1, we tested whether real-world search is influenced by the same factors as search on a computer display (i.e., individual differences in search ability, cognitive strategy, and search difficulty). As in the previous experiment, we then compared the sensitivity of person perception ratings to each of these factors in Phase 2.

Method

Phase 1: Visual Search

Participants. Twenty-four undergraduates (12 female) received course credit for participating in a half-hour session. All gave written informed consent and were treated in accordance with APA standards.

Procedure. Figure 5A shows five common objects (book, hole punch, keys, mug, and pill bottle) that were each hidden in a cluttered office (see Figure 5B) at three different locations. Each target object was visible to the participant once the office door was opened, but appeared in a different location on each occasion (see Figure 5C). Participants always stood at a location indicated by tape on the floor, positioned 30 cm from the doorway. A digital video camera (image resolution 1024 x 768) in the right corner of the office captured the upper body and head of the participant. Participants were informed they would be video recorded in order to determine where they were looking while searching.

Figure 5: Search targets and display in Experiment 2. (A) The five target objects, (B) the participant's view of the cluttered office, and (C) a schematic indicating the locations of 12 target objects (green = easy, blue = hard) in Phase 1 (Experiment 2). The video camera can be seen just below the right-hand picture on the far wall.

Each participant was tested in a single session involving 15 trials (5 objects x 3 locations) presented in random order. Prior to testing, participants held and viewed the target objects in order to become familiar with them. All participants were instructed to find the target as quickly as possible.
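For concreteness, the 15-trial design just described (every object crossed with every hiding location, in random order) can be sketched in a few lines; the object names come from the text, while the code itself is purely illustrative:

```python
import itertools
import random

objects = ["book", "hole punch", "keys", "mug", "pill bottle"]
locations = [1, 2, 3]  # three hiding spots per object

# All 5 x 3 = 15 object-location pairings, shuffled into a random order.
trials = list(itertools.product(objects, locations))
random.shuffle(trials)

for target, spot in trials:
    print(f"hide the {target} at location {spot}")
```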
In addition, 12 participants were instructed to actively direct their attention in search of the target; the other 12 were instructed to passively search for the target (e.g., to let the target pop into mind), as in Experiment 1 (and following Smilek, Enns, Eastwood, & Merikle, 2006), with two alterations: "as you look around the room" (in place of "as you look at the screen") and "to find the target" (in place of "to determine your response"). A trial began with the experimenter displaying the target to the participant, then going into the office and placing it in the scene with the door closed. Trial timing (with a stopwatch) began when the participant opened the door to the office. The trial ended when the participant raised an arm to point at the target. Stopwatch timing was later confirmed by examining the video record of each trial. Upon completing the search task, participants did not report their mood and arousal, because in Experiment 1 this first-person measure, although correlated with third-person ratings of interest, did not further inform our understanding of search behavior or performance.

Phase 2: Person Perception of Search

Participants. Fifty-nine undergraduates (45 female) received course credit for participating in a half-hour session. Some participants made global attributions about the search behavior of Phase 1 participants: 10 rated search ability (whether a participant was good vs. bad at search), and 10 judged whether participants had been instructed to be active or passive. Other participants made local ratings of Phase 1 participants' search behavior: 7 each rated eye movements and head movements made during search. Additional participants rated Phase 1 participants' mindset: 10 rated how interested (vs. bored) searchers appeared, and 7 rated the positive emotion expressed upon search success. See Appendix B for the complete rating scales used in Phase 2.

Stimuli. Four video clips were selected for each of the 24 Phase 1 search participants for viewing by Phase 2 person perception participants (96 video clips of search behavior in total). For each Phase 1 participant, we selected two clips from the easy and two from the hard level of search difficulty, as defined by the average performance of Phase 1 participants (see Results below), in order to determine whether ratings would be sensitive to factors influencing task difficulty. We edited the selected video clips in iMovie so that they began with the door opening and ended with the participant pointing to the target (Figure 6). Additionally, we edited the video clips of difficult searches (originally M = 9.08 sec, SD = 4.54 sec) to include only the final 5 sec of search, including the discovery of the target, so that they approximated the length of easy searches (M = 4.82 sec, SD = 1.64 sec).

Figure 6: Experiment 2 searcher photographs. Representative still images of Phase 1 participants (Experiment 2). Photographs show actors posing as participants.
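The clip editing just described was done in iMovie. Purely for illustration, an equivalent programmatic trim (keeping only the final 5 s of a long clip) might look like the following sketch using the moviepy library, version 1.x; the file names are hypothetical:

```python
from moviepy.editor import VideoFileClip

# Keep only the final 5 s of a difficult-search clip so that its length
# approximates that of the easy-search clips.
clip = VideoFileClip("searcher07_hard_trial2.mov")  # hypothetical file
if clip.duration > 5:
    clip = clip.subclip(clip.duration - 5, clip.duration)
clip.write_videofile("searcher07_hard_trial2_trimmed.mp4")
```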
Procedure. Each participant was tested in a single session in which they viewed, in a randomized order, the 96 video clips of Phase 1 participants' search behavior described above. The experiment was presented to Phase 2 participants on a 24" iMac computer, using SuperLab 4. Participants completed 10 practice ratings, selected at random, to become familiar with the full range of behavior exhibited by Phase 1 search participants before beginning the experiment. Participants were instructed to rate the behavior of previous participants who were searching for common objects in a cluttered office. Participants who judged whether Phase 1 participants had been instructed to search actively or passively read both sets of instructions prior to the start of the experiment. Phase 2 participants made local, global, and mindset ratings using the same 4- and 6-point scales as in Experiment 1. However, Phase 2 participants did not make energy ratings this time, because the Experiment 1 results showed they were so closely related to ratings of activity.

Results

Phase 1: Visual Search

Figure 7A shows the mean response time (RT) for 12 of the 15 conditions (5 objects x 3 locations). We omitted the 3 most difficult conditions from further consideration because they resulted in very long searches with much variation (mean time = 15.91 s, SD = 7.09). Closer inspection revealed that these objects were located on the extreme perimeter of the office, such that their view was occluded when the door was not fully open.

Figure 7: Search performance (Experiment 2). (A) Mean correct response time (RT) for Phase 1 (Experiment 2) and (B) mean correct response time (RT) plotted as a function of strategy (active, passive) and difficulty (easy, hard). Error bars are +/- one standard error of the mean.

The remaining 12 conditions were subdivided into two clusters for subsequent analyses, with the 6 relatively easy conditions in one cluster (mean time = 4.82 s, SD = 1.64) and the 6 harder conditions in the other (mean time = 9.08 s, SD = 4.54), as shown in Figure 7B. These search data replicated the two main findings of Experiment 1: first, some targets are generally more difficult to find than others, and second, participants in the active instruction group were generally able to search more efficiently than those in the passive group. These observations were supported by a mixed ANOVA involving the repeated-measures factor of difficulty (easy, hard) and the between-participant factor of strategy (active, passive). The main effect of difficulty was significant, F(1, 22) = 29.27, p < .01, as was the strategy x difficulty interaction, F(1, 22) = 5.29, p < .03. Simple effects indicated that although strategy had no influence on the easy search trials, F < 1.0, actively instructed participants were significantly faster to find the targets on the hard trials than those given passive instructions, F(1, 22) = 4.46, p < .05. The effect sizes for the factors of state, trait, and difficulty are shown in Table 1.

Phase 2: Person Perception of Search

The mean ratings of the video clips on each of the dimensions we tested generally yielded high inter-rater agreement (Cronbach's alpha for rated ability = .880, activity = .823, head = .887, eye = .823, emotion = .935, interest = .877).
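For readers unfamiliar with this agreement statistic, here is a minimal sketch of Cronbach's alpha computed over a raters-by-clips matrix, treating each rater as an "item" and each clip as a "case"; the function and the toy numbers are illustrative, not taken from the thesis data:

```python
import numpy as np

def cronbach_alpha(ratings):
    """Cronbach's alpha for a (n_raters, n_clips) array of ratings.

    alpha = k/(k-1) * (1 - sum of per-rater variances / variance of totals)
    """
    r = np.asarray(ratings, dtype=float)
    k = r.shape[0]
    rater_vars = r.var(axis=1, ddof=1).sum()
    total_var = r.sum(axis=0).var(ddof=1)
    return k / (k - 1) * (1 - rater_vars / total_var)

# Toy example: 3 raters x 5 clips on a 6-point scale (made-up numbers).
demo = [[4, 2, 5, 3, 6],
        [4, 3, 5, 2, 6],
        [5, 2, 6, 3, 5]]
print(round(cronbach_alpha(demo), 3))  # ~0.94 for this toy data
```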
As in Experiment 1, we first describe the factors to which each rating dimension was sensitive and compare the sensitivities of person perception ratings to traditional performance measures, before turning to analyses examining which combination of ratings might best predict search performance in this task.

To which factors are person perception ratings sensitive? The individual rating scales were examined with regard to whether they were sensitive to (1) individual differences in search ability (traits), (2) adopted search strategy (states), and (3) the environmental factor of target difficulty. As in Experiment 1, the individual differences factor was obtained by coding participants in each strategy group as either high or low in ability, based on a median split of the mean search times. This permitted repeated-measures ANOVAs to be conducted on each rating dimension, examining the influence of ability (low, high), strategy (active, passive), and search difficulty (easy, hard), with each rater in Phase 2 contributing 24 scores to each cell in this design. Figure 8 shows the mean ratings for each of the six rated dimensions, as a function of search ability (low, high) and search strategy (active, passive).

Figure 8: Person perception ratings (Experiment 2). Mean ratings for each of six rated dimensions in Phase 2 (Experiment 2): Global Attribution (ability and activity), Local Behavior (head and eye movement), and Mindset (emotion and interest), as a function of search ability (low, high) and search strategy (active, passive). Error bars are +/- one standard error of the mean.

Ratings of global attribution. The blind ratings made by Phase 2 participants of searcher activity were sensitive to both traits and states (ability: F(1,9) = 38.79, p < .001; strategy: F(1,9) = 25.80, p < .001), showing higher levels of rated activity for higher-ability and more actively instructed searchers, but not to task difficulty (F(1,9) = 3.29, p < .11). Ratings of ability were not sensitive to either the trait or the state factor (ability: F(1,9) < 1.0; strategy: F(1,9) = 3.24, p < .11), but did show significant sensitivity to difficulty, F(1,10) = 43.95, p < .01, with higher levels of ability assigned to clips in which the targets were easier to find.

Ratings of local behavior. Ratings of head movement frequency were sensitive to all three factors (ability: F(1,6) = 43.77, p < .001; strategy: F(1,6) = 12.99, p < .01; difficulty: F(1,6) = 21.86, p < .001), with higher ratings assigned to higher-ability searchers, to active searchers, and in the harder search conditions. Ratings of eye movement frequency were sensitive only to the factor of strategy, F(1,6) = 15.92, p < .01, with higher ratings assigned to searchers adopting an active than a passive strategy.

Ratings of mindset. Ratings of interest were sensitive to all three factors (ability: F(1,9) = 7.82, p < .02; strategy: F(1,9) = 23.91, p < .01; difficulty: F(1,9) = 31.83, p < .01), with higher ratings assigned to higher-ability searchers, to active searchers, and in the harder search conditions. Ratings of emotion upon detection of the target were sensitive to strategy (higher ratings in the active than the passive strategy, F(1,9) = 5.87, p < .05) and to difficulty (higher ratings in the hard than in the easy condition, F(1,9) = 87.87, p < .01).
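A minimal sketch of the median-split trait coding and the mixed-design ANOVA described at the start of this section, assuming a long-format table of ratings; the file and column names are hypothetical, and the pingouin call shown uses a single between factor (the thesis design also crossed ability as a second between-participant factor):

```python
import pandas as pd
import pingouin as pg

# df: one row per rated clip, with (hypothetical) columns
#   searcher, strategy ('active'/'passive'), difficulty ('easy'/'hard'),
#   mean_rt (the searcher's Phase 1 mean search time), rating (Phase 2 score)
df = pd.read_csv("phase2_ratings.csv")  # hypothetical file

# Trait coding: median split of mean search time within each strategy group
# (faster than the group median = 'high' ability).
medians = df.groupby("strategy")["mean_rt"].transform("median")
df["ability"] = (df["mean_rt"] <= medians).map({True: "high", False: "low"})

# Average the clips within each searcher x difficulty cell, then run a
# mixed ANOVA (difficulty within-searcher, strategy between).
agg = (df.groupby(["searcher", "strategy", "difficulty"], as_index=False)
         ["rating"].mean())
print(pg.mixed_anova(data=agg, dv="rating", within="difficulty",
                     subject="searcher", between="strategy"))
```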
How do the sensitivities of person perception ratings and performance measures compare? Table 4 compares the third-person ratings directly with the performance measure of search efficiency (mean correct RT / PC) in their sensitivity to the influences of ability and strategy on visual search. These analyses were conducted as multiple regression/correlation analyses, in which, for each measure, we examined the simple correlation coefficients for the orthogonal predictors of ability (high, low) and strategy (active, passive). The unit of prediction was the mean efficiency score for each of the 24 participants in the hard search conditions, where individual differences in the performance of the searchers were the greatest. Table 4 shows that this analysis for the performance measure of search efficiency accounted for 52.5% of the individual differences in search performance, and that the efficiency measure was significantly sensitive to individual differences arising from both ability, r = .597, t(21) = 3.97, p < .001, and strategy, r = .410, t(21) = 2.73, p < .01.

Table 4: Person perception and performance comparison (Experiment 2). Multiple regression analyses of individual differences in ability (based on a median split of search efficiency in each strategy group) and strategy (based on the random assignment of participants to either an active or a passive strategy condition) as predicted by performance (RT / PC) and by the six third-person ratings in Phase 2 (Experiment 2). The data include all 24 searchers and are taken from trials in the hard search condition, where individual differences were the greatest.

Measure                        Factor     Correlation   t-value   p       R      R2
Phase 1 Performance:
  Response Time / PC           Ability    .597          3.97      < .01   .724   .525
                               Strategy   .410          2.73      < .02
Phase 2 Global Attributions:
  Ability Ratings              Ability    .239          1.16      > .25   .338   .114
                               Strategy   -.239         1.16      > .25
  Activity Ratings             Ability    -.467         2.10      < .05   .517   .267
                               Strategy   -.400         1.80      < .09
Phase 2 Local Behavior:
  Head Movement Ratings        Ability    -.558         3.17      < .01   .591   .349
                               Strategy   -.193         1.09      > .25
  Eye Movement Ratings         Ability    .100          0.47      > .25   .158   .025
                               Strategy   -.123         0.57      > .25
Phase 2 Mindset Attributions:
  Emotion Ratings              Ability    .241          1.14      > .25   .241   .058
                               Strategy   -.014         0.07      > .25
  Interest Ratings             Ability    -.138         0.67      > .25   .311   .097
                               Strategy   -.279         1.35      > .10

By way of comparison, among the global attribution ratings, activity was the most sensitive rating scale, accounting for 26.7% of the variance and showing independent sensitivity to ability, r = .392, t(21) = 2.10, p < .05, and marginal sensitivity to strategy, r = .336, t(21) = 1.80, p < .09. Ability ratings were much less sensitive, accounting for only 11.4% of the variance and showing no significant sensitivity to either ability or strategy (p > .25).

Among the local behavioral ratings, frequency of head movements (34.9% of variance explained) showed greater sensitivity to individual differences in performance than frequency of eye movements (2.5% of variance explained). Finally, the two ratings of mindset showed very little sensitivity to search performance on their own (5.8% and 9.7% of variance explained, respectively), but each of them was involved in a highly significant (p < .01) two-way interaction of the trait and state factors. When these interaction terms were included in the regression analysis, the explained variance increased to 35.7% (emotion) and 24.7% (interest). This interaction is illustrated in Figure 8 for the emotion ratings (the interest ratings show a similar crossover pattern). This pattern in the ratings indicates that searchers were judged as expressing the highest degree of positive emotion (and interest) when the cognitive strategy they adopted (state) matched their search ability (trait). That is, high-ability active searchers and low-ability passive searchers expressed the most positive emotion, whereas low-ability active searchers and high-ability passive searchers expressed the least.
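A minimal sketch of this regression logic with two orthogonal, contrast-coded predictors, using statsmodels; the file and column names are hypothetical, and the product term shown is the trait-by-state interaction added for the mindset ratings:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# One row per searcher (hypothetical file): hard-condition efficiency
# (RT / proportion correct) plus the two orthogonal factors.
d = pd.read_csv("searchers_hard.csv")
d["ability_c"] = np.where(d["ability"] == "high", 1, -1)
d["strategy_c"] = np.where(d["strategy"] == "active", 1, -1)

# Regress a measure on the two orthogonal predictors; the same model is
# refit with each Phase 2 rating scale in place of efficiency.
X = sm.add_constant(d[["ability_c", "strategy_c"]])
fit = sm.OLS(d["efficiency"], X).fit()
print(fit.rsquared, fit.tvalues)

# For the mindset ratings, the trait x state product term is added.
d["congruence"] = d["ability_c"] * d["strategy_c"]
X2 = sm.add_constant(d[["ability_c", "strategy_c", "congruence"]])
print(sm.OLS(d["efficiency"], X2).fit().rsquared)
```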
Which ratings best predict search performance? Table 5 shows the cross-correlations for the six rating scales tested in this experiment. Clearly, there is much overlap in the information contained in these rating scales, with, for example, activity ratings and head movement frequency ratings having much in common (r = .667) and emotion and interest sharing a great deal of variance (r = .677). In contrast, eye movement and head movement frequency ratings are relatively uncorrelated (r = -.076).

Table 5: Person perception rating correlations (Experiment 2). Correlations among the six third-person rating scales in Phase 2 of Experiment 2.

                 Activity   Head movement   Eye movement   Emotion   Interest
Ability          .289       .024            -.300          .446      .402
Activity                    .677            .185           .276      .816
Head movement                               -.076          -.202     .363
Eye movement                                               -.057     .158
Emotion                                                              .677

In an effort to see which of these rating scales correlated most strongly with individual differences in search efficiency, we entered all six ratings as predictors in a multiple regression model in which the efficiency scores of the 24 searchers represented the outcome variable. The full model involving all six ratings yielded an R2 value of .449, F(6, 17) = 2.31, p < .08. By systematically removing the ratings that contributed least to the total variance explained, as indicated by their partial coefficients, we found that a reduced model involving only two of the ratings, rated ability and frequency of head movements, accounted for a similar amount of variance, R2 = .401, F(2, 21) = 7.02, p < .01. The partial coefficients were significant for head movements, t(21) = 3.33, p < .01, and marginally significant for ability, t(21) = 2.78, p < .09, indicating that head movements were the largest single predictor of search success in this experiment.
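A sketch of the reduction procedure just described, implemented as backward elimination by weakest partial coefficient; the stopping rule and file names are illustrative choices, not necessarily the exact procedure used in the thesis:

```python
import pandas as pd
import statsmodels.api as sm

def backward_eliminate(y, X, min_predictors=2):
    """Drop the predictor with the weakest partial t until the floor is hit."""
    cols = list(X.columns)
    while len(cols) > min_predictors:
        fit = sm.OLS(y, sm.add_constant(X[cols])).fit()
        # Weakest predictor = smallest absolute t-value (ignoring the constant).
        weakest = fit.tvalues.drop("const").abs().idxmin()
        cols.remove(weakest)
    return sm.OLS(y, sm.add_constant(X[cols])).fit()

# Hypothetical file: one row per searcher with mean ratings and efficiency.
ratings = pd.read_csv("mean_ratings.csv")
X = ratings[["ability", "activity", "head", "eye", "emotion", "interest"]]
final = backward_eliminate(ratings["efficiency"], X)
print(final.model.exog_names, final.rsquared)
```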
Discussion

Real-world search is influenced by the same factors as computer screen search. The results of search performance followed the main trends of the previous experiment, despite vast differences in the location of the search (in an actual office versus on a computer screen), in the procedural details (each participant searched on only 15 versus 240 trials), in the manner of responding (participants pointed directly at targets versus pressing one of four keys spatially mapped to target locations), and in the measurement of search speed (an average search took 7-10 seconds versus 2-5 seconds). Regardless of all these differences, some target objects were generally more difficult to find than others for all participants, allowing us to categorize the results by search difficulty. Second, the assignment of participants to active and passive strategy instructions again resulted in active searchers being generally faster. Third, there were again individual differences in ability within each strategy group, similar in magnitude to the effects of adopting a cognitive strategy. Thus, the stage was again set for an analysis of person perception of visual search, but this time it could be conducted on search behavior that more closely approximated everyday searches in a natural setting.

Person perception is sensitive to the factors influencing search. The findings for person perception in this more natural context also broadly resembled the main findings of the previous experiment. That is, indirect ratings of searcher "activity" were again more sensitive than direct ratings of searcher "ability" to individual differences in traits (ability) and states (strategy). Ratings of head and eye movement frequency were again sensitive to these same individual differences. However, the person perception measures also pointed to important finer-grain differences between search in the context of a computer screen (Experiment 1) and actual search in the same office (Experiment 2). First, head movement frequency was linked more closely to success in search in the real office, whereas eye movements had been more closely associated with search on the screen. Second, positive emotions were not as directly related to search success in the real office as they had been in the computer display search. Instead, in the office search context, third-person ratings of emotion pointed to an interaction between trait and state: individuals who generally find targets quickly (high ability) tended to display more positive emotion when they had been assigned to the active strategy, whereas individuals who generally find targets more slowly (low ability) showed more positive emotion when assigned to the passive strategy than when assigned to the active strategy. Such a trait-state congruency effect on emotional expression is reminiscent of Fluency Theory (Reber, Winkielman, & Schwarz, 1998), which proposes that more fluent processing produces more positive responses. In this framework, searchers experienced maximum enjoyment when their traits (ability) and states (strategy) aligned, and, importantly, this increased enjoyment was visible in searchers' overt behaviors and expressions. The present results suggest that emotion may actually be a critical component of visual search for common objects, a task that has most often been considered emotion-free.

In summary, we return to the two questions raised in the introduction to this study. First, with regard to the feasibility of studying visual search in a natural context, we interpret the present study as offering a resounding "yes!" This is indicated by the demonstration that even a half-hour session of visual searches, involving only 15 discrete measurements of performance, provides sufficient statistical power to measure important influences of state, trait, and environmental factors on visual search. It is also indicated by the similarity of the person perception findings in the two settings we have tested. With regard to the second question, of how the visible behaviors of searchers differ between lab and life environments, we believe the results also offer a clear answer.
For example, we interpret the present results as indicating that successful search depends to a large extent on how actively the searcher re-orients their gaze from moment to moment while searching for an object. Whereas active gaze reorientation may be detrimental to search success under some special circumstances, such as a small viewing window or objects that are easily discriminable using only peripheral vision, it is likely to be generally very beneficial to search success. This was evident in our wide-screen computer search (Experiment 1), where eye movement frequency was a key predictor of search success, and it was even more evident in our office search (Experiment 2), where frequent head movements were required over and above eye movements to take in many of the target objects, given their location in the searcher's field of view.

GENERAL DISCUSSION

The first aim of this study was to study the cognitive task of visual search in a natural setting, that is, in a setting that more closely resembles the way humans search every day. We chose to study visual search in particular because it is one of the most studied tasks of cognitive psychology and, as such, a task we already know a great deal about when it occurs in the relatively sterile conditions of the cognitive laboratory. Our motivation for seeing whether the understanding gained in the laboratory generalized to more naturalistic conditions was a response to calls from a number of quarters for increased ecological validity in cognitive research (Baumeister, Vohs, & Funder, 2007; Hutchins, 1995; Kingstone, Smilek, & Eastwood, 2008; Tunnell, 1977).

A second, and equally important, goal of the present study was to increase the naturalness of the behavioral response of participants. We noted that most previous research in this area had emphasized naturalism in experimental stimuli rather than naturalism in the setting of the research and in the behavioral response of the participant. To accomplish this goal, we developed a new two-phase research methodology. In Phase 1 participants performed a search while they were videotaped. In Phase 2 other participants used their everyday person perception skills to observe and interpret the behavior of Phase 1 participants.

Two comparisons were of particular importance. First, by comparing search in a naturalistic scene on a computer (Experiment 1) with search through the actual scene in an office (Experiment 2), we extended the study of visual search for the first time into naturalistic settings and behavioral responses. Second, by comparing third-person measures of behavior (Phase 2) with objective measures of performance (Phase 1) in each experiment, we explored for the first time the extent to which person perception ratings from naïve participants are sensitive to the same factors that influence search performance (i.e., traits, states, and environmental factors).

The data with regard to our first comparison – search of a naturalistic scene versus search in the same naturalistic setting with more naturalistic responses – pointed to two main findings. First, this study demonstrated the feasibility of studying visual search in this more natural context.
This was indicated by the result that even a half-hour session of visual search with each participant, involving only 15 discrete measurements of performance (i.e., trials), provided sufficient statistical power to measure important influences of state, trait, and environmental factors on visual search. Indeed, the magnitudes of the effects for individual differences in ability (traits), cognitive strategy (states), and search difficulty (environment) were comparable in Experiments 1 and 2. Second, the feasibility of studying search in a more naturalistic setting was also indicated by the similarity of the person perception findings in the two settings we studied, as we discuss next.

The data with regard to our second comparison – person perception measures of search versus performance measures – also provided several clear answers. First, the primary finding of Phase 2 in both experiments was that some third-person ratings of search behavior were comparable in sensitivity to the performance measures in Phase 1. For example, in Experiment 1, where correct response time was able to account for about 69% of the state- and trait-related performance of individual searchers, ratings of "activity" were able to account for 49%, ratings of eye movement frequency for 30%, and ratings of emotion for 33%. These were all highly significant findings, showing that third-person ratings compare quite well with performance measures in their sensitivity to individual differences in visual search. When the unique contributions of each rating scale were compared, over 60% of all the variance in the performance of individual searchers could be accounted for by just two rating scales: the frequency of eye movements and the expression of positive emotion in the faces of searchers.

In Experiment 2, which involved search in a more natural setting but was also restricted by practical limitations in the number of trials that could be conducted in a half-hour testing session, we still observed reasonable sensitivity in both the performance measures and the person perception measures to individual differences in state and trait factors. Whereas response time in this experiment accounted for 53% of the variance, ratings of "activity" were able to account for 27% and ratings of head movement frequency for 35%. When the unique contributions of each rating scale were compared, the results showed that over 40% of all the variance in the performance of individual searchers could be accounted for by the head movement frequency scale on its own.

Of equal importance to the aforementioned successes of the present study are the findings that point toward questions that have yet to be answered. One possible direction for future research, on the issue of the sensitivity of person perception to cognition, is the finding that the more direct ratings of search "ability" were far less sensitive to individual differences than the relatively more indirect ratings of search "activity" and of the frequency of eye or head movements.
This finding suggests that person perception is not only sensitive to the important underlying factors in visual search (e.g., eye and head movements, positive emotions), but is also vulnerable to false stereotypes or myths about which behaviors in an individual are related to success in visual search. Teasing apart these two classes of variables will be critical to the long-term success of using person perception in the study of cognition.

A second direction for future research concerns the present finding that greater activity in eye and head movements is linked to search success. We interpret the present findings as indicating that successful search depends to a large extent on how actively the searcher re-orients their gaze from moment to moment while searching for an object. Whereas active gaze reorientation may be detrimental to search success under some special circumstances, as reported in previous studies involving a small viewing window or objects that are easily discriminable using only peripheral vision, head and eye movements are likely to be generally beneficial to search success. This activity-success relationship was evident when participants searched on a wide-screen computer display (Experiment 1), where eye movement frequency was a key predictor of search success, and it was even more evident in our office search (Experiment 2), where frequent head movements were a key to success over and above eye movements.

A third direction for future research concerns the role of positive emotions in performance. To recap our findings, the computer search in Experiment 1 indicated a direct link between positive emotional experiences and successful search. In contrast, the office search in Experiment 2 pointed to a state-trait congruency effect in the emotional experience of searchers. Reconciling these differences will likely only come about through additional research, but here we offer a tentative hypothesis to help guide future work. One way to resolve the apparent difference in the two patterns of results for emotion is to consider the possibility that both effects exist in principle, but that there was not sufficient power to detect both of them in each of the present experiments. Under this hypothesis, positive emotions are always linked to more efficient cognitive processing, and, equally important, state-trait congruency effects always play a role in the emotional experience of a study participant. Positive emotions and fluent cognition are simply different directions on the same two-way street: positive emotions generally benefit cognitive processing, and fluent cognition generally alters one's emotional experience in a positive way. From this perspective, the way the results for emotion in Experiments 1 and 2 differ from one another is more a matter of emphasis than of kind. Each result points to a different side of this bi-directional relationship, with Experiment 1 showing the emotion-to-cognition connection most strongly and Experiment 2 revealing the link from cognitive performance to emotional experience. If this view is correct, then future studies altering the emotional experience of a study participant in advance of a task should have a direct effect on their performance.
Conversely, altering the performance efficiency of a study participant (e.g., through increasing expertise or through external manipulations of ease of processing) should have a direct and positive effect on their emotional experience. Both of these possibilities are already suggested in previous research and theory, with Flow Theory (Csikszentmihalyi & Rathunde, 1993) focusing more on the influence of emotion on cognition and action, and Fluency Theory (Reber et al., 1998) focusing more on the influence of cognition on emotion. What has not been done so far is to systematically examine these two directions of influence in standard tasks of cognitive performance.

In conclusion, we present these findings as an important proof of concept that person perception, which has a well-established tradition of study in its own right (Weisbuch & Ambady, in press; Rule, Macrae, & Ambady, 2009), can be harnessed to assist in the study of basic cognitive processes. We believe this finding, and our interpretation of it, opens up a potentially rich new world of measurement for cognitive research. For example, instead of being constrained by obtrusive and expensive eye-tracking and body-posture monitoring equipment for the study of cognition in a natural setting, it now seems feasible to use person perception measures based on unobtrusively obtained videos to achieve the same goal. Furthermore, the video can be kept for future data mining, if and when new aspects of behavior become relevant, as was the case with expressions of positive emotion in the present studies. The challenge for the future will be to determine the reliability and range of third-person observational skills in leading to a better understanding of first-person cognition.

REFERENCES

Baumeister, R.F., Vohs, K.D., & Funder, D.C. (2007). Psychology as the science of self-reports and finger movements: Whatever happened to actual behavior? Perspectives on Psychological Science, 2, 396-403.

Beck, M.R., Levin, D.T., & Angelone, B.L. (2007). Metacognitive errors in change detection: Lab and life converge. Consciousness and Cognition, 16(1), 58-62.

Boot, W.R., Becic, E., & Kramer, A.F. (2009). Stable individual differences in search strategy?: The effect of task demands and motivational factors on scanning strategy in visual search. Journal of Vision, 9(3), 1-16.

Chapman, C.S., & Goodale, M.A. (2008). Missing in action: The effect of obstacle position and size on avoidance while reaching. Experimental Brain Research, 191(1), 83-97.

Csikszentmihalyi, M., & Rathunde, K. (1993). The measurement of flow in everyday life: Toward a theory of emergent motivation. In J. E. Jacobs (Ed.), Nebraska Symposium on Motivation 1992 (Vol. 40, pp. 57-97). Lincoln: University of Nebraska Press.

Dijksterhuis, A. (2004). Think different: The merits of unconscious thought in preference development and decision making. Journal of Personality and Social Psychology, 87(5), 586-598.

Ekman, P. (1972). Universals and cultural differences in facial expressions of emotion. In J. Cole (Ed.), Nebraska Symposium on Motivation 1971 (Vol. 19, pp. 207-283). Lincoln, NE: University of Nebraska Press.

Furnham, A., & Stringfield, P. (1998). Congruence in job-performance ratings: A study of 360 degree feedback examining self, manager, peers, and consultant ratings. Human Relations, 51(4), 517-530.

Greenwald, A.G., Poehlman, T.A., Uhlmann, E.L., & Banaji, M.R. (2009). Understanding and using the Implicit Association Test: III. Meta-analysis of predictive validity. Journal of Personality and Social Psychology, 97(1), 17-41.
Goodale, M. A. (1990). Vision and action: The control of grasping. Norwood, NJ: Ablex.

Henderson, J. M. (Ed.) (2005). Real-world scene perception. New York: Psychology Press.

Hoffman, J.E., & Subramaniam, B. (1995). The role of visual attention in saccadic eye movements. Perception & Psychophysics, 57(6), 787-795.

Hutchins, E. (1995). Cognition in the wild. Cambridge, MA: MIT Press.

John, O.P., & Robins, R.W. (1994). Accuracy and bias in self-perception: Individual differences in self-enhancement and the role of narcissism. Journal of Personality and Social Psychology, 66(1), 206-219.

Khurana, B., & Kowler, E. (1987). Shared attentional control of smooth eye movements and perception. Vision Research, 27(9), 1603-1618.

Kingstone, A., Smilek, D., & Eastwood, J. D. (2008). Cognitive ethology: A new approach for studying human cognition. British Journal of Psychology, 99, 317-345.

Klein, R., & Farrell, M. (1989). Search performance without eye movements. Perception & Psychophysics, 46(5), 476-482.

Levin, D.T., & Beck, M.R. (2004). Thinking about seeing: Spanning the difference between metacognitive failure and success. In D.T. Levin (Ed.), Thinking and seeing: Visual metacognition in adults and children. Cambridge, MA: MIT Press.

Maison, D., Greenwald, A.G., & Bruin, R.H. (2004). Predictive validity of the Implicit Association Test in studies of brands, consumer attitudes, and behavior. Journal of Consumer Psychology, 14(4), 405-415.

Naumann, L. P., Vazire, S., Rentfrow, P. J., & Gosling, S. D. (2009). Personality judgments based on physical appearance. Personality and Social Psychology Bulletin, 35, 1661-1671.

Nummenmaa, L., & Calder, A.J. (2009). Neural mechanisms of social attention. Trends in Cognitive Sciences, 13(3), 135-143.

Orne, M.T. (1962). On the social psychology of the psychological experiment: With particular reference to demand characteristics and their implications. American Psychologist, 17, 776-783.

Palermo, R., & Rhodes, G. (2007). The perception of emotion and social cues in faces. Neuropsychologia, 45(1), 75-92.

Reber, R., Winkielman, P., & Schwarz, N. (1998). Effects of perceptual fluency on affective judgments. Psychological Science, 9, 45-48.

Rule, N., Macrae, C.N., & Ambady, N. (2009). Ambiguous group membership is extracted automatically from faces. Psychological Science, 20, 441-443.

Schoonard, J. W., Gould, J. D., & Miller, L. A. (1973). Studies of visual inspection. Ergonomics, 16(4), 365-379.

Shapiro, K. L., & Raymond, J. E. (1989). Training of efficient oculomotor strategies enhances skill acquisition. Acta Psychologica, 71, 217-242.

Smilek, D., Dixon, M.J., & Merikle, P.M. (2006). Revisiting the category effect: The influence of meaning and search strategy on the efficiency of visual search. Brain Research, 1080, 73-90.

Smilek, D., Enns, J.T., Eastwood, J.D., & Merikle, P.A. (2006). Relax! Cognitive style influences visual search. Visual Cognition, 14, 543-564.

Smilek, D., Weiheimer, L., Kwan, D., Reynolds, M., & Kingstone, A. (2009). Hiding and finding: The relationship between visual concealment and visual search. Attention, Perception & Psychophysics, 71, 1793-1806.

Togami, H. (1984). Affects on visual search performance of individual differences in fixation time and number of fixations. Ergonomics, 27(7), 789-799.

Townsend, J. T., & Ashby, F. G. (1983). Stochastic modeling of elementary psychological processes. New York: Cambridge University Press.
Tunnell, G.B. (1977). Three dimensions of naturalness: An expanded definition of field research. Psychological Bulletin, 84(3), 426-437.

Watson, M.R., Brennan, A.A., Kingstone, A., & Enns, J.T. (in press). Looking versus seeing: Strategies alter eye movements during visual search. Psychonomic Bulletin & Review.

Weisbuch, M., & Ambady, N. (in press). Thin slice vision. In R. B. Adams, Jr., N. Ambady, K. Nakayama, & S. Shimojo (Eds.), Social vision. Oxford, UK: Oxford University Press.

Wilson, T.D. (2009). Know thyself. Perspectives on Psychological Science, 4(4), 384-389.

Wolfe, J. M. (1998). What can 1 million trials tell us about visual search? Psychological Science, 9, 33-39.

Wolfe, J.M., Horowitz, T.S., Van Wert, M.J., Kenner, N.M., & Kibbi, M. (2007). Low target prevalence is a stubborn source of errors in visual search tasks. Journal of Experimental Psychology: General, 136(4), 623-638.

Zelinsky, G. J., & Sheinberg, D. L. (1997). Eye movements during parallel-serial visual search. Journal of Experimental Psychology: Human Perception and Performance, 23(1), 244-262.

APPENDICES

Appendix A: Mood Grid

The mood grid completed by Phase 1 (Experiment 1) participants.

Appendix B: Rating Scales

Rating scales used in Phase 2 of Experiments 1 and 2. Note: Energy level ratings were made only in Experiment 1. There was a slight difference in the search activity ratings (within the active and passive search instructions) between experiments: "at the screen" in Experiment 1 versus "around the room" in Experiment 2.

Ratings of global attribution.

Energy Level: Please rate the ENERGY LEVEL of participants as they search for the target: (1 = VERY LOW ENERGY; 6 = VERY HIGH ENERGY)

Search Ability: Some participants were POOR searchers - they were SLOW and INACCURATE. Some participants were GOOD searchers - they were FAST and ACCURATE. Please rate the ABILITY of participants as they search for the target: (1 = POOR SEARCHER; 6 = GOOD SEARCHER)

Search Activity: Before beginning the experiment, each participant was instructed to search for the hidden object either ACTIVELY or PASSIVELY. The ACTIVE instructions were as follows: "The best strategy for this task, and the one that we want you to use in this study, is to be as active as possible and to 'search' for the target as you look [at the screen/around the room]. The idea is to deliberately direct your attention to find the target. Sometimes people find it difficult or strange to 'direct their attention' - but we would like you to try your best. Try to respond as quickly and accurately as you can while using this strategy. Remember, it is very critical for this experiment that you actively search for the target." The PASSIVE instructions were as follows: "The best strategy for this task, and the one that we want you to use in this study, is to be as receptive as possible and let the target 'pop' into your mind as you look [at the screen/around the room]. The idea is to let your intuition determine how you find the target.
Sometimes people find it difficult or strange to tune into their 'gut feelings' - but we would like you to try your best. Try to respond as quickly and accurately as you can while using this strategy. Remember, it is very critical for this experiment that you let the target just 'pop' into your mind." Please indicate which instructions you believe the participant received:

4 = CONFIDENT the person was instructed to search ACTIVELY
3 = GUESS that the person was instructed to search ACTIVELY
2 = GUESS that the person was instructed to search PASSIVELY
1 = CONFIDENT the person was instructed to search PASSIVELY

Ratings of local behavior.

Head Movement: Please rate the amount of HEAD MOVEMENT made by participants as they searched for targets: (1 = NO HEAD MOVEMENT; 6 = MUCH HEAD MOVEMENT)

Eye Movement: Please rate the amount of EYE MOVEMENT made by participants as they searched for targets: (1 = NO EYE MOVEMENTS; 6 = MANY EYE MOVEMENTS)

Ratings of mindset.

Interest: Please rate how INTERESTED participants appeared as they searched for the target: (1 = BORED; 6 = INTERESTED)

Positive Emotion: Please rate the amount of PLEASURE AND SATISFACTION shown by participants upon finding the target: (1 = UNHAPPY/DISSATISFIED; 6 = VERY HAPPY/SATISFIED)

Appendix C: UBC BREB Certificate of Approval

The Behavioral Research Ethics Board (BREB) Certificate of Approval (UBC BREB Number H09-01732).

Appendix D: HSP Consent Form - Phase 1

Human Subject Pool consent form for Phase 1 (Experiments 1 and 2).

Appendix E: HSP Consent Form - Phase 2

Human Subject Pool consent form for Phase 2 (Experiments 1 and 2).

Appendix F: HSP Debriefing Form

Human Subject Pool debriefing form for Phases 1 and 2 (Experiments 1 and 2).