AN INTUITIVE TURN: UNDERSTANDING THE ROLES OF INTUITIVE AND RATIONAL PROCESSES IN MORAL DECISION-MAKING by CHRISTOPHER SCOTT NEWITT B.A., Simon Fraser University, 2000 M.A., The University of British Columbia, 2002 A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF GRADUATE STUDIES (Psychology) THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver) September 2009 © Christopher Scott Newitt, 2009

Abstract

The relative contribution of reason and intuition to everyday moral decision-making is an issue that predates psychology as a distinct academic discipline. In the past several years this debate has become one of the most contentious issues in the social sciences. Although most researchers now accept that intuition plays some role in everyday moral decision-making, there is little conceptual agreement on what processes shape moral intuition. To date there have been no attempts to demonstrate convergent validity between competing measures of moral intuition. The goals of this project are to examine the convergent validity demonstrated by measures of moral intuition and to assess whether the concept of moral autonomy is a useful framework for understanding individual differences in the propensity to rely on intuition or reason when making moral decisions. This project comprises a series of three studies. Study 1 examines the relation between moral autonomy, general cognitive styles, and performance on a causal deviance task that taps intuitive judgments. Study 2 represents the first step in the search for convergent validity among measures of moral intuition; responses from the causal deviance task and the moral dumbfounding task are compared. In Study 3, two new measures of moral intuition are introduced and compared with existing measures.
The results of this project suggest that the conceptualization of moral intuition differs significantly across theoretical perspectives and, as such, there is little convergent validity between measures derived from the heuristics-and-biases tradition and those from the sentimentalist tradition. A richer conception of intuition, one that captures the distinction between affective appraisals and decisions arrived at without conscious deliberation, offers the potential to bridge theoretical differences. This project represents the first attempt to demonstrate convergent validity between opposing theoretical conceptualizations of moral intuition. The lack of agreement between these theoretical approaches highlights the need to take a more conceptually rich view of intuition. Intuition is not simply an error, as suggested by the heuristics-and-biases approach, nor is it simply an affective response, as suggested by sentimentalists; rather, intuition is a concept characterized by non-inferential, non-deliberative understanding.

Table of Contents

Abstract ............................................................................................................................... ii Table of Contents ............................................................................................................... iv List of Tables ..................................................................................................................... vi List of Figures .................................................................................................................... vii Acknowledgements .......................................................................................................... viii Introduction and Literature Review ...................................................................................... 1 The Social Intuitionist Model ...................................................................
3 Heuristics and Biases Theories of Moral Intuition .................................................. 17 Expertise Theories of Moral Intuition ..................................................................... 27 Study 1—Individual Differences in Preferences for Intuitive and Rational Approaches to Moral Decision-Making and their Relation to the Moral Typology ........................ 38 Method ................................................................................................................... 43 Participants ................................................................................................. 43 Measures .................................................................................................... 43 Procedure ................................................................................................... 47 Results ................................................................................................................... 47 Discussion ............................................................................................................. 56 Study 2—Are Measures of Moral Intuition Commensurable? Comparing the Causal Deviance and Moral Dumbfounding Methodologies .............................................. 62 Method ................................................................................................................... 73 Participants ................................................................................................. 73 Measures .................................................................................................... 73 Procedure ................................................................................................... 77 Results ................................................................................................................... 77 Discussion .............................................................................................................. 
87 Study 3—Comparing Measures of Moral Intuition ............................................................ 97 Method ................................................................................................................. 110 Participants ............................................................................................... 110 Measures .................................................................................................. 110 Procedure ................................................................................................. 115 Results ................................................................................................................. 115 Discussion ........................................................................................................... 125 General Discussion .......................................................................................................... 134 References ........................................................................................................................ 148 Appendix A—Moral Cognition Style Inventory ............................................................... 156 Appendix B—Ethics Approval Forms .............................................................................. 159 List of Tables Table 1. Logistic Multiple Regression for the Prediction of Non-Normative Action Judgments on the Causal Deviance Task (Study 1) ........................................... 54 Table 2. Logistic Multiple Regression for the Prediction of Non-Normative Character Judgments on the Causal Deviance Task (Study 1) ........................................... 55 Table 3. Correlations between Moral Intuition, Moral Autonomy, and Moral Identity .... 81 Table 4. Multiple Regression for the Prediction of Scores on the Moral Dumbfounding Measure (Study 2) ............................................................................................. 84 Table 5.
Logistic Multiple Regression for the Prediction of Non-Normative Action Judgments on the Causal Deviance Task (Study 2) ............................................ 85 Table 6. Logistic Multiple Regression for the Prediction of Non-Normative Character Judgments on the Causal Deviance Task (Study 2) ............................................ 87 Table 7. Correlations between Judgments on the Moral Dumbfounding Task and the Causal Deviance Task ..................................................................................... 118 Table 8. Correlations between Moral Autonomy, Cognitive Style, and Deliberation ...... 120 Table 9. Multiple Regression for the Prediction of Scores on the Moral Dumbfounding Measure (Study 3) ........................................................................................... 123 Table 10. Logistic Multiple Regression for the Prediction of Non-Normative Action Judgments on the Causal Deviance Task (Study 3) .......................................... 124 List of Figures Figure 1. Percentage of Participants Responding Deviantly in Judging Actions and Character (Study 1) .......................................................................................... 49 Figure 2. Percentage of Participants Responding Deviantly in Judging Actions and Character (Study 2) .......................................................................................... 78 Figure 3. Percentage of Participants Responding Deviantly in Judging Actions and Character (Study 3) ........................................................................................ 116

Acknowledgements

I would like to thank my supervisor, Larry Walker. Larry provided me with the intellectual freedom to pursue my own ideas. Yet, he did not leave me to sink or swim on my own; Larry was always available to provide me with insightful guidance and counsel. Larry’s tireless commitment to excellence underlies much of what is good within this manuscript.
As a mentor, Larry has been the consummate model of academic excellence and integrity. I would also like to thank my committee members. Mark Schaller took the time to help me craft a dissertation that explored competing theoretical positions without being judgmental or disparaging, an achievement that I would not have been capable of on my own. Having been involved in my dissertation committee, my comprehensive examination committee, and my initiation into lecturing, Sue Birch has had a significant impact on the later portion of my graduate school experience, and I have benefitted greatly from her counsel. Throughout my undergraduate and graduate education I have had the good fortune to participate in many intellectually vibrant groups. Many of the opportunities that I have enjoyed can be traced back to the generosity of Dennis Krebs. As an undergraduate student, Dr. Krebs gave me the opportunity to conduct research, to manage research teams, and to teach small tutorial classes, opportunities that were rare for undergraduates. These opportunities paved the way for me to make it into graduate school. I am not sure what he saw in me initially, but I am thankful for the support and opportunities that he provided. I have had the pleasure of sharing office space with a number of quality people. While I was an undergraduate at SFU, Laura Mackay was part office-mate and part guidance counselor. At UBC, Justin Park was my long-term office-mate; it was discussions with Justin regarding the nature of moral decision-making that inspired the original research in this dissertation, and for that I am very thankful. The production of this manuscript would not have been possible without the work of SooYoun Kim. Sue had to enter several hundred thousand data points; and for that, I am sorry. I would also like to thank the 521 undergraduate students who participated in this research project. The final production of this manuscript was undertaken after I had already left Vancouver.
This required me to travel back to UBC to work with Larry; this would not have been possible without the food and shelter generously provided by Pam Tovell. Finally, I would not have been able to complete this project without the emotional, intellectual, and financial support of my wife Jessica. Unfortunately, sacrifices are often necessary to complete research projects such as this. Jessica sometimes referred to herself as a dissertation widow; my son knew that his daddy had to work on his “little book” for long hours on many days. I was not the only one who sacrificed to see this project through. I do not view my doctorate as a personal achievement; in every sense my wife was my partner in this enterprise. I could not, and would not, have made this journey by myself; it is my love for Jessica, Killian, and Maggie that gives this accomplishment meaning.

Introduction and Literature Review

The contemporary study of moral cognition has been stimulated by a renewed interest in intuitive approaches for explaining moral judgment processes. Since the cognitive revolution, the study of moral cognition within psychology has been dominated by a limited focus on the rational processes thought to underlie moral functioning. The cognitive-developmental approach, as embodied in the theories of Piaget (1932/1965), Kohlberg (1984a), and Turiel (1983), has been the regnant theoretical stance on moral cognition in psychology since the 1960s. This approach emphasizes the importance of conscious, deliberate, logical reflection in the process of moral judgment-making. In recent years, however, a growing number of researchers from diverse academic backgrounds have begun to argue that intuitive theories of moral judgment provide a more accurate description of the moral judgment processes that occur in people’s everyday lives. Intuitive theories of moral judgment emphasize the importance of unconscious, automatic, affectively imbued judgments.
Some advocates of the intuitive approach have gone so far as to argue that all rational moral reasoning is simply post hoc justification and, as such, offers little insight into people’s real-world moral judgments (Haidt, 2001). Other authors, echoing the dual-process theories prevalent in other domains of social cognition, have argued that both rational and intuitive processes play a role in everyday moral judgments (Pizarro, Uhlmann, & Bloom, 2003). Pizarro et al. agreed with Haidt that people’s initial moral judgments are most likely often intuitive, an immediate response to a given circumstance; however, they believe that people may, in certain circumstances, reflect on their judgment and override their intuitive responses. There is a considerable body of research demonstrating that there are individual differences in the rates of non-normative responding on decision-making measures; this research supports the idea that some people are subject to faulty intuitions while others are not (cf. Stanovich & West, 2008). Pizarro et al. suggested that there may be individual differences in people’s preference to engage in either intuitive or rational processes in making their everyday moral judgments. This dissertation is intended as an exploration of the hypothesized individual differences in the propensity to engage in intuitive or rational moral judgment processes. One of the central issues facing the contemporary study of moral cognition is resolving the debate concerning the relative contributions of intuition and reasoning to people’s everyday moral decision-making. Evidence of individual differences in moral decision-making strategies could help to clarify how factors characterizing the individual influence the type of decision-making strategy implemented. Before presenting the empirical findings of this research project, I will discuss the intuitive turn that has occurred in the study of moral cognition within psychology over the past decade.
The following account of the diverse theoretical approaches that are currently being applied to the concept of moral intuition in psychology is organized in terms of (a) theories that characterize moral intuition as primarily affective evaluations, (b) those that characterize moral intuition as error-prone cognitive strategies, and (c) those that characterize moral intuition as efficient cognitive strategies. One fundamental issue that complicates the discussion of moral intuition in psychology is the definition of intuition. The three categories of moral intuition theories each characterize intuition in their own way; yet, it is not clear that any of these theories characterize intuition in a manner that corresponds to how the construct is defined in other areas of study within the cognitive sciences. Osbeck (1999) argues that the definitions of intuition embodied in these psychological theories of moral intuition do not reflect the philosophical history of the construct. For her, a meaningful definition of intuition should reflect “direct, noninferential apprehension” (Osbeck, 1999, p. 246). According to Osbeck, intuition is akin to perception; that is, to intuit a moral principle is to perceive it directly, as self-evident, without inference. Thus, in order to differentiate moral decisions that are the product of intuitive processes from decisions that are the product of rational reflection, measures of moral intuition must operationalize the processes that define intuition: operation outside of conscious awareness, fast and efficient decision-making, a strong affective valence, and the automatic activation of the decision-making processes. It will become increasingly evident in the course of this project that the existing measures of moral intuition do not meet this standard and, as such, new measures of moral intuition are required.
The Social Intuitionist Model

Haidt’s (2001) social intuitionist model challenges the fundamental assumptions underlying the dominant theories of moral functioning in contemporary psychology. Since the cognitive revolution and the related ascendancy of Kohlberg’s (1969, 1981) cognitive-developmental theory of moral development, the study of moral cognition within psychology has focused almost exclusively on rational processes. The moral agent is typically characterized as engaging in deliberative, rational reflection in order to arrive at moral judgments. The social intuitionist model, on the other hand, is based on the idea that moral judgments are intuitive, affective evaluations that occur automatically, outside of the individual’s awareness. Haidt and Bjorklund (2008a, p. 188) describe the process as “the sudden appearance in consciousness, or at the fringe of consciousness, of an evaluative feeling (like–dislike, good–bad) about the character or actions of a person, without any conscious awareness of having gone through steps of search, weighing evidence, or inferring a conclusion.” With the evidence accumulated from across a diverse range of disciplines including social cognition, evolutionary psychology, anthropology, and primatology, Haidt (2001) has convincingly rekindled Hume’s (1740/2007) classic argument for the primacy of the human species’ innate moral sense. The proposed central role for affect in the processes of moral judgment in the social intuitionist model is the antithesis of the role ascribed to emotion in Kohlberg’s (1984a, p. 67) rationalistic theory: “With regard to moral emotion, then, our point of view is that the ‘cognitive’ definition of the moral situation directly determines the moral emotion the situation arouses.” Not surprisingly, Haidt’s attempts to reorient the study of moral judgment from reason to intuition have not been warmly received by advocates of rational theories of moral judgment.
Nevertheless, few would argue against the claim that the social intuitionist model has played a central role in rekindling the discussion of the processes that underlie our everyday moral decisions, as well as the origin of these processes in our species; and in doing so it has encouraged many new researchers from diverse disciplines to join in the study of moral functioning. The social intuitionist model of moral judgment emphasizes the role of automatic affective evaluations in generating moral judgments; it also emphasizes the important role of social interaction in the generation and transmission of moral judgments. Conversely, the social intuitionist model downplays the importance of rational processes in moral judgment-making. The typical path of moral cognition in this model begins with some eliciting event prompting a moral intuition, a process that Haidt (2001) describes as being akin to feature perception or aesthetic judgments of taste. Within this model, rational cognitive processes may be engaged to provide post hoc rationalizations in order to justify moral judgments, but they play no role in the initial formation of the moral judgment. To Haidt, the role of moral reasoning is primarily social; it is employed to teach or to persuade others through the justification of one’s own moral judgments; in essence, the social intuitionist model posits that moral reasoning is simply verbal reasoning intended to evoke moral intuitions in others (Haidt, 2008). One of the fundamental assumptions of the social intuitionist model is that the selection pressures of evolution have resulted in the human mind being equipped with the potential to manifest numerous moral intuitions. Haidt and Joseph (2004) argue for a dual-process characterization of human cognition.
They claim that there are two distinct processing systems within the mind: Most cognition is carried out automatically, outside of the individual’s awareness, by an intuitive system; but the human mind, unlike that of other species, is also equipped with a second information-processing system that relies on conscious, effortful deliberation. In the social intuitionist model, the intuitive system is based on the analogy of the mind as a toolbox, equipped with many specialized mechanisms that have been selected because of their success in solving recurrent adaptive problems. Moral intuitions are thought to have evolved, in part, because they facilitate the development and maintenance of beneficial social relations (Haidt, 2004). Although some might argue that applying notions of morality to higher primates represents a form of anthropomorphization, de Waal (1991) has observed that prescriptive norms emerge and are maintained in chimp groups, even without the benefit of language. Also, converging evidence from anthropology and primatology suggests that the evolutionary roots of human moral capacities are evident in higher primates. Fiske (1992) has identified four basic patterns of social interaction that are evident in all human societies: communal sharing, dominance hierarchies, direct reciprocity, and market valuing. Interestingly, de Waal has observed that chimp groups demonstrate the first three of Fiske’s social patterns. Haidt (2001) argues that the proto-moral behaviors of chimps provide tangible evidence for the phylogenesis of moral intuitions. Working from a theoretical background that emphasizes cross-cultural differences in the content of moral judgments, Haidt (2001) was faced with the difficult task of constructing a theory with the flexibility to explain the development of innate capacities that vary widely across cultures, and even within cultures.
Initially, Haidt argued simply that the human mind has evolved the potential to develop numerous moral intuitions underlying various moral concerns across the domains of autonomy, community, and divinity. More recently, Haidt has argued for a more elaborated modular approach that echoes the work of evolutionary psychologists such as Krebs and Denton (2005). Haidt (Haidt & Joseph, 2004) claimed that the human mind is prepared to learn moral intuitions pertaining to four modules of moral judgment, and that each module is equipped with a characteristic emotion: suffering (compassion), hierarchy (resentment vs. respect), reciprocity (anger/guilt vs. gratitude), and purity (disgust). The most recent version of the social intuitionist model (Haidt, 2008) has seen the inclusion of a fifth module, loyalty, to explain ingroup–outgroup interactions. Haidt (2008) argues that moral judgments in all cultures and social groups can be derived from these five foundational modules. In this line of reasoning, all well-formed humans are prepared at birth to learn moral intuitions from each of the five moral intuition modules. Haidt argues that maturation and the environment interact in shaping the expression of moral intuitions. The maturation of innate moral intuitions is described in Fiske’s (1991) process of externalization, in which innate cognitive models manifest themselves as a part of normal maturation. Fiske cites evidence that the four models of social behavior emerge during development in an invariant universal sequence: communal sharing in infancy, dominance hierarchies by 3 years of age, direct reciprocity by 4 years of age, and market valuing by middle to late childhood. Moreover, the expression of these intuitions occurs with an immediacy that cannot be explained by learning theories or other external pressures.
It is interesting to note that the order in which the social models emerge during externalization reflects the evolutionary history of these models in primates, suggesting that ontogeny recapitulates phylogeny in the case of moral intuitions. Haidt uses the analogy of phonological development to explain the pruning of potential moral intuitions. He cites Werker and Tees’ (1984) finding that, although children are born with the ability to discriminate between hundreds of distinct phonemes, after several years of exposure to a specific language, they lose the ability to make unexercised phoneme distinctions; Haidt contends that moral intuitions develop in a similar manner. Haidt argues that the development of moral intuitions is a case of experience-expectant development. He believes that a period of neural plasticity in the prefrontal cortex that occurs between late childhood and adolescence coincides with a sensitive period for the formation of moral intuitions. If we accept, as Haidt does, that cultures differ significantly in their moral norms, then we would expect children from different cultures to be exposed to significantly different patterns of moral norms. Through social interactions with their parents, their peers, and their society, children are continually confronted with the prescriptive social rules of their particular culture. Haidt argues that these experiences strengthen the particular moral intuitions that underlie the given norms of their culture, while unexercised moral intuitions are lost. Thus, Haidt’s model of the development of moral intuitions is a maintenance-loss model: Humans are born with the potential to learn numerous moral intuitions, but cultural experiences determine which intuitions are strengthened and which are lost. Haidt provides some controversial support for this contention through Minoura’s (1992) study of the children of Japanese business executives living abroad.
In short, children who were not immersed in Japanese cultural values during late childhood and early adolescence failed to develop typically Japanese values, even if they had lived in Japan before and after this proposed sensitive period. However, some critics have argued that the cultural meaning systems studied by Minoura are not necessarily moral meaning systems (Narvaez, 2008). Nevertheless, the possibility of such a sensitive period is an empirical question that surely necessitates further study. Less controversial support for the notion of cultural variability is provided by research conducted by Haidt and his colleagues within Western culture. Haidt has long contended that traditional cognitive-developmental theories of moral cognition such as those of Piaget (1932/1965), Kohlberg (1981, 1984a), Gilligan (1982), and Turiel (1983) are based on restrictive liberal Western notions of morality that encompass only two of his five moral modules: reciprocity and suffering (Haidt, 2004). Cross-cultural research reported by Haidt and Joseph (2004) provides compelling support for the idea that issues of purity, respect for authority and institutions, and loyalty to one’s own group, are imbued with moral significance in other cultures. In fact, such variation is apparent within Western culture. Haidt and Graham (2007) conducted an online survey that asked participants to rate the relevance of moral justifications that corresponded to the five fundamental moral intuitions. They found that participants who identified themselves as being extremely liberal indicated that they only considered justifications that reflected the suffering and reciprocity intuitions as being important in moral decisions. Those participants who described themselves as being extremely conservative, on the other hand, indicated that they considered justifications reflecting all five foundational intuitions as being important in making moral decisions.
Haidt and Graham make a compelling argument that these group differences in the types of moral intuitions experienced by liberals and conservatives help to explain the often incommensurable opinions of liberals and conservatives on matters of moral significance in contemporary society. Three lines of research have been used to support the social intuitionist model: moral dumbfounding, affective manipulations, and functional neuroimaging. Perhaps the best known, and most contentious, research conducted with the social intuitionist model is based on a methodology that Haidt refers to as moral dumbfounding. Essentially, moral dumbfounding involves presenting participants with vignettes that lack any discernible instance of physical harm or injustice, yet still evoke intuitive moral judgments (Haidt, Bjorklund, & Murphy, 2000). The following is an example of a typical vignette from Haidt (2001). Julie and Mark are brother and sister. They are travelling together in France on summer vacation from college. One night they are staying alone in a cabin near the beach. They decide it would be interesting and fun if they tried making love. At the very least it would be a new experience for each of them. Julie was already taking birth control pills, but Mark uses a condom too, just to be safe. They both enjoy making love, but they decide not to do it again. They keep that night as a special secret, which makes them feel even closer to each other. What do you think about that? Was it OK for them to make love? (p. 814) Haidt (2001) reports that his participants were nearly unanimous in their judgments that what Julie and Mark had done was morally wrong. After being reminded that nobody was hurt by their behavior and that there was little or no chance of inbreeding occurring, the participants still reported that the behaviors were wrong. When probed to explain why the act was wrong, Haidt (2001, p.
814) reported that participants responded with statements like, “I don’t know, I can’t explain it, I just know it’s wrong.” Haidt took these findings as evidence that people make their moral judgments based on intuition and not reasoning. While it is possible that the moral decisions reported to Haidt were, in fact, intuitions, his argument that they must necessarily be intuitions does not hold up to scrutiny, nor does the claim that such decisions are purely the product of affective responses. Haidt’s argument that people must be relying on moral intuitions in the sibling incest moral dumbfounding task because there are no rational reasons to oppose incest as it is depicted ignores the distinction between consequentialist and deontological moral decisions. For those who espouse consequentialism, basing their decisions on the potential outcomes of an act, there may not be a “reason” to find the sibling incest story wrong. However, a deontological moral choice could well be made without a reliance on affective motivation: The widely held norm in Western culture that sexual relationships with family members are always wrong requires no reflection, evaluation, or justification. In fact, one could argue that deontological principles could facilitate moral intuition without affect: Principles such as respect your elders or do not intentionally inflict pain on others provide sufficient justification for making moral judgments. On the other hand, it is considerably more difficult to make the argument that moral intuition is solely the product of affective evaluations. Continuing with the sibling incest example, we can infer that this moral dumbfounding task evokes moral intuitions from the purity module; as such, moral intuitions concerning this scenario should reflect disgust according to the social intuitionist model. The question now becomes: how do we get from disgust to wrong?
When my infant daughter vomits on me I certainly find it disgusting, but I do not make a moral judgment that her behavior is wrong or bad. Nichols (2008) argues convincingly that sentimental theories of moral judgment, theories in which affect alone causes moral judgments, are untenable because there needs to be an evaluative mechanism to activate the affect and, in order for that evaluation to take place, there needs to be some kind of normative theory of what is appropriate. The second line of research cited by Haidt and his colleagues in support of the social intuitionist model involves the study of how manipulations of affective processes influence subsequent moral judgment. In a pair of studies, Wheatley and Haidt (2005) examined how post-hypnotic suggestions that evoked disgust responses to arbitrary non-moral words would influence subsequent moral judgments. They found that the activation of a disgust response significantly increased the severity of moral judgments relative to a control group. Schnall, Haidt, Clore, and Jordan (2008) conducted a series of studies to explore how feelings of disgust magnify moral judgments. Three interesting results emerged from their work: First, they replicated Wheatley and Haidt’s finding that inducing feelings of disgust magnified the severity of ratings when making a moral judgment. Second, they found that individual differences in their participants’ awareness of their bodily sensations influenced the effect of disgust on moral judgments. And third, they found that so-called moral emotions have different effects: disgust increased ratings on moral judgments whereas sadness decreased ratings. Both of these projects highlight the importance of affect, particularly disgust, in moral judgment.
However, although these findings suggest that emotional activation influences our appraisals of how good or bad people or events might be, they do not provide compelling evidence that affective responses cause moral judgment. It is equally plausible that the effect of emotional activation occurs after a judgment has been made, serving as a motivational force for the judgment (Huebner, Dwyer, & Hauser, 2009). With recent advancements in neuroimaging technology, neuropsychological approaches have become a very popular method for studying decision-making and other information-processing activities. In fact, biological evidence of mental functioning is often portrayed as being somehow more tangible than other types of data. It should be cautioned, however, that detecting a correspondence between biological mechanisms and mental mechanisms does not, in itself, provide any meaningful insight into the origin of either. Nevertheless, neuropsychological data can support the existence of specific localized mechanisms, as well as providing some insight into their functioning. Greene and Haidt (2002) reviewed a number of studies that looked at the biological processes underlying moral judgments. They found no indications that there was a specific location in the brain dedicated to the processes of moral judgment. However, certain moral judgment distinctions seem to result in discernibly different patterns of brain activity. Greene, Sommerville, Nystrom, Darley, and Cohen (2001) conducted a study in which they compared neural activity during two types of moral judgment scenarios. They had participants respond to classic philosophical moral dilemmas, the trolley dilemma and the footbridge dilemma (Thomson, 1986), while being monitored in an fMRI scanner. The trolley dilemma presents the participant with a scenario in which a runaway trolley is headed towards a group of five people.
On its present course the trolley will kill all five people; the only way to save them is to turn a switch that will send the trolley onto another track. However, if the trolley is switched onto the other track it will kill one person. The participant is then asked whether it is permissible to cause the death of one person to save five people. The footbridge dilemma presents a scenario in which a runaway train is going to kill five people; the only way to save the five people is to push a large individual from the footbridge onto the tracks. Again, the participants are asked whether it is permissible to cause the death of one person to save five people. Greene et al. argue that these two dilemmas represent distinct types of moral judgment, impersonal and personal. A personal dilemma must meet three criteria: It has to be likely to cause personal harm, it must be directed at a specific individual, and the harm to the individual must not be simply the result of the deflection of an existing threat onto a new target (Greene & Haidt, 2002). Thus, the trolley dilemma represents an impersonal moral dilemma and the footbridge dilemma is a personal moral dilemma. Greene et al. (2001) found that participants were quick to allow the impersonal moral judgment, but much less likely to allow the personal one. Thus, two dilemmas with the exact same consequences were judged very differently. Greene et al. argue that personal moral dilemmas are treated differently by the participants because people have an evolved moral constraint against directly causing the death of a member of their group. Thus, personal moral dilemmas evoke strong emotional responses which precipitate intuitive judgments that preclude transgressions (Greene, 2005). In terms of brain activity, Greene et al.
(2001) found that during impersonal moral decisions there was relatively more activity in regions associated with working memory (the inferior parietal lobe and the dorsolateral prefrontal cortex). During the personal moral judgments, however, there was increased activity in the superior temporal sulcus, a region of the brain associated with theory of mind and other forms of social cognition, as well as increased activity in two brain regions related to emotions (the posterior cingulate cortex and the medial prefrontal cortex). Thus, impersonal moral judgments demonstrated patterns of neural activity that suggested higher-order cognition, whereas personal moral judgments were characterized by affective activation. In this respect, the neuroimaging data provide some support for Greene et al.’s (2001) position in that the personal moral dilemmas were related to strong affective activation. Haidt (2007) and Greene et al. (2001) argue that the neural activity observed during the personal moral dilemmas provides evidence that moral intuition is primarily constituted from affective responses. Huebner et al. (2009) argue that this interpretation goes beyond the data. Given that the data are merely correlational, Huebner et al. argue that what we can take from this line of neuroimaging research is simply that deontological judgments are often accompanied by affective activation. Nichols and Mallon (2006) conducted a series of studies designed to test whether rule systems play a role in resolving the trolley dilemma and the footbridge dilemma. In a catastrophe version of the footbridge dilemma (the train is carrying a deadly virus that, if leaked, will kill billions of people; however, if the protagonist throws the large man off the bridge the train will not derail and billions will be saved), 68% of the participants reported that the protagonist had broken a moral rule, but 66% reported that, all things considered, the protagonist had done the right thing.
These findings suggest that even personal moral dilemmas are subject to normative rules of behavior and cost–benefit analyses. Thus, the results from Greene et al. (2001) and Nichols and Mallon (2006) taken together suggest that affect, normative values or rules, and utilitarian analyses all interact in the constitution of moral judgments. The social intuitionist model of moral judgment is praiseworthy in many respects: Haidt’s theory has illuminated the importance of moral intuition in the process of real-life moral judgment and, in doing so, has played a significant role in invigorating the field of study. The social intuitionist model reminds Western psychologists to consider notions of morality beyond those to which we commit ourselves. And finally, the social intuitionist model reaffirms the important role of affect in the moral judgment process. But we need to consider critically the relative contribution of purely affective moral intuitions in the broader context of people’s real-life moral judgments. Haidt (2001, p. 817) characterizes moral judgment “as evaluations (good vs. bad) of the actions or character of a person.” Clearly such a definition represents an overly restrictive characterization of moral decision-making. Moral decision-making encompasses a number of different types of judgments. Boyd (1977) suggests that there are at least three general types of moral judgments: those that concern evaluations of good versus bad, as in Haidt’s moral intuitions; those that concern evaluations of right versus wrong; and those that concern praising and blaming others. Purely affective intuitions would seem to be ill-equipped to engage in the types of decision-making processes required for the latter two forms of moral decision-making. One could also argue that Haidt’s definition of intuition is itself overly narrow.
To define intuition as simply unconscious affective judgments neglects much of the conceptual understanding of intuition that has accumulated across the history of Western philosophy. Osbeck (1999, p. 246) argues that an informed psychological notion of intuition should emphasize “direct, noninferential apprehension,” not simply affective or automatic judgments. Historically, intuition has been construed as the direct perception of first principles, of the self-evident truths from which reasoning begins (Osbeck, 1999). Haidt’s conception of intuition as purely affective judgment disregards much of what is the proper domain of intuition. Had Haidt adopted a more conceptually rich definition of intuition, the social intuitionist model would address a broader scope of everyday moral cognition. Given the standing of rational theories of moral cognition within psychology, it is not surprising that one of the fundamental criticisms of the social intuitionist model is that it disregards the role of reasoning in everyday moral functioning. As noted earlier, Haidt (2001, 2004, 2008) contends that moral reasoning functions primarily in the interpersonal context of trying to persuade others of the value of one’s moral judgments. Narvaez (2008) argues that when one considers the full range of moral cognition, including such processes as evaluating meaningful alternative actions, assessing past choices, monitoring one’s progress towards morally relevant goals, and setting personal goals, it is difficult to see how simple affective intuitions would be sufficient to carry out all of these processes. Narvaez argues that although intuition likely plays some role in all of these processes, it is unrealistic to entirely dismiss the contribution of reflective reasoning.
In their most recent writings Haidt and Bjorklund (2008b) have acknowledged the weakness of their original definition of moral intuition, which equated moral intuition with moral judgment. The distinction between intuition and judgment is important in that it allows for moral judgments to be arrived at through various processes, both intuitive and rational, and it recognizes that affective responding is a part of the process of moral intuition, not an outcome or end-point. Of course, this revision also allows for the possible inclusion of decision rules in the social intuitionist model of moral judgment. Given this subtle shift in position, the next iteration of the social intuitionist model has the potential to rectify many of the criticisms lodged against it. Heuristics and Biases Theories of Moral Intuition The discussion will now turn to theorists who highlight the significance of cognition in moral intuition. The argument put forth by Sunstein (2003, 2005) is that people’s everyday moral functioning is often dictated by moral heuristics—moral shortcuts or rules of thumb—which can lead to systematic errors in judgment. In particular, Sunstein is interested in the effects that moral heuristics have on legal and political decision-making. Sunstein’s (2003, 2005) discussion of moral heuristics is based on Kahneman and Tversky’s (1984) Nobel-Prize-winning research on heuristics and biases in human cognition. The most recent rendering of Kahneman and Tversky’s theory is presented within the framework of a dual-process theory of human cognition. Dual-process theories are based on the idea that within the human mind there are two distinct, yet interactive, cognitive systems that correspond to intuition and reasoning (Kahneman, 2003). Although there is considerable consensus among researchers regarding the characteristics of these two systems (e.g., Chaiken & Trope, 1999), the two systems have been given various labels across the literature.
For the sake of clarity I will adopt Stanovich and West’s (1999) taxonomy and refer to the systems as System 1 and System 2. System 1 is characterized as being fast, automatic, and affectively charged, whereas System 2 is characterized as being slow, effortful, and controlled (Kahneman, 2003). According to Kahneman, the two systems are functionally interactive: Most judgments and intentions are initially generated by System 1, and are involuntary and automatic, similar to perceptual processes; System 2 monitors and corrects the judgments generated by System 1; however, in everyday life, relatively few of the intuitive judgments generated by System 1 are corrected or overridden by System 2. Thus, although Kahneman and Tversky’s classic research emphasized people’s use of heuristics under conditions of uncertainty, the current formulation of their theory proposes that System 1 is the primary decision-making process in people’s everyday lives. Given that Sunstein’s moral heuristics are based primarily on System 1 processes, it would seem prudent to explore the proposed functioning of this system in greater detail. At the core of System 1 processes are the heuristics that shape intuitive judgment. Kahneman and Frederick (2002) describe three general-purpose heuristics thought to underlie intuitive judgments: availability, representativeness, and affect. The simple efficiency of these heuristics is thought to derive from the fact that they capitalize on the evolved competencies of the human mind. The availability heuristic is thought to capitalize on efficient memory retrieval processes; according to the availability heuristic, more easily remembered stimuli will be judged as more frequent than harder-to-remember stimuli (Tversky & Kahneman, 1974).
For instance, when asked to consider the relative frequency of steroid use among baseball players and hockey players, under the availability heuristic the saliency of episodes such as Rafael Palmeiro’s vehement denial of steroid use and subsequent positive test for steroid use would likely lead people to judge that steroid use is more prevalent in baseball than it is in hockey. The representativeness heuristic, on the other hand, exploits pattern recognition mechanisms; under the representativeness heuristic people make judgments of similarity based on superficial characteristics (Tversky & Kahneman, 1974). For example, when asked if they think Barry Bonds uses steroids, people employing the representativeness heuristic will compare Bonds with their prototype of a typical steroid user. The affect heuristic is the most recent addition to the list of general-purpose heuristics. Kahneman and Frederick (2002) argue that every stimulus evokes an affective evaluation, and when these initial automatic evaluations guide our judgments they serve as heuristics. For example, when we encounter an athlete who has been labelled a “cheater,” the affective valence evoked by the label will likely shape our subsequent judgments of that individual. Across the past three decades, considerable data have accumulated supporting the existence of these heuristics and their effects on human decision-making (see Gilovich, Griffin, & Kahneman, 2002); however, only recently have Kahneman and colleagues described the processes that are thought to underlie the functioning of System 1. In the early work of Tversky and Kahneman (1974), the operation of heuristics was explained by the notion that when people are confronted with a complex decision, they often respond by unknowingly answering a simpler question.
Take the earlier example concerning the relative rate of steroid use among baseball players and hockey players: When an individual uses the availability heuristic they are answering the question, “In which sport are incidents of steroid use more easily recalled?” instead of the more difficult question, “What are the relative rates of steroid use in baseball and hockey?” Kahneman and Frederick (2002) have recently provided a more thorough description of the process that underlies intuitive judgments, a process they have termed attribute substitution: “Judgment is said to be mediated by a heuristic when the individual assesses a specific target attribute of a judgment object by substituting another property of that object—the heuristic attribute—which comes more readily to mind” (p. 53). In the previous example the actual rates of steroid use in baseball and hockey form the target attribute; given that very few people would know the actual rate of steroid use in either of these sports, it is likely that for most people the recall of salient steroid-related events in the media would come to mind more readily; and thus, the saliency of such events in memory recall would serve as the heuristic attribute. Kahneman and Frederick (2002) explain that the intent to assess any given attribute initiates a cognitive search for a reasonable value; often this search will come to a quick end, with obvious values for such assessments as “How old am I?” coming readily to mind. However, in situations where the target value is relatively inaccessible, a highly accessible, semantically related attribute may be substituted (Kahneman & Frederick, 2002). Kahneman and Frederick discuss a number of factors thought to determine the accessibility of given attributes, a discussion that is beyond the scope of this dissertation. Another central aspect of Kahneman and Tversky’s (1984) research program has been the examination of systematic cognitive biases caused by the use of heuristics.
Kahneman and Frederick (2002) explain that since the target attribute and the heuristic attribute are different, attribute substitution inevitably introduces systematic biases in judgment. An extensive literature documenting numerous examples of bias has accumulated over the years (see Kahneman & Tversky, 1984; Kahneman & Frederick, 2002). Within this research tradition, a bias is defined as a systematic deviation from a normative theory; much of the research conducted by Kahneman and Tversky has examined biases in probabilistic judgments. A classic example of this research is a study in which participants were asked to compare the relative frequency of words ending in “ing” with that of words having “n” as the second-to-last letter (Kahneman & Tversky, 1973). The apparent relative availability of examples of words ending in “ing” led the participants to judge words ending in “ing” as being significantly more frequent than those with “n” as the penultimate letter, an obvious logical error. The emphasis on biases in the literature has been the source of much controversy. Many scholars have argued that intuitive judgments are far more accurate in natural settings than they appear to be in artificial laboratory settings (cf. Gigerenzer, Czerlinski, & Martignon, 2002). Kahneman and Frederick (2002) argue that, given sufficient information, thus increasing the accessibility of the target attribute, intuitive judgments will often be accurate, or will be corrected by System 2; but, in situations where there is limited information on which to base judgments, heuristics can lead to systematic biases. Having outlined Kahneman and Tversky’s heuristics-and-biases theory, I now turn to the task of explaining Sunstein’s application of heuristics and biases to the processes of moral judgment.
The thesis of Sunstein’s discussion is straightforward: People’s everyday moral judgments are often guided by general-purpose heuristics, and the use of these heuristics can lead to serious systematic errors. Sunstein argues that when confronted with moral, political, or legal dilemmas, people often engage in attribute substitution. For example, when faced with a difficult moral question a person may ask themselves what a trusted religious leader would do in similar circumstances; or they may try to recall a similar situation they have experienced, and apply the solution used for the past problem to the current situation (Sunstein, 2005). Sunstein explains that moral heuristics are often simply generalizations from a range of problems to which they originally provided efficient solutions. Errors arise, then, when these generalizations are treated as universal principles and applied in contexts where their original justifications no longer apply. For example, “it is wrong to steal” is a very general moral rule, but in a context wherein theft may save another’s life it becomes permissible, if not obligatory, to steal. Of course, as Sunstein notes, when Kahneman and Tversky studied biases, they were examining factual errors, so determining some deviation from a normative theory was relatively easy. The situation in the moral domain is quite different; there is no universally accepted moral truth that one can use to assess the accuracy of intuitive judgments. In order to undertake some assessment of moral heuristics, Sunstein adopts a weak form of utilitarianism (which he refers to as weak consequentialism) as a normative theory of moral judgment. Since one of Sunstein’s goals is to study how moral heuristics influence legal and political judgments, weak consequentialism provides a useful comparison because it holds that the intended goals that shape a policy or law should serve as the normative theory.
Having characterized moral heuristics as a System 1 mechanism, Sunstein (2003, 2005) provides several examples of how moral heuristics function in the real world, and how these moral heuristics come to influence social policy and legal judgments. Sunstein (2005) describes a number of moral heuristics across a range of content domains including punishment, sexuality and reproduction, and risk regulation. One promising moral heuristic examined by Kahneman, Schkade, and Sunstein (1998) is the outrage heuristic. The outrage heuristic is thought to underlie people’s punishment decisions. Mapping neatly onto the idea of the affect heuristic, the outrage heuristic posits that people base their punitive intent on the level of outrage evoked by a given situation (Kahneman et al., 1998). In many instances the notion of punishment being proportional to the transgression is reasonable and effective, but in some contexts, Sunstein believes, the outrage heuristic leads to significant errors. In looking at jury decisions, Kahneman et al. (1998) concluded that jurors often misapply the outrage heuristic when making decisions regarding corporate wrongdoing. Jurors, it seems, punish corporations as if they are individuals: Stiff fines are meted out to corporate wrongdoers to punish them for their misdeeds. Sunstein (2005) argues that this application of the outrage heuristic is clearly a systematic bias because, on reflection, it seems obvious that fines will not “hurt” corporations, but they may cause serious negative repercussions for innocent parties through higher costs, loss of employment, or lower wages. Although the outrage heuristic is a reasonable candidate for a moral heuristic, Sunstein’s other candidate heuristics seem less viable. Take, for example, the “do not tamper with natural processes for human reproduction” heuristic. It is suggested that this heuristic underlies people’s powerful intuitive opposition to human cloning (Sunstein, 2005).
However, it would seem that if such a heuristic did in fact exist, there would be significant opposition to the production and dissemination of reproductive technologies such as in vitro fertilization, which, of course, there is not. Sunstein argues that this heuristic is a specific case of a more general “do not tamper with nature” heuristic. This more general heuristic is thought to explain people’s seemingly irrational preference for things natural over those human-made, such as people’s preference for natural water over processed water even though the two have identical chemical structures (Rozin & Nemeroff, 2002). A case can certainly be made for the existence of the more general heuristic, but one can hardly consider it a moral heuristic. A potential weakness of Sunstein’s discussion of moral heuristics would seem to be that his heuristics do not clearly pertain to morality at all. Granted, Sunstein is primarily concerned with the impact of so-called moral heuristics on social and legal policies, and one could argue that such issues reflect individual and cultural values, but issues pertaining to corporate litigation, emissions trading, or cost–benefit analyses are not the moral issues typically considered in the purview of psychology. Within psychology, discussions of morality are typically constrained to notions of justice, welfare, personal rights, and care in the context of interpersonal relations. From the perspective of social domain theory (Turiel, 1983), the regnant theory of moral functioning in contemporary psychology, social and legal policies are often arbitrary; their meaning is defined by the social system in which they are constructed; their influence arises from the consensus that upholds them; and as such, they are more properly considered to be issues of social convention rather than issues of morality.
That being said, the role of affect heuristics, such as the outrage heuristic, in everyday moral functioning seems a promising avenue of exploration. Another apparent shortcoming of Sunstein’s discussion arises from his treatment of the origins of his so-called moral heuristics. Granted, Sunstein’s focus was not the phylogeny or ontogeny of moral heuristics, but given that he raised the issue of their origins, this question deserves a more thorough treatment. Sunstein (2005) explained that many moral heuristics may have evolutionary origins, but that social learning and cascade effects could also facilitate the development of moral heuristics. Some elaboration on these points may be helpful. Given the controversy concerning the origins of System 1 processes (Stanovich, 2004), it would be interesting to read an account of how and why some moral heuristics evolved while others developed through the mechanisms of social learning, and whether the origin of a given heuristic has any bearing on its functioning. Moreover, Sunstein’s (2005) model of moral heuristics fails to specify the role of System 2 in the moral domain. Indeed, Sunstein’s initial description of attribute substitution in the moral domain, wherein individuals ask themselves what a trusted authority figure might do in the same situation, would appear to reflect the deliberate processing that characterizes System 2. In any case, it is apparent that System 2 has some role in moral cognition. Recent research by Pizarro et al. (2003) suggests that System 2 rational processes can correct faulty intuitive moral judgments, but it is unclear under what circumstances this process occurs in everyday life.
In the end, Sunstein’s discussion of moral heuristics is characterized by four weaknesses: First, it lacked a discussion of the heuristics that underlie people’s everyday moral intuitions; second, it failed to provide an explanation or description of the interaction of moral heuristics and System 2 processes; third, the theoretical framework lacked any meaningful explanation of the origins of the moral heuristics; and fourth, following in the tradition of the heuristics-and-biases approach, Sunstein construed moral heuristics as a suboptimal, error-prone alternative to rational reflection. That being said, one of Sunstein’s stated goals was to encourage further exploration of the processes that underlie moral intuition, and in that respect he probably has been successful. Moreover, several aspects central to the heuristics-and-biases literature seem to be important considerations for any meaningful account of people’s real-life moral judgment processes, in particular the functions of the dual-process model of human cognition and the affect heuristic. Pizarro et al. (2003) adopted a dual-process approach in their explanation of attributions of moral responsibility. They were interested in how people attribute moral responsibility for causally deviant acts. The causally deviant acts were presented in vignettes in which a protagonist intends to achieve some goal (either moral or immoral) and acts in such a way as to bring about that goal, but something intercedes to directly achieve the protagonist’s original goal. Across a series of studies, Pizarro et al. found that their participants deviated from normative standards in their attribution of the moral responsibility of the protagonist in the causal deviance conditions. The actions of the protagonists in causally deviant scenarios were judged to be less wrong than the actions of the protagonists in the causally normal scenarios, even though the intentions and the actions of the two protagonists were exactly the same.
In their fourth study, Pizarro et al. set out to examine a dual-process explanation for these deviations in the attribution of moral responsibility. Following the methodology of Epstein, Lipson, Holstein, and Huh (1992), explicit instructions to either “think rationally” or “give a gut response” were used to prime the rational and intuitive processes respectively. Epstein et al. (1992) found that such directions were sufficient to evoke different cognitive approaches. In Pizarro et al.’s fourth study, participants were asked to make both an intuitive judgment and a rational judgment; the order of these judgments was randomized. Participants who were asked to first make a rational attribution of moral responsibility gave equal attributions of moral responsibility for the protagonists in the causally deviant and the causally normal scenarios; yet, when subsequently asked to make an intuitive judgment, these participants attenuated moral responsibility for the protagonist in the causally deviant scenario. Participants who were asked to make intuitive judgments first also attenuated moral responsibility for the protagonist in the causally deviant scenario; however, many participants also attenuated moral responsibility in response to the subsequent rational instructions. At first glance these findings might seem strange but, according to Epstein et al. (1992), once the intuitive system is primed by the first set of intuitive instructions, the rational system is recruited to generate post hoc justifications for the intuitive decisions. Pizarro et al. argue that the data from their research support the notion that moral decisions are often intuitive; however, their data also seem to suggest that System 2 processes can, in some circumstances, correct intuitive errors. No matter whether you look at the moral heuristics’ glass as half-empty, as Sunstein does, or half-full, as optimistic heuristics researchers such as Gigerenzer et al.
(2002) do, the reality is that moral heuristics, as construed within the heuristics-and-biases framework, are based on the notion that they are systematically error-prone. For many within the heuristics-and-biases tradition, this assertion makes sense, as there is a long history of research demonstrating the systematic errors that occur through the application of the availability heuristic, the representativeness heuristic, and a number of other cognitive shortcuts that people sometimes use (Gilovich et al., 2002). The question then becomes: Are moral intuitions fundamentally prone to systematic error in the same way that heuristics are? If they are the product of general-purpose cognitive shortcuts then this might be the case. Of course, the accuracy of moral intuitions is an empirical issue that is difficult to resolve given the absence of a consensus on absolute moral truth. Expertise Theories of Moral Intuition There is growing evidence in the psychological literature supporting the notion that people rely on their intuitions to make moral judgments in their everyday lives. Walker, Pitts, Hennig, and Matsuba (1995) found that a significant proportion of their participants reported that their real-life moral judgments were guided by gut feelings rather than by rational reflection. A cynic might argue that people’s reports of relying on intuitions in their daily lives do not demonstrate that those judgments are effective, accurate, or right. However, Colby and Damon’s (1992) study of moral exemplars suggests that moral intuitions may well be the product of a different kind of decision rule. The moral exemplars in Colby and Damon’s study reported that they did not need to engage in deliberate reflection in order to make moral decisions; they reported that they just seemed to know the right thing to do. Bartsch and Cole Wright (2005, p.
546) argue that moral heuristics likely resemble moral rules such as “always keep your promises.” Such rules, they argue, do not necessarily lead to errors; errors arise when the moral rules are interpreted rigidly or misapplied. If these are the types of principles that people apply to moral issues, then, contrary to Sunstein’s argument, it would appear that moral heuristics need not result in systematic error. Bartsch and Cole Wright also argue that intuitive moral decisions need not be based on heuristics; for those with experience in moral matters, they may in fact reflect accumulated procedural knowledge. These authors subscribe to an expertise model of moral maturity; that is, they believe that novice moral agents are more likely to rely on moral heuristics to make their decisions, whereas morally mature individuals are likely to respond intuitively as a product of their accumulated experiences. Bartsch and Cole Wright believe that moral development constitutes a shift from the rigid application of moral principles to an intuitive reliance on the practices that have benefitted the individual in similar situations in the past. Both lines of reasoning from Bartsch and Cole Wright converge on the point that moral intuition should not be conceived of as inherently error-prone. Other theorists have adopted a similar stance; the discussion will now turn to two perspectives that emphasize the relation between moral intuition and moral expertise. Accessibility and expertise are fundamental elements that underlie Narvaez and Lapsley’s (2005) social-cognitive theory of moral personality. The social-cognitive theory of moral personality endorses a view of automatic cognition with established roots in psychology, one that Wegner and Bargh (1998, p.
459) summarized thus: “activities frequently and consistently engaged in require less and less conscious effort over time.” Narvaez and Lapsley (2005) identified three potential types of automatic moral cognition derived from the work of Bargh (1994): preconscious, postconscious, and goal-dependent. Bargh (1994) describes preconscious automatic cognition as the unintentional evaluation of a stimulus that occurs before, and without, conscious deliberation. Narvaez and Lapsley (2005) argue that preconscious automaticity explains the involuntary activation of behavioral scripts and schemas; some schemas, they argue, are more chronically accessible than others. Thus, if one frequently accesses constructs that govern appropriate behaviors or moral duties, then those schemas will become more likely to be activated preconsciously. Postconscious automaticity, on the other hand, describes the temporary activation of constructs as the result of the activation of related constructs, a form of spreading activation (Bargh, 1994). For example, when one is asked to think about Gandhi, schemas related to honesty, integrity, and responsibility are also temporarily activated. Goal-dependent automaticity reflects well-practiced skills that one employs intentionally to solve a familiar problem, much in the same way that driving a car begins as an effortful process but becomes progressively more automatic through experience (Bargh, 1994). Narvaez and Lapsley (2005) construe automatic moral cognition as the product of moral expertise. Narvaez, Lapsley, Hagele, and Lasky (2006) argue that the chronic accessibility of moral constructs results from their frequent and consistent activation. In this way, “moral chronicity accounts for the fact that many moral dispositions are automatically engaged by individuals for whom moral categories are chronically accessible” (p. 969). Narvaez et al.
(2006) examined how moral chronicity influenced performance on decision-making tasks. Participants were classified as moral chronics or moral non-chronics using the methodology outlined by Higgins and Brendl (1995): Participants were asked to identify the characteristics of people they like, dislike, seek out, and avoid. For Narvaez et al., participants who consistently identified moral traits as being important in their evaluations of others were classified as moral chronics; participants who did not employ moral traits in their evaluations were classified as moral non-chronics. On spontaneous trait inference tasks, moral chronics were more efficient in recalling target sentences when given morally relevant dispositional cues than non-moral semantic cues; the opposite pattern was observed for moral non-chronics (Narvaez et al., 2006). These results seem to support the notion that there are individual differences in the potential for morally relevant information to activate moral schemas. Narvaez et al. also compared the reaction times of moral chronics and moral non-chronics on a lexical decision task examining stories with moral themes. They found that moral chronics were significantly faster in responding to stories wherein the protagonist failed to help a relative. The work of Narvaez, Lapsley, and their colleagues expands the understanding of how many aspects of moral cognition may become automated beyond the simple application of heuristics; and, of course, the notion that automatic moral cognition is a product of moral expertise reinforces the idea that moral intuition need not be construed as suboptimal or error-prone. The notion that moral maturity or expertise may be related to intuitive moral functioning is not new to the study of moral cognition within psychology.
Haidt (2008) argues that the reinvigorated emphasis on intuitive moral judgment in contemporary psychology can be contrasted with the rational approach advocated by cognitive-developmental theorists such as Kohlberg. However, it would appear that Kohlberg was well aware of individual differences in decision-making strategies. Although Kohlberg is best known for his stage theory of cognitive development, a theory that emphasizes the structural sophistication of an individual’s moral reasoning, he and his collaborators also proposed a moral typology in order to capture differences in moral functioning related to the concept of moral autonomy (Kohlberg, Levine, & Hewer, 1983). Through the ongoing structural refinement of Kohlberg’s scoring criteria for his Moral Judgment Interview (MJI), a process that emphasized the distinction between the structure and the content of moral reasoning, the aspects of moral judgment content that signified moral autonomy were discarded from the scoring criteria. Kohlberg sought to recapture the concept of moral autonomy by introducing the notion of substages to his structural model. Kohlberg hypothesized that within each stage there were two functional substages interposed between content and structure; he referred to these as Substages A and B. According to the substage model, when individuals make the transition from one stage to the next, they enter the new stage at Substage A with an unequilibrated or heteronomous understanding of the justice structure (Kohlberg et al., 1983). When the justice structure becomes equilibrated, Kohlberg argued, the individual demonstrates autonomous moral reasoning characterized by prescriptive, universalized choices. Kohlberg dropped the substage approach when empirical studies failed to support the developmental trajectory from Substage A to B within stages (Tappan, Kohlberg, Schrader, Higgins, Armon, & Lei, 1987).
The theoretical concept of moral types was introduced by Kohlberg to replace the substage model (Tappan et al., 1987). Based on the construct of an ideal type as elaborated by Weber (1949), Kohlberg’s later conception of moral autonomy contrasts an ideal autonomous moral type with an ideal heteronomous moral type. In keeping with the substage designations, the heteronomous moral type is referred to as Type A, and the autonomous moral type is referred to as Type B. The essence of Kohlberg’s moral types is conveyed in Colby’s (1978) discussion of the moral substages:

Judgments at substage A tend to stress external considerations or literal interpretations of roles, duties, or rules, and tend to be unilateral and particularistic rather than generalized or universal in orientation. Judgments at substage B, while remaining within the same sociomoral perspective, have developed within that perspective toward greater reversibility, universality, and generalizability, and toward a deeper comprehension of the “spirit rather than the letter” of the rules and roles. (p. 94)

Although addressing the moral substages, Colby’s observations capture the essential features of moral autonomy. For Kohlberg (1984b), moral autonomy was characterized by an intuitive understanding of moral principles. Morally autonomous individuals were able to intuit the appropriate moral principles in decision making, yet they lacked the cognitive sophistication to explain why the principles held true. In fact, Kohlberg (1984b) originally referred to the autonomous moral type as the intuitive moral type. For Kohlberg (1984b), then, one component of moral autonomy concerned the capacity to appreciate the relevant moral principles in a given situation. Another important component of moral autonomy is the manner in which some theorists believe it influences the processes of everyday moral decision-making. According to Davidson and Youniss (1991), moral autonomy is akin to identity.
They believe that, through mutual and reciprocal interactions with others, the individual constructs autonomous moral principles; these principles represent the basis of the individual’s autonomous identity. Davidson and Youniss (1991) argue that spontaneous moral judgments and the type of moral theorizing tapped by Kohlberg’s moral dilemmas are products of distinct processes. They believe that the structure of the autonomous identity influences everyday moral functioning in a reflexive, habitual manner, whereas the cognitive structures emphasized in Kohlberg’s stage model influence those rare occasions when an individual is required to actively reflect on a moral dilemma. Following this line of reasoning, when considered together, Kohlberg’s moral typology and his stage theory of moral development appear to describe a dual-process view of moral cognition. Within the past several years, then, the study of moral cognition in psychology has taken a decidedly intuitive turn. There is now a consensus that intuitive decision-making processes play an important role in everyday moral functioning; beyond this general acknowledgement, however, there seems to be little agreement amongst the various theoretical camps. Some researchers, such as Haidt (2001, 2007, 2008), argue that all moral judgments are the product of intuitive processes, and that moral reasoning serves only to buttress intuitive judgments and to evoke intuitive judgments in others. However, it appears that most researchers in this area believe that both intuition and reason play a role in everyday moral decision-making (cf. Narvaez, 2008; Pizarro et al., 2003). If this is the case, then one significant issue that must be addressed is determining what factors influence whether intuitive or rational processes are applied to a given problem.
One potentially fruitful avenue for exploration is individual differences in the propensity to engage in intuitive or rational processing. Pizarro et al. point to such an individual-difference variable when they suggest that some people may be more likely to trust their intuitions. There is considerable empirical evidence for individual differences in rational and intuitive decision-making processes in the general heuristics-and-biases research literature (cf. Stanovich & West, 2008). One goal of this project is to explore individual differences in people’s propensity to engage in intuitive or rational decision-making processes when making moral judgments. This examination should provide some insight into a factor that may strongly influence whether people make intuitive judgments or rely on their reasoning processes. For the purpose of this project, moral autonomy, as characterized in Kohlberg’s moral typology, will serve as the construct that embodies individual differences in moral decision-making styles. There are a number of reasons for choosing Kohlberg’s moral typology. Much of the current emphasis on moral intuition in psychology is cast as a reaction to the emphasis on reasoning processes in cognitive-developmental theories, and in Kohlberg’s theory in particular (Haidt, 2008). However, the moral typology demonstrates that Kohlberg was well aware of the role of intuitive processes in moral decision-making. Moral autonomy represents a style of moral functioning that is reflexive and non-deliberative; Kohlberg’s moral typology is intended to capture individual differences in moral autonomy; thus, Kohlberg’s moral typology should characterize individual differences in reflexive, non-deliberative moral decision-making.
Moreover, within Kohlberg’s typology, moral autonomy reflects a more mature moral understanding; as such, moral intuition is not characterized as being suboptimal or necessarily error-prone. A second issue that arises from the review of the literature concerns the very nature of moral intuition. Haidt (2001, 2008), Greene (2005), and other sentimentalist theorists have characterized moral judgment as an affective response, like an aesthetic judgment. Other theorists, such as Sunstein (2005), have described moral intuition as a cognitive process. And still other theorists argue that moral intuition is a product of both affect and cognition (Hauser, 2006; Nichols, 2008; Pizarro et al., 2003). As discussed earlier, research conducted by Nichols and Mallon (2006) provided compelling evidence that affect, deontological rules, and rational reflection may all play a role in making moral judgments. The entanglement of affect and cognition raises some interesting and important questions. Given the diverse characterizations of the intuitive process across the different theoretical perspectives, the construct validity of moral intuition is still somewhat tenuous. Is there convergent validity between cognitive and affective measures of moral intuition? Are there cognitive and affective moral intuitions? And if so, are there individual differences in the types of moral intuitions that individuals make? All of these questions need to be addressed in order to bring coherence to the study of moral intuition. A second goal of this project is to explore the convergent validity of measures of moral intuition, as it is operationalized in contemporary psychology. Before we can resolve the dispute between those theorists who emphasize the role of affect and those who emphasize the role of cognition in moral intuition, we need to ensure that both camps are in fact studying the same construct. This project comprises a series of three studies.
Study 1 examines the relation between moral autonomy, general cognitive styles, and performance on the causal deviance task. Within the heuristics-and-biases paradigm, non-normative responding on the causal deviance task represents System 1, intuitive decision-making; as such, if moral autonomy does reflect a non-deliberative style of moral decision-making, then moral autonomy should predict intuitive responding on the causal deviance task. Study 2 represents the first step in the search for convergent validity among measures of moral intuition. Although there is much debate among the adherents of the various theories of moral intuition, there has yet to be any meaningful attempt to establish that the competing measures of moral intuition actually measure the same construct. In Study 2, responses from Pizarro et al.’s (2003) causal deviance task and Haidt’s (2001) moral dumbfounding task are compared. If both measures capture moral intuition, then we should expect scores derived from them to be strongly correlated. As in Study 1, moral autonomy is employed as an individual-differences variable in order to explain patterns of moral intuition on the causal deviance task and the moral dumbfounding task. In Study 3, two new measures of moral intuition are introduced. These new measures are intended to capture a more conceptually rich operationalization of moral intuition. The first measure, the Moral Cognition Style Inventory, is a self-report measure intended to capture self-reported reliance on reasoning, affective intuition, and principled intuition in everyday moral decision-making. The second measure involves adding probes to the Sociomoral Reflection Measure to capture cognitive effort and deliberative elaboration. In Study 3, moral autonomy is decomposed into the fundamental elements that underlie Gibbs, Basinger, and Fuller’s (1992) operationalization of the construct: conscience, fundamental valuing, and balancing.
It is proposed that each of these elements is related to a different problem-solving approach: Conscience elements are related to affective intuitions, fundamental valuing elements are related to principled intuition, and balancing elements are related to rational deliberation.

Study 1

Individual Differences in Preferences for Intuitive and Rational Approaches to Moral Decision-Making and their Relation to the Moral Typology

For many psychologists, the debate between adherents of intuitive and rational approaches to moral decision-making began when Haidt (2001, p. 815) famously pronounced that moral reasoning was nothing more than “post hoc justifications.” And while there is considerable evidence to support the idea that much of our everyday moral decision-making is reflexive or non-deliberative (Walker et al., 1995), there is also considerable conceptual (Narvaez, 2008) and empirical (Nichols & Mallon, 2006) support for the role of reasoning processes in everyday moral functioning. Clearly, both intuition and reason inform our moral decision-making processes. A crucial question that must then be addressed is why people engage in intuitive or deliberative processes in order to make moral judgments. Research conducted within the dual-process paradigm suggests that there are individual differences in the use of intuitive and rational strategies (Stanovich & West, 2008); perhaps similar individual differences influence moral decision-making strategies. The first goal of Study 1 is to examine potential individual differences in people’s preferences for engaging in either rational or intuitive moral cognition. The study of individual differences in rational and intuitive cognitive styles is a relatively new area of research. One prominent paradigm currently generating such research is Cognitive-Experiential Self-Theory (CEST; Epstein, 1998, 2008).
Based on the classic dual-process conception, CEST characterizes the functioning of the mind as the interplay of two parallel yet interactive cognitive systems. CEST differs from many dual-process models in that it emphasizes individual differences in the relative use of experiential–intuitive processes versus analytical–rational processes. Some interesting findings have emerged from the CEST paradigm. Pacini and Epstein (1999) found that scoring high on a measure of rational thinking style was positively related to measures of openness to experience, conscientiousness, emotional stability, and ego strength, whereas scoring high on intuitive thinking was positively related to measures of extraversion, agreeableness, conscientiousness, and, to a lesser extent, openness to experience. The authors downplayed the fact that preferences for both rational and intuitive thinking styles were significantly positively related to openness to experience and conscientiousness. One could speculate that the different types of cognitive style are related to different facets of the openness trait; however, Pacini and Epstein’s measure of the Big Five lacked the precision necessary to measure dispositional traits at the level of facets. Moreover, the link between individual differences in cognitive style and performance on actual judgment-making tasks proved somewhat tenuous. Pacini and Epstein’s data revealed that the experiential thinking style was not related to performance on laboratory judgment tasks, and the rational thinking style was related to judgments only in contexts in which the incentive to function at optimal levels was high. Pacini and Epstein argue that their findings support the contention that rational processes function to correct non-optimal responding.
Subsequent research by Shiloh, Salton, and Sharabi (2002), examining the relation between individual differences in rational and intuitive thinking styles and participants’ responses to risky-choice scenarios, also failed to find a direct relation between individual differences in thinking styles and actual judgments. Shiloh et al. (2002) found that participants high in both rational and intuitive thinking styles, as well as those low in both, tended to make non-normative, heuristic judgments, but no other significant relations were reported. They conjectured that scoring low on both thinking styles represented poor cognition, which could explain non-normative responding; scoring high on both thinking styles, on the other hand, represented the lack of a clear cognitive style, which the authors believe made these participants susceptible to situational cues. Thus, there is some support for the use of the CEST framework to characterize individual differences in decision-making strategies. Although Epstein (2008) argues that the intuitive decision-making process does not constitute cognitive short-cutting or lazy thinking, in keeping with the heuristics-and-biases tradition, much of the research within the CEST framework conceptualizes non-optimal responding as the product of the intuitive cognitive system. Epstein et al. (1992) primed their participants to make intuitive or rational decisions by simply instructing them to be intuitive or rational. Pizarro et al. (2003) studied attributions of moral responsibility using the same approach. Pizarro et al. employed a 2 × 2 design in which the within-subjects variable was the type of instructions (rational vs. intuitive) and the between-subjects variable was the order in which the instructions were presented.
They asked their participants to make rational and intuitive judgments of the permissibility of the actions and the global character of the protagonists in paired causally deviant and causally normal vignettes. Pizarro et al. assumed that, because the intention of each of the protagonists was the same (to kill another individual), the normative response to the questions concerning the actions and character of the protagonists should be that the actions of the protagonists were equally wrong and that the protagonists were equally bad. As such, any deviation from equal attributions of permissibility or character was defined as intuitive responding. Pizarro et al. (2003) reported that when their participants were asked to make rational judgments first, very few people deviated from equal attributions of moral responsibility. However, when those same participants were asked to make intuitive judgments on their next pair of vignettes, the data revealed a statistically significant deviation from the normative standard. Pizarro et al. argue that these results support Epstein et al.’s assertion that one can activate the intuitive or rational cognitive system simply by asking people to make either type of decision. When participants were asked to make intuitive judgments first, their judgments also deviated from the normative standard; when those same participants were subsequently asked to make rational judgments, their judgments again deviated from the normative standard. Pizarro et al. interpreted these findings as support for Haidt’s (2001) claim that deliberative reasoning is often the slave of intuitive judgments. That is, once an intuitive judgment is made, reasoning processes are applied to justify the initial intuitive decision. By employing the causal deviance measure, I will be able to address the question of whether there are individual differences in people’s propensities to make intuitive or rational moral decisions.
However, given the relative dearth of measures of moral intuition in this burgeoning field, it seems important to evaluate whether the causal deviance methodology represents a meaningful measure of intuitive judgments of moral responsibility. The second goal of this study is to evaluate whether Kohlberg’s (1984b; Tappan et al., 1987) moral typology could serve as a framework for describing these individual differences. Within Kohlberg’s typology, the autonomous type is often described as having an internal moral orientation, a feeling of moral obligation, a description that seems to reflect an intuitive approach to moral reasoning. The heteronomous type, on the other hand, reflects an emphasis on rules, seemingly a more rational approach to moral judgment. In fact, in some of his writings, Kohlberg (1984a, p. 261) referred to the autonomous moral type as the intuitive type. It should be noted that Kohlberg’s understanding of moral intuition was of the conceptually rich variety. Kohlberg (1984a) initially argued that the morally autonomous individual is more attuned to the central issues and principles underlying moral conflicts. Subsequent moral theorists such as Gibbs et al. (1992) have argued that the autonomous moral type is characterized by moral ideals that are felt from within, as a matter of conscience. In one sense this definition suggests that moral ideals are more central to the identity of a morally autonomous individual; however, the notion that moral ideals are “felt from within” also suggests that morally autonomous individuals are more influenced by affective responses to moral stimuli. The relation between moral autonomy and an intuitive approach to moral cognition is also reflected in Davidson and Youniss’ (1991) suggestion that moral autonomy represents a reflexive, rather than reflective, style of moral functioning.
Given the long-standing theoretical association between moral autonomy and moral intuition, and the lack of empirical assessment of that association, moral autonomy is an obvious candidate when considering variables associated with potential individual differences in moral cognitive style. Study 1 is intended to examine five research questions: (a) Are there individual differences in participants’ propensity to engage in intuitive and rational decision-making processes on the causal deviance task? (b) Are these individual differences related to general cognitive styles? (c) Is moral autonomy related to general cognitive styles? (d) Is moral autonomy related to intuitive moral decisions made on the causal deviance task? (e) And finally, is it possible to construct a model that significantly predicts moral intuition as defined in the causal deviance task?

Method

Participants

Participants in this study were 90 undergraduate students, 65 women and 25 men, with a mean age of 20.8 years (SD = 2.9). Of this sample, 47 were born in North America, 37 in East Asia, 3 in Europe, and 3 in South Asia. All participants were recruited through a university research participant pool and received course credit for their participation. Tappan et al. (1987) suggest that the developmental transition from the heteronomous moral type to the autonomous moral type occurs in late adolescence or early adulthood for those who experience this transition. As such, the undergraduate population represents an ideal population in which to measure the cognitive styles associated with the moral typology.

Measures

Rational–Experiential Inventory (REI; Pacini & Epstein, 1999). The REI is a 40-item self-report measure. Participants are asked to indicate on a 5-point scale, from 1 (definitely not true of myself) to 5 (definitely true of myself), the extent to which sentence stems describing different reasoning preferences relate to themselves.
The REI provides indices of an individual’s propensity to engage in rational and experiential reasoning processes. Typical sentence stems from the rational subscale include “I enjoy intellectual challenges” and “I am much better at figuring things out logically than most people.” Typical sentence stems from the experiential subscale include “I believe in trusting my hunches” and “I like to rely on my intuitive impressions.” Psychometric evaluation of the REI indicated that its two subscales demonstrate strong discriminant validity and that each scale contributes significantly to the prediction of a variety of measures beyond the Big Five traits (Pacini & Epstein, 1999).

Sociomoral Reflection Measure–Short Form (SRM-SF). The SRM-SF (Gibbs et al., 1992) is a pencil-and-paper production task designed to measure moral maturity within the context of the first four stages of Kohlberg’s stage model. The format of the SRM-SF is well suited to group administration. The SRM-SF consists of 11 brief contextual statements based on the moral norms from Colby and Kohlberg’s (1987) scoring manual. The values reflected in these statements emphasize truth, contract, the value of life, property and law, affiliation, and legal justice, respectively (Basinger, Gibbs, & Fuller, 1995). For example, the truth scenario asks participants to respond to the question, “In general, how important is it for people to tell the truth?”; and the value-of-life question asks participants to consider, “Let’s say a friend of yours needs help and may even die, and you’re the only person who can save him or her. How important is it for a person (without losing his or her own life) to save the life of a friend?” For each statement, participants must first indicate whether the value in the statement is “very important,” “important,” or “not important”; they are then asked to justify their evaluations. The SRM-SF coding scheme also allows for the scoring of Kohlberg’s moral typology.
Gibbs et al.’s scoring scheme measures moral type by identifying moral judgments that exhibit (a) balanced perspective-taking, (b) fundamental valuing, and (c) aspects of conscience. Balanced perspective-taking is characterized by taking into consideration the perspective of others; for example, responses to the truth scenario would be scored as demonstrating balanced perspective-taking if they argued, “you should treat others the way you would want them to treat you.” Fundamental valuing is reflected in moral decisions that are based on universal principles that generalize beyond the present circumstances to include all people, such as, “promises are precious or priceless.” Conscience elements are characterized by judgments that have a strong affective motivation; for example, “you would feel rotten, terrible, ashamed, bad about yourself, or guilty, or you could have emotional problems or become depressed,” or “for the sake of self-respect, or one’s integrity, dignity, honor, consistency, or sense of self-worth.” To be categorized as the autonomous moral type, an individual must demonstrate at least two of these three criteria in his or her protocol; otherwise, the individual is scored as the heteronomous moral type. For the purposes of data analysis, participants classified as the heteronomous type were scored as 0, and those classified as the autonomous type were scored as 1. This labeling reflects a nominal, not an ordinal, scale. The SRM-SF has demonstrated acceptable test-retest and split-half reliability (Gibbs et al., 1992). The coding scheme for the SRM-SF has demonstrated high interrater reliability even with inexperienced coders (r = .94; Gibbs et al., 1992). Basinger et al. (1995) found high concurrent validity between the SRM-SF and the Moral Judgment Interview (Colby & Kohlberg, 1987). In the present study, interrater reliability for moral type categorization, based on a random sample of 20 questionnaire packages, was κ = .84.

Causal Deviance Measure.
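The two-of-three classification rule just described can be summarized in a brief sketch (in Python; the function name and boolean inputs are illustrative conveniences, not part of Gibbs et al.’s actual coding materials, which work from scored written protocols):

```python
# Illustrative sketch of the SRM-SF moral-type rule described above.
# The function name and boolean inputs are hypothetical; actual coding
# is performed by trained raters on written protocols.

def classify_moral_type(balanced_perspective_taking: bool,
                        fundamental_valuing: bool,
                        conscience: bool) -> int:
    """Return 1 (autonomous, Type B) if at least two of the three
    criteria are present in a protocol, else 0 (heteronomous, Type A).
    The 0/1 coding is nominal, not ordinal."""
    criteria_met = sum([balanced_perspective_taking,
                        fundamental_valuing,
                        conscience])
    return 1 if criteria_met >= 2 else 0
```

For example, a protocol exhibiting fundamental valuing and conscience elements but not balanced perspective-taking would be coded 1 (autonomous).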
Participants were presented with two sets of hypothetical moral vignettes, constructed by Pizarro et al. (2003) and designed to evoke either rational or intuitive judgment processes. Each vignette contrasted two immoral episodes. In the first (the causally normal story), the protagonist intends some immoral act (e.g., killing a spouse), acts on this intent, and carries out the immoral act, achieving his or her goal. In the second (the causally deviant story), the protagonist intends some immoral outcome and acts so as to cause it, but something intercedes to directly bring about the protagonist’s original goal. In one set of vignettes, the causally normal story involves one man throwing a knife at another man, intending to kill him, and indeed succeeding; in the causally deviant story, the man throws the knife intending to kill the other man, but the other man sees the knife coming, suffers a heart attack, and dies; the knife would have killed him had he not fallen to the floor. In the other set of vignettes, the causally normal story involves a woman poisoning her husband during dinner at a restaurant; in the causally deviant story, the woman attempts to poison her husband, but her efforts only make his food unpalatable, so he orders another meal which, unbeknownst to him, contains a food to which he is allergic, and the allergic reaction kills him. The vignettes were counterbalanced so that each vignette was randomly presented with instructions to make rational or intuitive decisions. For each set of episodes, participants are asked to rate which actor’s actions were more morally wrong and which actor is of worse moral character. For each set of questions, participants are asked either to make their most rational decision or to report their intuitive gut feeling. The scale ranges from (1) A is much worse than B, through (2) A is a little worse than B, (3) A is equal to B, and (4) B is a little worse than A, to (5) B is much worse than A.
According to Pizarro et al. (2003), any deviation from the two actors being rated as equally bad is considered an incidence of intuitive judgment. The attenuation of moral responsibility in the causally deviant vignettes is considered a deviation from the normative standard that moral responsibility should be determined by intentions and actions, rather than simply by consequences. So, in this methodology, intuitive judgment is defined as a deviation from a rationally determined normative standard.

Procedure

Participants completed a package of three questionnaires in sittings of up to five participants. The ordering of the questionnaires within the package was held constant: participants first completed the causal deviance task, then a demographic form, the SRM-SF, and finally the REI. The set ordering of the questionnaires was intended to avoid priming effects that might have resulted from completing the SRM-SF before completing the causal deviance task. On the causal deviance measure, participants were randomly assigned to receive either rational instructions first or intuitive instructions first. Each sitting lasted up to 1 hour.

Results

The data analytic strategy for Study 1 was to determine whether a propensity to engage in intuitive or rational moral decision-making processes was related to a general preference for rational or experiential approaches, and whether these preferences were related to the notion of moral autonomy as operationalized in the moral typology. The first step was to analyze the data from the causal deviance task. For each vignette, participants are asked to make a judgment about the permissibility of the action carried out by the protagonist in the story, and a second judgment regarding the moral character of the protagonist. Pizarro et al. (2003) summed the scores from the two types of judgments in their analysis of the causal deviance task.
However, judgments regarding the moral quality of actions and judgments regarding moral character are conceptually distinct. As such, the data analysis treats each type of judgment independently, in order to reveal whether the judgments are indeed convergent. In the heuristics-and-biases tradition, any deviation from a normative standard is defined as heuristic/intuitive/experiential decision-making. Pizarro et al. (2003) quantified moral intuition by measuring the absolute deviation from equal attributions of the permissibility of actions and of moral character; thus their analysis emphasized the degree to which people deviate from normative standards given rational or intuitive directions. With this operationalization of moral intuition, the potential range of deviation was from 0 to 2 for each judgment; this restricted range of responses limits the analyses that can meaningfully be applied to the data. Given the definition of moral intuition in the heuristics-and-biases tradition, it is more meaningful to compare the participants who deviate from the normative standards with those who do not, rather than examining the magnitude of deviation. In order to examine how the type of judgment (action vs. character), the type of instructions (rational vs. intuitive), and the order of instruction presentation (rational instructions first vs. intuitive instructions first) affected whether or not people deviated from normative standards, a nonparametric analysis was conducted with the data from the causal deviance task. To facilitate the nonparametric analysis, the data were re-coded into binary, categorical data (no deviation vs. deviation). The percentage of participants who deviated from equal judgments of permissibility and character is depicted in Figure 1. Recall that the CEST framework suggests that the intuitive or rational systems can be activated by explicit instructions.
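The two operationalizations just described, Pizarro et al.’s absolute-deviation score and the binary recoding used in the present nonparametric analysis, can be sketched as follows; the ratings are hypothetical, not drawn from the study data.

```python
NORMATIVE = 3  # scale midpoint: the two actors are judged equally bad

def deviation_score(response):
    """Pizarro et al.-style absolute deviation from the midpoint (range 0-2)."""
    return abs(response - NORMATIVE)

def deviated(response):
    """Binary recoding used here: 1 = any deviation from the normative response."""
    return int(response != NORMATIVE)

# A hypothetical participant's four ratings on the 1-5 scale
ratings = [3, 4, 3, 1]
scores = [deviation_score(r) for r in ratings]  # [0, 1, 0, 2]
flags = [deviated(r) for r in ratings]          # [0, 1, 0, 1]
```

The binary recoding deliberately discards the magnitude of deviation, which is appropriate given the restricted 0–2 range and the heuristics-and-biases definition of intuition as any departure from the normative standard.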
After collapsing across instruction order, when prompted to make an intuitive judgment regarding the actions of the protagonists, 27% of participants deviated from the normative response; when asked to make a rational judgment regarding the actions of the protagonists, 18% of participants deviated from the normative standard. In general, the rate of deviation in response to the intuitive instructions on the action judgments was quite low; in relative terms, only 9% more participants deviated after the intuitive instructions than after the rational instructions. The rates of deviant responding were even lower for the judgments of character. When asked to make an intuitive judgment regarding the character of the protagonist, 12% of participants deviated from the normative standard; when asked to make a rational judgment regarding the character of the protagonist, 5% deviated from the normative standard. The data suggest that simply asking people to make intuitive decisions has a relatively small influence on deviant responding. When collapsing across instruction type and instruction order, 33% of participants deviated on judgments of actions and 15% of participants deviated on judgments of character. This suggests that people are more likely to make deviant judgments of actions than of character.

Figure 1. Percentage of Participants Responding Deviantly in Judging Actions and Character (Study 1). [Bar chart: percentage of participants who deviate (y-axis, 0–30%), by judgment type (Rational First/Intuitive First × Actions/Character), with separate bars for rational and intuitive instructions.]

McNemar tests were conducted to test the proportion of participants who deviated in judgments of the permissibility of actions when given instructions to be rational or to be intuitive.
For the group that received the rational instructions first, the analysis indicated that a significantly larger proportion of participants deviated in their judgments of the permissibility of actions in response to the intuitive instructions than in response to the rational instructions, χ2(1, N = 40) = 5.44, p = .04, φ = .37. For judgments of character, on the other hand, there was no difference in the proportion of participants who deviated in response to rational or intuitive instructions, χ2(1, N = 40) = 1.79, p = .35, φ = .20. Given intuitive instructions, relatively more participants deviate from equal judgments of the permissibility of actions than when given instructions to be rational in their decisions; the type of instruction did not have a significant effect on the proportion of participants who deviated from equal judgments of character. For the group that received the intuitive instructions first, the McNemar tests indicated that there were no significant differences in the proportion of participants who deviated in their judgments of actions, χ2(1, N = 50) = 0.08, p = 1.0, φ = .04, or in their judgments of character, χ2(1, N = 50) = 0.14, p = .75, φ = .05, regardless of the type of instructions given. Thus, when given intuitive instructions first, participants were equally likely to deviate from the normative standard for judgments of the permissibility of actions or character. The nonparametric analysis of the causal deviance task seems to clarify the effect of instructions on judgments concerning the permissibility of actions. When rational instructions are presented first, participants are less likely to deviate from the normative standard. When intuitive instructions are presented first, participants are as likely to deviate in response to the intuitive instructions as in subsequent judgments made in response to rational instructions. Pizarro et al.
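The McNemar tests reported above depend only on the two discordant cells of the paired 2×2 table (participants who deviated under one instruction type but not the other). A minimal sketch of the continuity-corrected statistic, with hypothetical counts:

```python
def mcnemar_chi2(b, c):
    """Continuity-corrected McNemar chi-square from the discordant counts:
    b = deviated under intuitive but not rational instructions,
    c = deviated under rational but not intuitive instructions."""
    if b + c == 0:
        return 0.0
    return (abs(b - c) - 1) ** 2 / (b + c)

# Hypothetical discordant counts: 12 vs. 3 participants
chi2 = mcnemar_chi2(12, 3)  # (|12 - 3| - 1)^2 / 15 = 64/15
```

Participants who behave the same way under both instruction types (the concordant cells) drop out of the statistic entirely, which is what makes McNemar’s test appropriate for paired within-subjects proportions.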
(2003) observed a similar pattern of responses; they suggested that once the intuitive system is activated, the rational system is recruited to generate judgments that support the initial intuitive judgment. Having analyzed the data from the causal deviance measure, the analysis now turns to identifying the factors that predict non-normative responding. Within the CEST framework, the experiential style describes a general cognitive style characterized by a preference for intuitive decision-making, whereas the rational style describes a general cognitive style characterized by a reliance on conscious deliberation and reflection in making decisions (Pacini & Epstein, 1999). Independent samples t-tests were conducted to compare the REI cognitive style scores of the individuals who had demonstrated non-normative responding on the causal deviance task with those who had not. When comparing scores from the experiential scale, there was no statistically significant difference between the participants who had deviated on the judgments regarding the protagonists’ actions and those who did not, t(88) = -1.69, p = .10, d = .36; in comparing scores on the experiential scale for those who had deviated or not on the judgments of character, there was also no statistically significant difference, t(88) = -1.18, p = .24, d = .25. This result suggests that a general intuitive cognitive style is not strongly related to moral intuition as it is operationalized in the causal deviance task.
When the scores on the rational scale of the REI were compared for normative and non-normative responders on the action judgments of the causal deviance task, the difference was small and not statistically significant, t(88) = -1.26, p = .21, d = .27; when the scores on the rational scale of the REI were compared for normative and non-normative responders on the character judgments of the causal deviance task, again the difference between groups was small and not statistically significant, t(88) = 1.15, p = .25, d = .24. Again, the rational cognitive style measured by the REI is not related to non-normative responding on the causal deviance task. The next issue to be addressed is whether or not moral autonomy, as operationalized in moral typology scores from the SRM-SF, is related to a general intuitive or rational cognitive style. An independent samples t-test was conducted to measure differences between the autonomous and heteronomous moral types on intuitive cognitive style scores from the REI; the difference was small and not statistically significant, t(88) = 0.08, p = .94, d = .02. The difference between the autonomous and heteronomous moral types on scores from the rational cognitive style on the REI was also small and not statistically significant, t(88) = -1.24, p = .21, d = .26. Thus, it does not appear that moral autonomy as operationalized on the SRM-SF is related to either of the general cognitive styles measured with the REI. In order to compare the responses of the autonomous and heteronomous moral types on the causal deviance task, two Mann-Whitney U tests were conducted: the first examined the judgments of actions, the second examined judgments of character. The first test revealed that those categorized as the morally autonomous type on the SRM-SF were significantly more likely to deviate from the normative standard for action judgments on the causal deviance task, U (n1 = 60, n2 = 30) = 637, p = .003, θ = .35.
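The Mann-Whitney U statistic used above can be computed directly from the two groups’ scores by pairwise comparison; the sketch below uses hypothetical deviation scores, not the study’s data.

```python
def mann_whitney_u(group_a, group_b):
    """U for group_a: number of (a, b) pairs with a > b, ties counted as 0.5."""
    u = 0.0
    for a in group_a:
        for b in group_b:
            if a > b:
                u += 1.0
            elif a == b:
                u += 0.5
    return u

# Hypothetical deviation scores for the two moral types
autonomous = [1, 2, 0]
heteronomous = [0, 0, 1, 0]
u_stat = mann_whitney_u(autonomous, heteronomous)  # 9.0
```

Because U depends only on the ordering of scores across groups, it is suited to the ordinal, heavily tied deviation data produced by the causal deviance task, where a t-test’s distributional assumptions would be doubtful.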
The second test indicated that the autonomous type was also more likely to deviate from normative standards when making judgments regarding character, U (n1 = 60, n2 = 30) = 783, p = .04, θ = .44. Thus, the operationalization of the autonomous moral type on the SRM-SF is related to non-normative moral decisions on the causal deviance task. The next stage of the data analysis involves identifying the variables that predict intuitive responding on the causal deviance task. As with the preceding analyses, separate regression analyses were conducted for judgments of actions and judgments of character. Table 1 presents the data for a hierarchical logistic multiple regression analysis that was intended to identify the variables that best predict non-normative deviations for judgments regarding the permissibility of actions. The dependent variable for the regression was a binary, categorical variable: those participants who did not demonstrate deviation on the action judgments of the causal deviance task (n = 60) and those participants who deviated at least once in response to the action questions posed in the causal deviance task (n = 30). The regression was conducted in two steps: in the first step, the between-subjects variable of instruction order was entered into the analysis to control for the effect of order of instruction on non-normative responding; in the second step, moral type, experiential cognitive style, and rational cognitive style were added as predictors. The difference in the fit of the predictive equation in moving from Step 1 to Step 2 was significant, and the size of the effect was medium, χ2(3, N = 90) = 11.89, p = .008, φ = .36. The Hosmer and Lemeshow chi-square goodness-of-fit test indicated that Step 2 of the logistic regression adequately fits the data, χ2(8, N = 90) = 4.34, p = .83, φ = .22. The most meaningful approach to evaluating the efficacy of the predictors in a logistic regression is to examine the odds ratios.
In Step 2 of the regression model, two of the independent variables exhibited significant odds ratios. When experiential cognitive style, rational cognitive style, and instruction order are held constant, those categorized as being the autonomous moral type on the SRM-SF are 3.70 times more likely to deviate from normative standards on the causal deviance task. The logistic regression provides strong support for the hypothesis that moral autonomy is related to moral intuition, or at least moral intuition as it is operationally defined on the causal deviance task.

Table 1. Logistic Multiple Regression for the Prediction of Non-Normative Action Judgments on the Causal Deviance Task (Study 1)

Variable              B      SE    Exp(B)   df     p
Step 1
  Instruction order   .41    .46    1.51     1   .37
Step 2
  Moral type         1.31    .49    3.70     1   .01
  Experiential style  .05    .03    1.05     1   .06
  Rational style      .03    .02    1.03     1   .29
  Instruction order   .23    .49    1.25     1   .65
Note. RN² = .18.

When moral type, rational cognitive style, and instruction order are held constant, each increase of one point on the Experiential Cognitive Style scale of the REI increases the likelihood of deviating on the causal deviance task by a factor of 1.05. To put this in context, the average score on the Experiential Cognitive Style scale was 65.4, and the standard deviation was 9.2. Thus, in relative terms, moral type was a significantly more powerful predictor of deviations on the causal deviance task. Table 2 presents the data for a hierarchical logistic multiple regression analysis that was intended to identify the variables that best predict non-normative deviations for judgments regarding the character of the protagonists. The dependent variable for the regression was a binary, categorical variable: those participants who did not demonstrate deviation on the character judgments of the causal deviance task (n = 76) and those participants who deviated at least once in response to the character questions posed (n = 14).
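The odds ratios in Table 1 are simply exponentiated regression coefficients, Exp(B) = e^B. The sketch below reproduces the Exp(B) for moral type and also rescales the experiential-style odds ratio to a one-standard-deviation change; the rescaling is an illustration of my own, not a statistic reported in the text.

```python
import math

# Coefficients from Table 1 (action judgments)
b_moral_type = 1.31
b_experiential = 0.05
sd_experiential = 9.2  # standard deviation of the Experiential scale

# Exp(B): multiplicative change in the odds per one-unit predictor change
odds_ratio_moral_type = math.exp(b_moral_type)  # ~3.70, as in Table 1

# Odds ratio per one-SD (9.2-point) increase in experiential style
odds_ratio_experiential_sd = math.exp(b_experiential * sd_experiential)  # ~1.58
```

Rescaling to standard-deviation units shows why the per-point odds ratio of 1.05 understates the experiential scale’s effect across its realistic range, while still leaving moral type the stronger predictor.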
The regression was conducted in two steps: in the first step, the between-subjects variable of instruction order was entered into the analysis to control for the effect of order of instruction on non-normative responding; in the second step, moral type, experiential cognitive style, and rational cognitive style were added as predictors. The difference in the fit of the predictive equation in moving from Step 1 to Step 2 was not statistically significant, and the size of the effect was small, χ2(3, N = 90) = 7.06, p = .07, φ = .28. The Hosmer and Lemeshow chi-square goodness-of-fit test indicated that Step 2 of the logistic regression adequately fits the data, χ2(8, N = 90) = 3.81, p = .87, φ = .21.

Table 2. Logistic Multiple Regression for the Prediction of Non-Normative Character Judgments on the Causal Deviance Task (Study 1)

Variable              B      SE    Exp(B)   df     p
Step 1
  Instruction order   .38    .60    1.45     1   .53
Step 2
  Moral type         1.27    .62    3.55     1   .04
  Experiential style  .04    .03    1.04     1   .28
  Rational style     -.04    .03     .96     1   .17
  Instruction order   .29    .64    1.33     1   .65
Note. RN² = .14.

Given that only 15% of the participants deviated in their judgments of the protagonist’s character, it is perhaps not surprising that it was not possible to construct a robust equation to predict such deviations. The only significant predictor that emerged from the regression analysis was moral type: those categorized as the morally autonomous type on the SRM-SF were 3.55 times more likely to deviate on judgments of character than were those categorized as the heteronomous type.

Discussion

Study 1 was designed to address five questions: (a) Are there individual differences in participants’ propensity to engage in intuitive and rational decision-making processes on the causal deviance task? (b) Are these individual differences related to the general cognitive styles measured by the REI? (c) Is moral autonomy, as operationalized with the SRM-SF, related to the general cognitive styles measured by the REI?
(d) Is moral autonomy, as operationalized with the SRM-SF, related to intuitive moral decisions made on the causal deviance task? (e) Finally, is it possible to construct a model that significantly predicts moral intuition as defined in the causal deviance task? The results of the nonparametric analysis of the causal deviance task extend Pizarro et al.’s (2003) research on moral intuition. In their analysis, Pizarro et al. (2003) combined judgments of actions and judgments of character because the responses were moderately correlated (r = .48, p = .01). Conceptually, however, judgments regarding the permissibility of actions and judgments regarding the character of individuals are very different, and there are some empirical data to suggest that people approach these judgments differently. Cushman (2008) argued that people engage in distinct mental processes when they are evaluating the permissibility of actions and evaluating the blameworthiness of a moral agent. Across a series of studies, Cushman (2008) reported that judgments regarding the permissibility of actions were based primarily on the mental states of the agent: whether they intended the actions and whether they believed that their actions would lead to the eventual outcome. Judgments of blame, on the other hand, are based on both the mental state of the actor and the causal connection between the actor and the outcome. Of particular interest to the current study, Cushman reported that, in causally deviant scenarios, participants attenuated the moral responsibility of the protagonist when making judgments concerning blaming or punishing the actor, but not when making judgments regarding the permissibility of the protagonists’ actions.
Cushman referred to this phenomenon as blame blocking; he argued that blame blocking occurs when the evaluation of causation blocks the consideration of the protagonist’s malicious intent, which subsequently leads to the attribution of less moral responsibility to the protagonist. At first glance, the results from Study 1 appear quite different from Cushman’s findings; in the current project, 33% of participants attenuated moral responsibility for judgments of actions and only 15% attenuated moral responsibility for judgments of character. The apparent differences might have resulted from differences in the wording of the questions presented to the participants. In the causal deviance task, participants are asked to judge which actor’s actions are more blameworthy and which actor is a worse person, whereas in Cushman’s task, participants are asked to judge the wrongness of the action and how blameworthy the individual is. Thus, in the current study, blame is attached to the action, whereas in Cushman’s research blame is attached to the actor. Moreover, Cushman asks participants to rate how blameworthy an individual is and how much punishment they deserve, judgments that appear to emphasize actions rather than character; in the causal deviance task, participants are asked to make a global judgment regarding the protagonist’s character (e.g., who is a worse person?). It seems that the judgments made about actions and character in the causal deviance task cannot be equated with those in Cushman’s methodology. This highlights the difficulties that emerge when one tries to compare findings across studies in the field of moral intuition. The lack of common terminology complicates the generalization and comparison of results.
When considering judgments regarding actions and judgments regarding character separately, the current study found a significant main effect for the type of instruction for judgments concerning actions: Participants who were instructed to make intuitive judgments deviated more from the normative standard of equal blame. This finding provides some support for Pacini and Epstein’s (1999) proposal that one can evoke intuitive judgments simply by asking participants to be intuitive. However, the majority of participants did not deviate in their judgments, which suggests that the effect of the instructions was relatively weak. As noted, far fewer participants deviated in their judgments of the moral character of the protagonists, which suggests that the type of judgment being made also influences whether participants are willing to make non-normative decisions. Simply asking people to be intuitive does not seem to alter decision-making strategies for most people. The nonparametric analysis of the causal deviance task represents a new approach to the methodology, one that is arguably more consistent with the definition of intuition within the heuristics-and-biases tradition. The analysis revealed that when participants are presented with instructions to make a rational decision about the protagonists’ actions first, they are significantly less likely to deviate from the normative standard than they are when subsequently asked to make an intuitive judgment about the actions of the protagonists in the next vignette. When participants are asked to make an intuitive judgment about actions first, there is no difference in the proportion of participants who deviate in response to the intuitive instructions and in response to the subsequent rational instructions. This pattern seems to suggest that once the intuitive system is primed it undermines subsequent rational instructions. Pizarro et al.
(2003) observed a similar pattern; they suggested that perhaps, once activated, the intuitive system recruits the rational system to justify non-normative responding. Although it is impossible at present to pinpoint the precise mechanisms that underlie this pattern, it is clear that being asked to be intuitive has an effect on subsequent decision-making. The general cognitive styles operationalized with the REI were not related to non-normative responding on the causal deviance task. To date, the extant data provide mixed support for the contention that the cognitive styles derived from the REI relate to decision-making behavior in a predictable manner. Pacini and Epstein (1999), Alonso and Fernandez-Berrocal (2003), and Bjorklund and Backstrom (2008) have all found that participants who score higher on the rational style subscale from the REI make fewer errors on classic decision-making tasks like the Asian disease problem, but there are no data that demonstrate the predictive validity of the experiential style subscale for non-normative responding. However, Witteman, van den Bercken, Claes, and Godoy (2009) have reported preliminary data suggesting that those who score higher on the experiential style subscale show faster response times when making decisions. Although this response-time research has the potential to link the experiential thinking style with intuitive decision-making, the data presented by Witteman et al. are preliminary and incomplete, so any meaningful discussion is premature. The rational and experiential cognitive styles operationalized with the REI were not strongly related to the moral typology as operationalized with the SRM-SF, or to non-normative responding on the causal deviance task. The autonomous moral type was, however, significantly related to non-normative responding on the causal deviance task.
This suggests that moral autonomy is related to non-normative responding on the causal deviance task independently of general cognitive style. Employing a hierarchical logistic multiple regression, it was possible to construct an equation that predicted non-normative responding on the action judgments from the causal deviance task. As such, the data provide support for the hypothesis that moral autonomy, as operationalized by the SRM-SF, would be significantly related to intuitive decisions on the causal deviance task. However, this finding also raises some important questions. The hypothesized relation between moral autonomy and moral intuition was derived from Kohlberg’s (1984b) description of the autonomous moral type as intuitively understanding moral principles and feeling an inner compulsion to act in response to those principles, and from Davidson and Youniss’ (1991) description of moral autonomy as a reflexive, habitual mode of moral functioning. As such, the moral intuition associated with moral autonomy should reflect a lack of conscious deliberation when making moral decisions and some felt commitment to moral principles. Moral intuition, as operationalized in the causal deviance task, represents a deviation from a normative standard, a mistake. Although it seems plausible that a lack of deliberation could be related to errors in decision-making, moral autonomy is thought to reflect a more mature moral capacity, not a deficiency. The obvious next step for this project is to compare the causal deviance task with the other prominent measure of moral intuition in the psychological literature, Haidt’s (2001) moral dumbfounding task. Surprisingly, given the rancor among adherents of the various theories of moral intuition, to date no attempts have been made to demonstrate convergent validity among the various measures proposed to tap moral intuition.
In order to make meaningful comparisons among the competing theoretical accounts of moral intuition, it seems prudent to ensure that all parties are measuring the same construct.

Study 2

Are Measures of Moral Intuition Commensurable? Comparing the Causal Deviance and Moral Dumbfounding Methodologies

The title of this dissertation, An Intuitive Turn, is a reference to the shift that has occurred in the study of moral cognition within psychology; in the span of less than a decade, the study of moral intuition has become one of the most vibrant and contentious areas of research in the social sciences. Scholars from a diverse range of academic backgrounds, such as primatology, cognitive neuroscience, linguistics, decision-making, evolutionary psychology, and cultural psychology, have turned their efforts towards developing our understanding of how people come to make moral decisions. While the diversity of perspectives has enriched our understanding of the processes that underlie moral cognition, it has also fostered disagreement. Many authors agree that intuition plays an important role in moral decision-making, but there is little consensus regarding the processes that characterize moral intuition. As noted in the literature review, some scholars, such as Haidt (2001), characterize moral intuition as an affective evaluation, whereas others, such as Sunstein (2005) and Pizarro et al. (2003), characterize moral intuition as heuristic cognition. The primary goal of Study 2 is to explore the convergent validity between these two prominent conceptions of moral intuition. Haidt (2008) argues that the disagreement evident among those who study moral cognition within psychology is a product of the differences between the two general perspectives that he believes characterize the field.
According to Haidt (2007), the dominant perspective in the study of moral cognition within psychology since the cognitive revolution has been the cognitive-developmental perspective; as such, Haidt refers to it as the mainline in moral psychology. With an intellectual lineage from Piaget (1932/1965) through Kohlberg (1984a) and on to Turiel (2006), the cognitive-developmental perspective emphasizes a rational, deliberative approach to moral decision-making. According to Haidt (2008), the past two decades have witnessed a shift in the study of human cognition across a number of disciplines; there has been a broader recognition of the role of automatic, unconscious, and intuitive processes in human cognition. Scholars from these varied disciplines began to apply their theoretical frameworks and empirical findings to human social functioning, cultivating a renewed interest in moral intuition within psychology. Haidt (2008) refers to this pan-disciplinary convergence of interest in human moral intuition as the new synthesis in moral psychology. Given the rancor evident amongst those who study moral intuition, there is ample reason to question whether the renewed interest in moral intuition actually represents a synthesis of ideas. According to Haidt (2008), the new synthesis in moral psychology reflects the widespread recognition that the evolved capacity to make important social decisions automatically, without deliberation, is the primary decision-making system in the human mind, and that the much more recently evolved capacities for language and higher-order reasoning are often slaves to this intuitive system. Haidt (2008) credits E. O. Wilson as the progenitor of the new synthesis. Wilson (1975, p.
562) argued that morality had evolved from instincts and, in criticizing the mainline of moral psychology, suggested that perhaps it was time for “ethics to be removed from the hands of the philosophers and biologicized.” Although Wilson’s comments had little immediate effect on the study of morality within psychology or on the dominance of the cognitive-developmental perspective, in time sociobiology, reborn as evolutionary psychology, would come to significantly influence how psychologists construe the human mind. Haidt (2008) argues that the proliferation of dual-process theories of cognition in social psychology played a key role in the development of the new synthesis. Many dual-process theories are based on the idea that the human mind is composed of two distinct, yet functionally interactive, systems that evolved independently. The operationalization of moral intuition employed in Study 1 was based on a dual-process model of cognition, Epstein’s (2008) CEST approach. The primary goal of Study 2 is to examine whether measures of moral intuition from different theoretical perspectives within the new synthesis in moral psychology demonstrate convergent validity. It is remarkable that, to date, there have been no published attempts to demonstrate convergent validity between such measures. Given that those who study moral intuition are unable to agree on basic questions such as whether intuition is the product of affect, cognition, or both, determining whether the various theoretical camps are, in fact, studying the same construct would seem to be the most crucial task for the field today. Epstein (2008) makes the case that the experiential system of the Cognitive-Experiential Self-Theory (CEST) approach captures all of the various characteristics that have been attributed to intuition: It operates outside of conscious awareness, it is rapid, and it is affectively charged.
On these points it would seem that the CEST approach corresponds quite well with Haidt's (2001, 2004) Social Intuitionist Model (SIM). Recall that the SIM explains moral judgment as "the sudden appearance in consciousness, or at the fringe of consciousness, of an evaluative feeling (like–dislike, good–bad) about the character or actions of a person, without any conscious awareness of having gone through steps of search, weighing evidence, or inferring a conclusion" (Haidt & Bjorklund, 2008a, p. 188). Both approaches emphasize a lack of conscious deliberation and cognitive speediness as aspects of intuition, but they disagree on the role of affect. Where the CEST model emphasizes affectively charged cognition, the SIM focuses on the role of affective evaluations. This difference is reflected in the different methodologies that the adherents of these approaches employ to operationalize intuition. Adherents of the SIM and CEST approaches measure the construct in very different ways. Much of the early research with the SIM employed the moral dumbfounding methodology, which involves the presentation of a vignette in which there is no direct evidence of psychological, social, or physical harm, but which evokes strong moral judgments from most people. Perhaps the most well-known moral dumbfounding vignette is Haidt's (2001) sibling incest story. Julie and Mark are brother and sister. They are travelling together in France on summer vacation from college. One night they are staying alone in a cabin near the beach. They decide it would be interesting and fun if they tried making love. At the very least it would be a new experience for each of them. Julie was already taking birth control pills, but Mark uses a condom too, just to be safe. They both enjoy making love, but they decide not to do it again. They keep that night as a special secret, which makes them feel even closer to each other. What do you think about that?
Was it OK for them to make love? (p. 814) Participants are asked to rate whether or not the behavior in the vignette was wrong and to provide some justification for their judgments. Although Haidt's (2001) argument that all judgments on the moral dumbfounding task are necessarily moral intuitions is flawed, it is still possible that many of the judgments captured with the moral dumbfounding task are, in fact, moral intuitions. The metric of the moral dumbfounding task is how strongly one believes in one's judgment. Haidt and his colleagues have continued to use strength of conviction as a metric to quantify moral judgment (cf. Schnall et al., 2008; Wheatley & Haidt, 2005). The most common measure used within the CEST approach is the Rational Experiential Inventory (REI). The REI is a self-report measure that generates scores for two general cognitive styles: rational and experiential (Epstein, 2008). However, the REI is not a production measure; participants simply describe their typical problem-solving approach. Those who rely on the REI need to employ other measures to capture in vivo decision-making. In recent years researchers have compared people's scores on the REI with their performance on classic heuristics-and-biases decision-making tasks such as the Linda problem (Epstein, Denes-Raj, & Pacini, 1995), the ratio-bias phenomenon (Pacini & Epstein, 1999), the Asian disease problem (Bjorklund & Backstrom, 2008), and framing effects (Shiloh et al., 2002). In each of these examples, intuitive decision-making was defined by irrational or non-normative choices. Epstein (2008) argues that the experiential system within CEST is not characterized by lazy thinking or merely the application of heuristic decision rules; yet, the bulk of the research conducted from the CEST perspective defines intuitive judgment as fundamentally irrational. As such, the causal deviance task represents a logical extension of the CEST framework to the study of moral intuition.
On the causal deviance task, any deviation from equal attributions of blameworthiness or quality of character is irrational and, as such, is considered to be an intuitive decision. Both the moral dumbfounding task and the causal deviance task are characterized as measures of moral intuition, but are they really measuring the same construct? This is a crucial empirical question that must be resolved in order to further our understanding of moral intuition. If there is a strong relation between these measures, then we can accept that both theoretical accounts of moral intuition hold some validity. If, however, there is no meaningful relation between these measures of moral intuition, then we will need to consider the implications for the construct of moral intuition and its operationalization. Given that the moral dumbfounding task measures moral intuition in the metric of strength of commitment, whereas the causal deviance task defines moral intuition as irrational or non-optimal decision-making, there is reason to speculate that there will be minimal convergence between these measures. Of course, attempting to identify a reasonable measure of moral intuition is only one part of this project. This project was initiated to examine the relation between moral autonomy and intuitive moral decision-making. Two theoretical approaches from within the cognitive-developmental tradition inspired this interest: Davidson and Youniss' (1991) discussion of moral autonomy as moral identity, and Kohlberg's (Colby & Kohlberg, 1987) notion of moral types. The second goal of Study 2 was to examine whether moral identity is related to moral autonomy, and to measure the extent to which each construct is related to intuitive moral decision-making. Davidson and Youniss (1991) argued that moral autonomy is akin to moral identity, and that the autonomous moral identity is characterized by reflexive, non-deliberative moral functioning.
Working from the Piagetian tradition, Davidson and Youniss argued that, through mutual and reciprocal interactions with peers, one may develop an autonomous moral identity. Davidson and Youniss' notion of the autonomous moral identity is conceptually related to Kohlberg's autonomous moral type. In this study, one goal was to examine whether moral autonomy, as operationalized in Gibbs et al.'s (1992) measure of the moral typology, was related to moral identity. One of the difficulties in studying moral identity over the past several decades has been the relative dearth of measures available to operationalize the construct. In this study I will employ a self-report measure of moral identity constructed by Aquino and Reed (2002). Aquino and Reed undertook the construction of a measure of moral identity in order to rectify a perceived lack of valid measures of the construct. They argue that moral identity is an important construct in that it is intrinsically related to moral behavior. In their framework, moral identity is defined as a "self-conception organized around a set of moral traits" (Aquino & Reed, 2002, p. 1487). Aquino and Reed's measure produces scores on two factors that they propose underlie moral identity: internalization and symbolization. Internalization reflects the degree to which moral traits are central to the individual's self-concept; symbolization, on the other hand, reflects the degree to which people's real-world behaviors reflect their moral traits. Previous research suggests that the moral typology is related to identity. Newitt and Walker (2002) found that participants who were classified as the autonomous moral type scored lower on the less mature ego-identity statuses of foreclosure and diffusion. How this previous research might translate to Aquino and Reed's measure of moral identity is somewhat unclear.
However, since higher scores on the internalization and symbolization factors are thought to reflect a more mature moral identity, it seems reasonable to speculate that these scales should be positively correlated with the autonomous moral type. Yet there is some question as to how best to characterize moral autonomy. Krettenauer and Edelstein (1999) make a convincing argument that Gibbs et al.'s (1992) measure of moral autonomy, the Sociomoral Reflection Measure–Short Form (SRM-SF), confounds at least two different theoretical conceptions of moral autonomy. They argue that Kohlberg's (Tappan et al., 1987) initial conceptualization of moral autonomy incorporated elements of Piaget's (1932/1965), Baldwin's (1898), and Kant's (1785/1948) notions of moral autonomy, and that this fusion is reflected in the fundamental elements employed to score moral autonomy on the SRM-SF: conscience, fundamental valuing, and balancing perspectives. Conscience elements on the SRM-SF are thought to reflect the notion that moral ideals are felt from within; that is, moral values are prescriptive, they reflect what one believes and feels one ought to do, and they are integrated into the self-structure of the individual (Gibbs et al., 1992). Examples of judgments with conscience elements contain references to the need to act in a particular way in order to maintain one's personal integrity or self-esteem, or to avoid feelings of regret or guilt. Krettenauer and Edelstein argue that the conscience elements in the SRM-SF reflect the Kantian notion of prescriptiveness. Both Baldwin (1898) and Kant (1785/1948) emphasized the universality of autonomous moral judgments, and the fundamental valuing elements of the SRM-SF are intended to capture this universality.
In the scoring manual for the SRM-SF, a moral judgment is said to reflect fundamental valuing when "it extends or generalizes values such as life to all humanity, and not just to those in particular relationships or societies" (Gibbs et al., 1992, p. 25). Heteronomous moral judgments, on the other hand, ascribe moral values only to particular people or circumstances. For example, the justification that one should save a friend's life simply because that is what friends do would be a heteronomous judgment; on the other hand, arguing that one should save a friend's life because all life is precious would be an autonomous judgment, because the value of life is generalized. Balancing of perspectives is the third fundamental element that underlies moral autonomy on the SRM-SF. Krettenauer and Edelstein argue that balancing of perspectives represents a Piagetian notion of moral autonomy. Piaget (1932/1965) argues that moral autonomy develops from the individual's ability to engage in mutual and reciprocal perspective-taking with their social partners. That is, morally autonomous individuals are careful to consider other perspectives and points of view when making their moral decisions. An example of a moral judgment demonstrating balancing of perspectives would be an appeal to treat others the way that you would want them to treat you. In order to be categorized as the autonomous moral type on the SRM-SF, an individual needs to produce moral judgments that contain examples of at least two of the three fundamental elements. Given that these elements are derived from three relatively distinct theories of moral autonomy, it is likely that the operationalization of moral autonomy on the SRM-SF conflates distinct theoretical approaches. Krettenauer and Edelstein (1999) sought to remedy this situation by measuring moral autonomy in terms of universality and prescriptiveness separately.
They found that by decomposing autonomy into its core elements, they significantly increased the predictive validity of the construct in terms of real-world behavior. In the current study, it may be fruitful to decompose moral autonomy into the three scores used to measure it on the SRM-SF (conscience, fundamental valuing, and balancing of perspectives). The three fundamental elements seem to reflect very different forms of moral functioning: Conscience reflects moral judgments based on an inner compulsion or feeling, fundamental valuing reflects moral judgments based on the application of universal moral rules, and balancing of perspectives reflects moral judgments derived from the consideration of other perspectives. It is possible that the three fundamental elements of moral autonomy scored with the SRM-SF could demonstrate unique relations with each of the measures of moral intuition and the two scales on Aquino and Reed's measure of moral identity. Given that conscience judgments have a strong affective component, they could plausibly relate to the moral dumbfounding task, as it too is based on a notion of moral judgment as affective evaluation. On the other hand, since irrational responding is what defines moral intuition on the causal deviance task, it is possible that emotional responding related to the notion of conscience is the source of such irrational choices. Moreover, given Gibbs et al.'s (1992) contention that the conscience element of the moral typology reflects the integration of moral principles into the individual's self-definition, we should expect a significant positive relation between the conscience element of the moral typology and the internalization scale of Aquino and Reed's moral identity measure.
It is somewhat less clear how fundamental valuing and balancing perspectives will be related to the moral dumbfounding task, the causal deviance task, or Aquino and Reed's moral identity measure. If Bartsch and Cole Wright (2005) are correct and moral heuristics are simply general, universal moral rules such as "do not have sex with your sibling," then fundamental valuing, which reflects the application of universal moral rules, could be related to performance on the moral dumbfounding task. If there is a relation between the moral dumbfounding task and fundamental valuing elements, this would suggest that the moral judgments made on the task are not exclusively affective. How fundamental valuing might be related to the causal deviance task is more difficult to predict. However, given that moral intuition on the causal deviance task is defined by the misapplication of moral rules and fundamental valuing represents the mature application of moral rules, one should expect fundamental valuing scores to be negatively correlated with performance on the causal deviance task. As higher scores on fundamental valuing and on the internalization and symbolization scales of Aquino and Reed's moral identity measure are all thought to reflect moral maturity, we should expect to find positive relations among these measures. The same argument can be made for the relation between balancing of perspectives scores and scores on the internalization and symbolization scales. As balancing of perspectives entails conscious deliberation, it should be negatively correlated with any measure of moral intuition. To summarize, then, Study 2 will examine the convergent validity between the moral dumbfounding task and the causal deviance task, two well-known measures of moral intuition.
Given that the moral dumbfounding task measures moral intuition in the metric of strength of commitment, and the causal deviance task defines moral intuition as irrational or non-optimal decision-making, there is reason to suspect that there will be little convergence between these measures of moral intuition. Having argued the merits of decomposing moral autonomy into the fundamental elements that define it on the SRM-SF, it would be appropriate to examine the relations between the elements of conscience, fundamental valuing, and balancing of perspectives and the measures of moral intuition and moral identity. Given that conscience judgments have a strong affective component, it is predicted that conscience scores will be positively related to responses on the moral dumbfounding task and to irrational responses on the causal deviance task. Also, a significant positive relation is expected between the conscience element of the moral typology and the internalization scale of Aquino and Reed's moral identity measure. And finally, given that moral intuition on the causal deviance task is defined by the misapplication of moral rules, and fundamental valuing represents the mature application of moral rules, it is predicted that fundamental valuing scores will be negatively correlated with performance on the causal deviance task.

Method

Participants

Participants in this study were 70 undergraduate students (51 women, 19 men), with a mean age of 19.9 years (SD = 1.4). Of these participants, 29 reported being born in Canada, 36 in East Asia, 4 in South Asia, and 1 in Europe. All participants received course credit for their contribution.

Measures

Moral identity.
The explicit moral identity measure, constructed by Aquino and Reed (2002), is a 10-item pencil-and-paper measure in which participants indicate on a 5-point Likert scale from 1 (strongly agree) to 5 (strongly disagree) the degree to which the items describe them. The measure generates two factors thought to underlie moral identity. The first factor, symbolization, is thought to reflect the degree to which individuals' actions reflect their moral traits. A typical symbolization item would be, "I often wear clothes that identify me as having these characteristics" (Reed & Aquino, 2003, p. 1286). The second factor, internalization, is thought to reflect the degree to which moral traits are considered central to the individual's self-concept. A typical internalization item would be, "It would make me feel good to be a person who has these characteristics" (Reed & Aquino, 2003, p. 1286). Aquino and Reed's moral identity measure has demonstrated strong construct validity and meaningful relations with moral cognition, and the internalization factor has been shown to significantly predict prosocial behavior.

Sociomoral Reflection Measure–Short Form (SRM-SF).

The SRM-SF (Gibbs et al., 1992) is a pencil-and-paper production task designed to measure moral maturity within the context of the first four stages of Kohlberg's stage model. The format of the SRM-SF is well suited to group administration. The SRM-SF consists of 11 brief contextual statements based on the moral norms from Colby and Kohlberg's (1987) scoring manual. The values reflected in these statements emphasize truth, contract, the value of life, property and law, affiliation, and legal justice, respectively (Basinger et al., 1995).
For example, the truth scenario asks participants to respond to the question, "In general, how important is it for people to tell the truth?" and the value of life question asks participants to consider, "Let's say a friend of yours needs help and may even die, and you're the only person who can save him or her. How important is it for a person (without losing his or her own life) to save the life of a friend?" For each statement, participants must first indicate whether the value in the statement is "very important," "important," or "not important," and are then asked to justify their evaluations. The SRM-SF coding scheme also allows for the scoring of Kohlberg's moral typology. Gibbs et al.'s scoring scheme measures moral type by identifying moral judgments that exhibit (a) balanced perspective-taking, (b) fundamental valuing, and (c) aspects of conscience. Balanced perspective-taking is characterized by taking into consideration the perspective of others; for example, a response to the truth scenario would be scored as demonstrating balanced perspective-taking if it argued that "you should treat others the way you would want them to treat you." Fundamental valuing is reflected in moral decisions that are based on universal principles that generalize beyond the present circumstances to include all people, such as, "promises are precious or priceless." Conscience elements are characterized by judgments that have a strong affective motivation; for example, "you would feel rotten, terrible, ashamed, bad about yourself, or guilty, or you could have emotional problems or become depressed," or "for the sake of self-respect, or one's integrity, dignity, honor, consistency, or sense of self-worth." To be categorized as the autonomous moral type, an individual must demonstrate at least two of these three criteria in their protocol; otherwise, they are scored as the heteronomous moral type.
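The two-of-three classification rule just described can be expressed as a small function. This is an illustrative sketch only; the function and its names are hypothetical and are not part of Gibbs et al.'s (1992) scoring materials:

```python
def classify_moral_type(has_conscience, has_fundamental_valuing, has_balancing):
    """Classify an SRM-SF protocol as autonomous or heteronomous.

    Each argument is True when the protocol contains at least one moral
    judgment exhibiting that fundamental element (conscience, fundamental
    valuing, or balancing of perspectives).
    """
    elements_present = sum(
        [has_conscience, has_fundamental_valuing, has_balancing]
    )
    # The autonomous type requires at least two of the three elements.
    return "autonomous" if elements_present >= 2 else "heteronomous"
```

For example, a protocol showing conscience and fundamental valuing elements but no balancing of perspectives would be classified as autonomous.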
The SRM-SF has demonstrated acceptable test-retest and split-half reliability (Gibbs et al., 1992). The coding scheme for the SRM-SF has demonstrated high interrater reliability even with inexperienced coders (r = .94; Gibbs et al., 1992). Basinger et al. (1995) found high concurrent validity between the SRM-SF and the Moral Judgment Interview (Colby & Kohlberg, 1987). In the present study, interrater reliability for each of the fundamental elements of moral autonomy, based on a random sample of 20 questionnaire packages, was κ = .88 for conscience, κ = .92 for fundamental valuing, and κ = .68 for balancing of perspectives. The fundamental element scores used in the data analyses represent the total number of instances of each type of element in the participant's responses across the 11 questions in the measure.

Causal Deviance Measure.

Participants were presented with two sets of hypothetical moral vignettes, constructed by Pizarro et al. (2003) and designed to evoke either rational or intuitive judgment processes. Each vignette contrasted two immoral episodes. In the first (the causally normal story), the protagonist intends some immoral act (e.g., killing a spouse), acts on this intent, and carries out the immoral act, achieving the goal. In the second (the causally deviant story), the protagonist intends some immoral goal and acts so as to bring it about, but an intervening event directly achieves the protagonist's original goal. In one set of vignettes, the causally normal story involves one man throwing a knife at another man, intending to kill him, and indeed succeeding; in the causally deviant story, the man throws the knife with the same intent, but the victim sees the knife coming and dies of a heart attack, although the knife would have killed him had he not fallen to the floor.
In the other set of vignettes, the causally normal story involves a woman poisoning her husband during dinner at a restaurant; in the causally deviant story, the woman attempts to poison her husband, but her efforts only make his food unpalatable, so he orders another meal, which, unbeknownst to him, contains food to which he is allergic, killing him. The vignettes were counterbalanced so that each vignette was randomly paired with instructions to make rational or intuitive decisions. For each set of episodes, participants are asked to rate which actor's actions were more morally wrong and which actor is of worse moral character. Participants are asked either to make their most rational decision or to report their intuitive gut feeling for each set of questions. The scale ranges from (1) A is much worse than B, to (2) A is a little worse than B, to (3) A is equal to B, to (4) B is a little worse than A, and finally to (5) B is much worse than A. According to Pizarro et al. (2003), any deviation from the two actors being rated as equally bad is considered an incidence of intuitive judgment. The attenuation of moral responsibility in these causally deviant vignettes is considered a deviation from the normative standard that moral responsibility should be determined by intentions and actions, rather than simply by consequences. So, in this methodology, intuitive judgment is defined as a deviation from a rationally determined normative standard.

Moral dumbfounding.

Adapted from Haidt's (2001) moral dumbfounding methodology, participants were asked to make a moral judgment concerning Haidt's sibling incest story. The moral dumbfounding methodology involves presenting participants with vignettes that lack any discernible instance of physical harm or injustice, yet still evoke intuitive moral judgments (Haidt et al., 2000). The sibling incest story is a typical vignette from Haidt (2001) and is the story employed for Study 2.
Julie and Mark are brother and sister. They are travelling together in France on summer vacation from college. One night they are staying alone in a cabin near the beach. They decide it would be interesting and fun if they tried making love. At the very least it would be a new experience for each of them. Julie was already taking birth control pills, but Mark uses a condom too, just to be safe. They both enjoy making love, but they decide not to do it again. They keep that night as a special secret, which makes them feel even closer to each other. What do you think about that? Was it OK for them to make love? (p. 814)

Having read the sibling incest story from Haidt (2001), participants were asked to indicate on a 7-point scale the permissibility of siblings engaging in sexual intercourse (1 = strongly agree, it was okay; 7 = strongly disagree, it was wrong).

Procedure

Each questionnaire sitting was conducted with up to five participants. Participants had up to 2 hours to complete the package.

Results

The data analysis strategy for Study 2 follows the structure laid out for Study 1. First, there will be a nonparametric analysis of the causal deviance measure. Then the results from the causal deviance measure will be compared with the results from the moral dumbfounding task to determine whether there is convergent validity between the two measures of moral intuition. The analysis will then turn to the hypotheses regarding the relation between the conception of moral autonomy on the SRM-SF and the two scales generated with Aquino and Reed's measure of moral identity. And finally, multiple regression analyses will be conducted in order to construct equations to predict moral intuition as measured by the moral dumbfounding task and the causal deviance task. The percentage of participants who deviated from equal judgments of permissibility and character is depicted in Figure 2.
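As described for the causal deviance measure, any rating off the scale midpoint (3 = A is equal to B) is counted as a deviant, and hence intuitive, response. A minimal sketch of this tallying rule (the function names are illustrative, not Pizarro et al.'s):

```python
def is_deviant(rating, midpoint=3):
    """A 1-5 comparison rating counts as deviant whenever the two
    actors are not judged to be equally bad (i.e., rating != 3)."""
    return rating != midpoint

def percent_deviant(ratings, midpoint=3):
    """Percentage of participants whose rating departs from equality."""
    n_deviant = sum(1 for r in ratings if is_deviant(r, midpoint))
    return 100.0 * n_deviant / len(ratings)
```

Applying `percent_deviant` separately to the ratings collected under rational and intuitive instructions yields the kinds of percentages reported below.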
[Figure 2. Percentage of Participants Responding Deviantly in Judging Actions and Character (Study 2). Bars show the percentage of participants who deviated under rational versus intuitive instructions, separately for judgments of actions and of character, for the rational-first and intuitive-first instruction orders.]

Recall that the CEST framework suggests that the intuitive or rational systems can be activated by explicit instructions. After collapsing across instruction order, when prompted to make an intuitive judgment regarding the actions of the protagonists, 27% of participants deviated from the normative response; when asked to make a rational judgment regarding the actions of the protagonists, 7% of participants deviated from the normative standard. In general, the rate of deviation in response to the intuitive instructions on the action judgments was quite low; however, in relative terms, 20 percentage points more participants deviated after the intuitive instructions than after the rational instructions. The rates of deviant responding were even lower for the judgments of character. When asked to make an intuitive judgment regarding the character of the protagonists, 9% of participants deviated from the normative standard; when asked to make a rational judgment, 3% deviated. The data suggest that simply asking people to make intuitive decisions has a relatively small influence on deviant responding. When collapsing across instruction type and instruction order, 30% of participants deviated on judgments of actions and 10% deviated on judgments of character. This suggests that, in general, people are more likely to make deviant judgments of actions than of character.
McNemar tests were conducted to compare the proportion of participants who deviated in judgments of the permissibility of actions under instructions to be rational versus instructions to be intuitive. For the group that received the rational instructions first, the analysis indicated that a significantly larger proportion of participants deviated in their judgments of the permissibility of actions in response to the intuitive instructions compared to the rational instructions, χ2(1, N = 40) = 6.33, p = .006, φ = .40. For judgments of character, on the other hand, there was no significant difference in the proportion of participants who deviated in response to rational or intuitive instructions, χ2(1, N = 40) = 0.92, p = .25, φ = .15. Thus, given intuitive instructions, relatively more participants deviated from equal judgments of the permissibility of actions than when given instructions to be rational, whereas the type of instruction did not have a significant effect on the proportion of participants who deviated from equal judgments of character. For the group that received the intuitive instructions first, the McNemar tests indicated that there were no significant differences in the proportion of participants who deviated in their judgments of actions, χ2(1, N = 30) = 3.76, p = .06, φ = .35, or in their judgments of character, χ2(1, N = 30) = 0.09, p = .95, φ = .02, regardless of the type of instructions given. Thus, for the group given the intuitive instructions first, instruction type did not significantly affect the likelihood of deviating from the normative standard for judgments of either actions or character. The next step in the analysis of the data was to examine the relation between the two measures of moral intuition (the moral dumbfounding and causal deviance tasks). Moral dumbfounding scores (M = 5.85, SD = 1.70) were unrelated to responses to either the action questions from the causal deviance task, rpb(68) = -.15, p = .22, or the character questions, rpb(68) = -.16, p = .20.
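The two statistics used above can be computed from first principles. The sketch below implements McNemar's test (via the 1-df chi-square approximation) and the point-biserial correlation; the counts and scores it would be applied to here are illustrative, not the study's data:

```python
from math import erfc, sqrt

def mcnemar_chi2(b, c, correction=True):
    """McNemar's test for paired dichotomous outcomes.

    b and c are the discordant counts: participants who deviated under
    one instruction type but not under the other. Returns (chi2, p)
    from the 1-df chi-square approximation.
    """
    numerator = (abs(b - c) - 1) ** 2 if correction else (b - c) ** 2
    chi2 = numerator / (b + c)
    p = erfc(sqrt(chi2 / 2))  # survival function of chi-square with df = 1
    return chi2, p

def point_biserial(dichotomous, scores):
    """Point-biserial correlation: Pearson r between a 0/1 variable
    (e.g., deviant vs. non-deviant responding) and a continuous score
    (e.g., moral dumbfounding ratings)."""
    n = len(scores)
    mx = sum(dichotomous) / n
    my = sum(scores) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(dichotomous, scores))
    var_x = sum((x - mx) ** 2 for x in dichotomous)
    var_y = sum((y - my) ** 2 for y in scores)
    return cov / sqrt(var_x * var_y)
```

The chi-square approximation is the textbook form of McNemar's test; with small discordant counts an exact binomial version is often preferred.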
These findings indicate that the two measures of moral intuition do not converge on a common construct of moral intuition. The analysis now turns to the hypotheses concerning the relation between the fundamental elements of moral autonomy from the SRM-SF and the scales from Aquino and Reed's measure of moral identity. Table 3 displays the correlations among the measures of moral intuition and the indices of moral autonomy and moral identity.

Table 3. Correlations between Moral Intuition, Moral Autonomy, and Moral Identity

                               2       3       4       5       6       7       8
Moral intuition
  1. Moral dumbfounding       .15     .16     .18     .07     .05    -.07    -.22
  2. Action judgment           —      .47***  .35**  -.19    -.23     .16    -.04
  3. Character judgment                —      .27*   -.17    -.12     .11     .03
Moral autonomy
  4. Conscience                                —     -.10    -.13    -.11    -.35**
  5. Fundamental valuing                               —      .28*   -.22    -.30**
  6. Balancing perspectives                                    —     -.23     .08
Moral identity
  7. Internalization                                                   —      .20
  8. Symbolization                                                             —

* p < .05. ** p < .01. *** p < .001.

Given that the conscience element and the internalizing scale are both described by their respective authors as reflecting the integration of moral principles into the self-structure, it was predicted that the scores from these measures would be highly related. The data, however, suggest that the number of conscience judgments an individual produces on the SRM-SF is not related to their internalizing score from Aquino and Reed's moral identity measure, r(68) = -.11, p = .35. Since higher scores on the conscience element of the SRM-SF and on the two scales of the moral identity measure are thought to reflect more mature moral functioning, it was also predicted that there would be a strong positive relation between the number of conscience judgments produced on the SRM-SF and scores on the symbolization scale of the moral identity measure. However, the number of conscience judgments produced on the SRM-SF was, in fact, negatively related to scores on the symbolization scale, r(68) = -.35, p = .003. Thus, neither of the predicted relations between the conscience element of the moral typology and the measure of moral identity was supported. Since higher scores on the fundamental elements of the SRM-SF and on the two scales of the moral identity measure are thought to reflect more mature moral functioning, it was predicted that there would be a strong positive relation between the number of judgments demonstrating fundamental valuing produced on the SRM-SF and scores on the internalizing and symbolization scales of the moral identity measure. The relation between the number of examples scored as fundamental valuing and scores on the internalizing scale was weak and not statistically significant, r(68) = -.22, p = .06. The relation between the number of examples scored as fundamental valuing and scores on the symbolization scale was statistically significant; however, the relation was in the opposite direction from what had been predicted, r(68) = -.30, p = .01. Thus, again, the symbolization scale was negatively related to the production of elements of mature moral judgment. Similarly, since higher scores on the balancing of perspectives element of the SRM-SF and on the two scales of the moral identity measure are thought to reflect more mature moral functioning, it was predicted that there would be a strong positive relation between the number of balancing of perspectives judgments produced on the SRM-SF and scores on the internalizing and symbolization scales of the moral identity measure. The number of judgments scored as demonstrating balancing of perspectives was not significantly related to scores on the internalizing scale of the moral identity measure, r(68) = -.23, p = .06; nor was it significantly related to scores on the symbolization scale, r(68) = .08, p = .52.
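The coefficients in Table 3 are Pearson product-moment correlations (point-biserial where one variable is dichotomous), with df = n − 2 = 68. A minimal sketch of the computation, using made-up data rather than the study's, shows how a coefficient and its test statistic are obtained; the two-sided p-value is then read from the t distribution with n − 2 degrees of freedom:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation. With a dichotomous x coded
    0/1, this is numerically identical to the point-biserial r."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def r_to_t(r, n):
    """t statistic for testing H0: rho = 0 against a two-sided
    alternative, with n - 2 degrees of freedom."""
    return r * math.sqrt((n - 2) / (1.0 - r ** 2))

# Illustrative data (not the study's):
x = [1, 2, 3, 4, 5]
y = [2, 1, 4, 3, 5]
r = pearson_r(x, y)
t = r_to_t(r, len(x))
```

With n = 70, a correlation of about .24 in magnitude is needed to cross the conventional .05 threshold, which is why coefficients such as r(68) = -.22 and r(68) = -.23 hover just above p = .05.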
In other words, none of the predicted relations between the fundamental elements of the SRM-SF and the two scales on the moral identity measure was supported. Several other interesting findings are illustrated in Table 3. Past research with Aquino and Reed's objective measure of moral identity has demonstrated a statistically significant relation between the scores from the internalization and symbolization scales of the measure (r = .32; Reed & Aquino, 2003). Yet, in the current study, this relation was not evident, r(68) = .20, p = .10. Note, however, that even in Reed and Aquino's (2003) study, the relation between the scales was weak. Of note, as well, are the relations among the elements of moral autonomy derived from the SRM-SF. The conscience element of moral autonomy was related neither to the fundamental valuing element, r(68) = -.10, p = .41, nor to the balancing perspectives element, r(68) = -.13, p = .28. On the other hand, the fundamental valuing element was positively related to the balancing perspectives element, r(68) = .28, p = .02. These findings provide some support for Krettenauer and Edelstein's (1999) contention that the elements of moral autonomy measured by the SRM-SF are not homogeneous. The observed relation between fundamental valuing and balancing perspectives scores may reflect the fact that both of these elements are primarily cognitive, whereas the conscience element is primarily affective. The relation between the measures of moral intuition and those tapping moral identity and moral autonomy was examined in the following multiple regression analyses; the bivariate correlations among these measures are found in Table 3. The analysis now turns to constructing equations to predict intuitive moral decision-making on the moral dumbfounding task and the causal deviance task. In order to predict responding on the moral dumbfounding task, a forced-entry multiple regression analysis was conducted.
Table 4 details the results of the analysis: the scores for the fundamental elements of the SRM-SF (conscience, fundamental valuing, and balancing perspectives) and the scores from the internalizing and symbolization scales of the measure of moral identity were entered into the analysis to predict participants' responses on the moral dumbfounding task. The regression analysis did not result in an equation that would predict scores on the moral dumbfounding task significantly better than simply relying on the mean, F(5, 69) = 0.97, p = .44, f2 = .08. When considered independently, none of the variables entered into the equation was a significant predictor of moral dumbfounding scores.

Table 4. Multiple Regression for the Prediction of Scores on the Moral Dumbfounding Measure (Study 2)

Variable                     B      SE      β        p
Conscience                  .20    .21     .13      .35
Fundamental valuing        -.09    .30    -.04      .77
Balancing perspectives      .17    .21     .10      .44
Internalizing               .00    .05    -.002     .98
Symbolization              -.12    .08    -.20      .17

Note. R2 = .07.

Table 5 presents the data for a hierarchical logistic multiple regression analysis that was conducted to identify the variables that best predict non-normative deviations for judgments regarding the actions of the protagonists on the causal deviance task. The dependent variable for the regression was a binary, categorical variable: participants who did not demonstrate deviation on the action judgments from the causal deviance task (n = 49) and those who deviated at least once in response to the action questions posed (n = 21). The regression was conducted in two steps: in the first step, the between-subjects variable of instruction order was entered into the analysis to control for the effect of order of instruction on non-normative responding; in the second step, conscience, fundamental valuing, balancing of perspectives, internalization, and symbolization were added as predictors.
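The fit indices used for these hierarchical logistic regressions, the likelihood-ratio chi-square for the change between steps, the Nagelkerke R2, and the odds ratio Exp(B), can all be sketched from the models' log-likelihoods. The log-likelihood values below are invented for illustration; they are not the study's:

```python
import math

def step_chi2(ll_reduced, ll_full):
    """Likelihood-ratio chi-square for the predictors added between two
    nested steps; df equals the number of added predictors."""
    return 2.0 * (ll_full - ll_reduced)

def nagelkerke_r2(ll_null, ll_model, n):
    """Nagelkerke's rescaling of the Cox & Snell pseudo-R2 so that the
    maximum attainable value is 1."""
    cox_snell = 1.0 - math.exp(2.0 * (ll_null - ll_model) / n)
    max_cox_snell = 1.0 - math.exp(2.0 * ll_null / n)
    return cox_snell / max_cox_snell

def odds_ratio(b):
    """Exp(B): each one-unit increase in the predictor multiplies the
    odds of the outcome by this factor, other predictors held constant."""
    return math.exp(b)

# Invented log-likelihoods for a sample of n = 70:
ll_null, ll_step1, ll_step2 = -42.0, -41.0, -33.9
lr = step_chi2(ll_step1, ll_step2)      # compared against chi-square(5)
r2_n = nagelkerke_r2(ll_null, ll_step2, 70)
or_conscience = odds_ratio(0.76)        # a B of .76 gives an OR near 2.1
```

The last line shows how a coefficient of B = .76 corresponds to an odds ratio of roughly 2.1: exponentiating the logit coefficient converts an additive effect on the log-odds into a multiplicative effect on the odds.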
The change in the fit of the predictive equation from Step 1 to Step 2 was statistically significant and corresponded to a medium-sized effect, χ2(5, N = 70) = 14.26, p = .01, φ = .45. The Hosmer and Lemeshow chi-square goodness-of-fit test indicated that Step 2 of the logistic regression adequately fit the data, χ2(8, N = 70) = 7.94, p = .43. There is little agreement among statisticians on a measure of the variability explained in logistic regression, but many authors favor the Nagelkerke R2; for Step 2 of the current analysis, the Nagelkerke R2 = .30.

Table 5. Logistic Multiple Regression for the Prediction of Non-Normative Action Judgments on the Causal Deviance Task (Study 2)

Variable                     B      SE    Exp(B)   df      p
Step 1
  Instruction order         -.88    .56     .42     1     .12
Step 2
  Conscience                 .76    .30    2.15     1     .01
  Fundamental valuing       -.51    .58     .60     1     .38
  Balancing perspectives    -.54    .42     .59     1     .20
  Internalizing              .08    .06    1.08     1     .21
  Symbolization              .02    .12    1.01     1     .89
  Instruction order         -.81    .66     .44     1     .21

Note. Nagelkerke R2 = .30.

Only one of the predictors entered into the regression analysis was statistically significant: the number of conscience elements that the participant produced. The odds ratio indicates that, when all of the other predictors are held constant, each conscience element scored on an individual's SRM-SF multiplied the odds that they would respond non-normatively to the action probe on the causal deviance task by 2.15. Table 6 presents the data for a hierarchical logistic multiple regression analysis that was conducted to identify the variables that best predict non-normative deviations for judgments regarding the character of the protagonists on the causal deviance task. The dependent variable for the regression was a binary, categorical variable: participants who did not demonstrate deviation on the character judgments from the causal deviance task (n = 64) and those who deviated at least once in response to the character questions posed (n = 6).
The regression was conducted in two steps: in the first step, the between-subjects variable of instruction order was entered into the analysis to control for the effect of order of instruction on non-normative responding; in the second step, conscience, fundamental valuing, balancing of perspectives, internalizing, and symbolization were added as predictors. The change in the fit of the predictive equation from Step 1 to Step 2 was statistically significant and corresponded to a medium-sized effect, χ2(5, N = 70) = 12.99, p = .02, φ = .43. The Hosmer and Lemeshow chi-square goodness-of-fit test indicated that Step 2 of the logistic regression adequately fit the data, χ2(8, N = 70) = 8.12, p = .42. For Step 2 of this multiple regression analysis, the Nagelkerke R2 = .39. As with the regression analysis for the action judgments, the analysis revealed only one statistically significant predictor of non-normative character judgments on the causal deviance task: conscience elements. The odds ratio for the conscience elements indicates that each additional judgment scored as reflecting a conscience element on the SRM-SF increases the odds that the individual will deviate from equal judgments of character by more than 6 times.

Table 6. Logistic Multiple Regression for the Prediction of Non-Normative Character Judgments on the Causal Deviance Task (Study 2)

Variable                     B       SE    Exp(B)   df      p
Step 1
  Instruction order         -.44     .90     .64     1     .62
Step 2
  Conscience                1.85     .82    6.34     1     .03
  Fundamental valuing       -.20    1.22     .82     1     .87
  Balancing perspectives   -2.97    1.70     .05     1     .08
  Internalizing              .08     .11    1.08     1     .46
  Symbolization              .29     .26    1.33     1     .28
  Instruction order        -1.42    1.29     .24     1     .27

Note. Nagelkerke R2 = .39.

Discussion

Study 2 was designed to accomplish three related tasks: to examine the convergent validity between the causal deviance task and the moral dumbfounding task, to reduce the confounding of different theoretical conceptions of moral autonomy within the SRM-SF, and to determine which aspects of moral autonomy predict responses on the causal deviance task and the moral dumbfounding task. The rates of non-normative responding on the causal deviance task in Study 2 were very similar to those observed in Study 1. Overall, 27% of the participants made non-normative decisions in response to the intuitive instructions on the judgments concerning the permissibility of the protagonists' actions, whereas only 7% of participants made such decisions in response to the rational instructions; likewise, for the judgments regarding the character of the protagonist, 9% of participants made non-normative decisions in response to the intuitive instructions, whereas only 3% made non-normative responses to the rational instructions. Two things seem relatively clear from this pattern. First, people are more willing to make non-normative judgments regarding actions than they are about character. Second, although more people respond in a non-normative fashion to the intuitive instructions than to the rational instructions, the actual rate of non-normative responding on the causal deviance task is very low. Why are people more willing to make non-normative judgments regarding the actions of protagonists than regarding their character? Hume (1740/2007) argued that, unless an action can be shown to be the direct product of a character deficiency, the person cannot be held responsible for the act: Actions are by their very nature temporary and perishing; and where they proceed not from some cause in the character and disposition of the person, who perform’d them, they infix not themselves upon him, and can neither redound to his honour, if good, nor infamy, if evil.
The action itself may be blamable; it may be contrary to all the rules of morality and religion: But the person is not responsible for it; and as it proceeded from nothing in him, that is durable or constant, and leaves nothing of that nature behind it, ’tis impossible he can, upon its account, become the object of punishment or vengeance. (p. 411) Hume argues that, although we may find specific actions abhorrent, the individual is not necessarily responsible for them unless they are the product of some enduring aspect of their character. Thus, according to Hume, it is easier to judge the quality of an action than to determine the actor's responsibility for the act. Perhaps this explains why participants are more likely to make non-normative judgments regarding actions than character: Judgments of character require more consideration than judgments of actions. And, in the case of the causally deviant scenarios, despite the intentions of the protagonist, ultimately something else intercedes to kill the intended victim. Hume's distinction between judgments of responsibility and blaming is reflected in Shaver's theory of the attribution of blame. Shaver (1985) argued that blaming is the product of a sequence of attributions. The first step in the process of ascribing blame is a causal attribution; in Shaver's theory, the cause of a behavior or event lies in its antecedent conditions. A judgment of responsibility is the product of an evaluation of the situational forces that may have influenced the actor, the actor's knowledge, and the outcome that the actor intended. According to Shaver, the assignment of blame requires a further step: an evaluation of potential mitigating circumstances. Shaver did not believe that this sequence of steps was universal; he did believe, however, that when this order is violated the likelihood of errors increases significantly.
On the causal deviance task the participants are asked to judge which actions are more blameworthy before they are asked to judge the character of the protagonist; so, if Shaver is correct regarding the effect of violating the sequence of steps in the attribution of blame, this could explain why participants are more likely to make non-normative judgments on the action questions. Another possible explanation for these differences arises from the distinct processes that are thought to underlie judgments of responsibility and blame. Miller, Burgoon, and Hall (2007) argue that, within Shaver's theory, judgments of responsibility are more cognitive, whereas attributions of blame are more affective. The finding that participants who demonstrate more conscience elements on the SRM-SF are significantly more likely to make non-normative attributions of blame on the causal deviance task suggests that a propensity to rely on affective evaluations may explain non-normative, irrational decisions, particularly when attributing blame for the protagonists' actions. As in Study 1, only a minority of participants demonstrated non-normative decision-making on the causal deviance task. The causal deviance task is based on a methodology introduced by Epstein et al. (1992), who proposed that an experimenter can evoke either rational or intuitive responses from participants simply by varying the instructions given. In accordance with the heuristics-and-biases tradition, intuitive responding was defined as deviation from a normative standard. Epstein et al. (1992) found that there was less non-normative responding on a decision-making task with rational instructions than there was with intuitive instructions. One issue of significant importance is whether we can really evoke rational or intuitive judgments simply by directing participants to make particular types of decisions.
If intuitive judgments are automatic and occur outside conscious awareness, then it would seem impossible to simply ask someone to be intuitive. Topolinski and Strack (2008) tested this logic in a series of experiments examining semantic coherence; they sought to examine whether intentionally activating the processes that are thought to facilitate automatic information-processing would affect task performance. They found that it was not possible to intentionally activate processes such as spreading activation, and that attempting to do so impairs performance on a variety of tasks. In light of these findings, one might be tempted to ask how Epstein et al. (1992) managed to evoke intuitive judgments from their participants; the answer to that question raises further doubts regarding the validity of their methodology. On closer inspection, Epstein et al. (1992) did not ask their participants to make intuitive judgments, or to go with their gut instincts; instead, they asked their participants to respond as a foolish person would respond. Clearly, asking people to behave as fools explains the increased level of non-normative responding; their participants were simply making intentionally irrational choices, and intuition played no part in their decisions. Given that the experiential system was operationalized as a fool's approach, Epstein's (2008) claim that experiential processes are not construed as suboptimal in the CEST approach rings hollow. That being said, in the causal deviance task, Pizarro et al. (2003) did ask their participants to make intuitive decisions, which is clearly less pejorative than asking them to think like fools. The crucial question for the causal deviance task is: What is the effect of asking people to make an intuitive decision?
It would appear, given Topolinski and Strack's (2008) findings, that the best-case scenario for the validity of the causal deviance task is that, in asking participants to make an intuitive judgment, the naturally occurring intuitive processes are undermined and thus error rates increase. However, it is also possible that people make non-normative decisions in response to the intuitive instructions because, when contrasted with instructions to be rational, intuitive connotes irrational. Perhaps being asked to make intuitive judgments is perceived by some participants as akin to being asked to be foolish. And this can have significant consequences for performance; after all, Bry, Follenfant, and Meyer (2008) demonstrated that priming negative stereotypes, like the dumb-blonde stereotype, can significantly impair cognitive performance. If people do perceive being intuitive as being irrational or error-prone, then it is possible that the instructions on the causal deviance task either directly or indirectly evoke errors from participants. That, of course, is an empirical question that should be addressed in future research. As predicted, participants' responses on the causal deviance task were not related to responses on the moral dumbfounding task. At first glance this might be seen to represent a significant problem for these measures of moral intuition: Two prominent measures of moral intuition currently being employed to study the construct are unrelated. However, given that the causal deviance task defines moral intuition as non-normative decision-making and the moral dumbfounding task measures moral intuition in the metric of commitment to a judgment, it is really no surprise that the scores from these two measures are unrelated. Moreover, from a social intuitionist perspective, we should not expect these measures to be in agreement. Recall that the social intuitionist model of moral judgment is a domain-specific approach.
Haidt and Bjorklund (2008a) argue that all moral intuitions can be derived from five basic domains of moral intuition: respect for hierarchies, aversion to the suffering of others, reciprocity and fairness, purity, and in-group loyalty. The sibling incest story taps the domain of purity, whereas the causal deviance task taps the domain of aversion to the suffering of others. Since the two measures tap different domains of moral intuition, a social intuitionist would not expect them to be highly related. What is troubling is that neither the moral dumbfounding task nor the causal deviance task appears to actually measure moral intuition. At the heart of the lack of agreement between the moral dumbfounding measure and the causal deviance measure is the operational definition of moral intuition. Topolinski and Strack (2008) identified four central features of intuition that are studied by cognitive scientists: first, intuitions operate outside of conscious awareness; second, intuitions are fast and efficient decision-making processes; third, intuitions are affectively imbued; and fourth, intuitions are automatic; they are not intentionally employed. These four features are at the forefront of Epstein's (2008) and Haidt's (2001) discussions of moral intuition, but they are clearly lacking in their operationalizations of the construct. Although it would be unreasonable to suggest that a measure of moral intuition must include all of these features, it would seem that such a measure should tap at least one of them. Neither the causal deviance task nor the moral dumbfounding task explicitly captures any of these four features.
The causal deviance task appears to measure non-normative responding, not moral intuition; and participants completing the moral dumbfounding task may or may not be making moral judgments, but the strength of conviction in a belief is not an index by which one can discriminate intuition from rational reflection. It would appear that what the study of moral intuition lacks is an operationalization of intuition that captures one of the central features of the process. The lack of agreement between Aquino and Reed's (2002) measure of moral identity and Gibbs et al.'s (1992) measure of moral autonomy was surprising. Given that higher scores on the internalization and symbolization scales of Aquino and Reed's measure of moral identity are thought to represent more mature moral identities, the negative correlations with the elements of the moral typology were unexpected. Perhaps the significant negative correlations between the symbolization scale and the fundamental valuing and conscience elements of the moral typology are not really all that surprising, given the nature of the symbolization scale. The symbolization scale is constructed from a series of probes that ask participants to rate the degree to which they wear certain clothes or buy certain magazines to convey to others the idea that they manifest important moral personality characteristics. At best, this would seem to be a very superficial means of tapping moral identity; at worst, the symbolization scale would seem to capture one's inauthentic attempts to appear moral to others; certainly not what one would conceive of as genuine moral maturity. Also surprising was the lack of a relation between the internalization scores and the conscience scores. Both scores are thought to convey the notion of the integration of moral principles into the individual's self-concept, yet the present data suggest the two scores are not meaningfully related.
It would appear that Aquino and Reed's operationalization of moral identity and Gibbs et al.'s operationalization of the moral typology do not generate converging notions of moral maturity. Although the potential relation between moral identity and moral intuition has yet to be meaningfully examined, a precursor to this examination must be a more careful consideration of the fundamental nature of both moral intuition and moral identity. Attempting to predict moral intuition from the measures of moral autonomy and moral identity produced mixed results. Aquino and Reed's (2002) measure of moral identity was not related to scores on the moral dumbfounding task or the causal deviance task. Following Krettenauer and Edelstein's (1999) approach, three relatively distinct conceptions of moral autonomy were used to predict moral intuition on the moral dumbfounding task and the causal deviance task. None of the three fundamental elements of moral autonomy was related to performance on the moral dumbfounding task, which was disappointing because responses on the moral dumbfounding task have some possibility of being moral intuitions. The decomposition of moral autonomy into the three fundamental elements did provide some clarity as to why, in Study 1, moral autonomy was a significant predictor of non-normative decision-making. The fundamental element of conscience was the best predictor of non-normative responding on judgments regarding the permissibility of actions and the character of the protagonists in the causal deviance task. Conscience elements on the SRM-SF are thought to capture the integration of moral values into the self-structure of the respondent (Gibbs et al., 1992).
Some conscience judgments, such as when one argues that one ought to behave in a particular manner for the sake of one's self-respect or integrity, do reflect this integration; but other judgments that can be scored as conscience elements, such as arguing that one should behave in a particular manner so that one will feel better inside, seem to focus more on the affective consequences of the judgment. Perhaps a proclivity to make these affectively driven decisions is related to non-normative decisions on the causal deviance task. People who produce more conscience elements might also be more motivated to conform to demand characteristics that ask them to be irrational. Regardless, what has been clarified by the decomposition of moral autonomy is that the more cognitive aspects of moral autonomy, balancing of perspectives and fundamental valuing, do not predict non-normative responding on the causal deviance task. With two of the project's three studies complete, several important issues have emerged. First, it is clear that there is no convergent validity between two of the more prominent measures of moral intuition currently being employed in the field. It would appear that neither the causal deviance task nor the moral dumbfounding task explicitly measures a meaningful aspect of moral intuition. Several problems have been identified with the causal deviance task in particular. Given that many theories of cognitive functioning propose that automatic, intuitive processes are the default processes in the human cognitive system, it should be troubling that only one-third of participants demonstrate so-called intuitive decision-making on the task. However, on closer inspection the low rates of deviant responding are understandable. The causal deviance task is derived from a theoretical perspective that characterizes intuitive decisions as the opposite of reasoned decisions; that is, it equates the irrational with intuition.
The causal deviance task may measure non-normative responding, or respondents' susceptibility to demand characteristics, but it is clearly not measuring moral intuition. Although people may rely on their intuition to make moral judgments on the moral dumbfounding task, there is no way to determine whether their judgments are the product of intuition or reason. The logic underlying the task, namely that there are no rational reasons for the sibling incest in Haidt's vignette to be morally wrong and, thus, that anyone who objects to the actions of the protagonists must be relying on intuition, is flawed. For the study of moral intuition to progress, we must conceive of measures that explicitly tap the central features of intuition. In order to accomplish this, it will be necessary to flesh out a more conceptually rich characterization of intuition than simply the opposite of reasoning.

Study 3

Comparing Measures of Moral Intuition

The main goal of Study 3 is to begin the process of fleshing out a more conceptually sophisticated notion of moral intuition. As discussed earlier, Haidt (2001) defines intuition as simply an automatic, unconscious, affective evaluation, whereas Pizarro et al. (2003) define intuition as a deviation from a rationally derived normative standard. Osbeck (1999) argues that these types of psychological definitions of intuition do not reflect the philosophical history of the construct. For her, a meaningful definition of intuition should reflect “direct, noninferential apprehension” (Osbeck, 1999, p. 246). According to Osbeck, intuition is akin to perception; that is, to intuit a moral principle is to perceive it directly, as self-evident, without inference. In this view, intuition is not conceived of as irrational, as it is typically operationalized in the heuristics-and-biases literature; instead, it is considered to be the starting point for the rational process.
Osbeck believes that dual-process models that portray intuitive and rational processes as being in conflict do not accurately reflect the philosophical heritage of intuition. To Osbeck, intuition should be construed as the initial understanding through which rational processes may proceed: Intuition identifies the core values upon which the rational processes may act. One should note that this definition of intuition does not entail the sentimentalism inherent in Haidt's definition. This is not to say that intuition cannot be imbued with affect; the point here is that intuition is not purely affective responding. Of course, the obvious question then becomes: What is it that we intuit? Haidt argues that we simply intuit the moral quality, the goodness or badness, of some particular person or action. As discussed earlier, it is difficult to conceive of how such simple evaluations could underlie the broad scope of judgments defined as moral judgments. A more comprehensive conceptualization of moral intuition is found in the work of MacNamara (1991), who bemoaned the antiquated notions of cognitive development prevailing in the study of moral cognition at that time. His argument was relatively straightforward: Domain-general learning mechanisms are insufficient to explain people's moral reasoning capacities, and neo-Piagetian cognitive-developmental theories of moral development underestimate the complexity of children's moral understanding. MacNamara believed that all people are born with evolved, innate moral rules, what he referred to as ideal elements, which serve to canalize moral development. Such core values would likely reflect issues such as fairness, aversion to the suffering of others, respect for authority, personal responsibility, and other elements necessary for the maintenance of an adaptive society.
MacNamara's vision is clearly reflected in the contemporary theories that adopt the approach of linguists and attempt to define the processes that underlie moral judgment in terms of an innate universal moral grammar (Hauser, 2006; Mikhail, 2007). In these models, the definition of moral intuition entails the direct, non-inferential apprehension of the core moral values. Of course, one can adopt such a definition of moral intuition without committing to the nativist back-story; Narvaez and Lapsley's (2005) moral expertise model or Narvaez et al.'s (2006) moral chronicity model could also explain the automatic application of deontological moral rules. In this way of thinking, it is still logically coherent to conceive of the mind as a dual-process system; it is simply the case that the two processes are fundamentally complementary. The core moral principles lie at the heart of System 1; they inform our intuitive judgments, and in doing so they canalize potential subsequent rational cognition. It is possible to conceive of several potential individual differences within such a system. Most apparent for my present purposes would be an individual difference in the reliance on moral intuition. Recall that Kohlberg (1984a) argued that the autonomous moral type reflects an intuitive moral sense. Although those scored as being morally autonomous lack the rational complexity to reason at a principled level, they are able to accurately intuit the core moral principles that characterize Kohlberg's classic moral dilemmas. So, I would argue that Pizarro et al. (2003) were correct in hypothesizing an individual difference in the propensity to engage in intuitive versus rational moral cognition; it was simply that their definition of moral intuition was problematic. Having laid out the framework for a more complex notion of moral intuition, the question now turns to how to go about measuring it.
This is an issue that philosophers interested in folk moral intuitions have mulled over for some time. A perusal of the literature suggests that philosophers have long argued about how lay people come to make moral decisions. Some philosophers have now decided that they and their compatriots have discussed the phenomenon sufficiently, and that the time has come to ask lay people how they come to make their moral decisions (Nahmias, Morris, Nadelhoffer, & Turner, 2005). This insight helped to shape my approach to measuring moral intuition. I have constructed two means of measuring moral intuition: one a production measure of moral judgment and the other a cognitive-style battery. The ultimate goal of this project is to expand our understanding of the cognitive processes that characterize people's everyday moral decision-making. In order to accomplish this goal, it is important to employ a measure of moral judgment that is high in ecological validity. If we want to measure how people make decisions in their real lives, we either need to conduct field experiments or we need to operationalize our constructs with measures that will generalize to real-world situations. It is very interesting that the intuitive turn in the study of moral cognition has been attributed to a perceived need to develop a more accurate representation of how people make everyday moral decisions (Haidt, 2001), yet the means by which many of the researchers in this movement operationalize moral judgment bear little relation to the real world. Very few people are called upon to judge the permissibility of the actions of mutually consenting incestuous siblings; so, the moral dumbfounding task represents a moral judgment that one could possibly face, but one that is very improbable.
The same can be said for the trolley dilemma, the footbridge dilemma, and catastrophe dilemmas; thankfully, very few of us will ever need to decide whether we ought to push a large man in front of a train to save five other people. That being said, the foregoing moral intuition methodologies are at least possible; the causal deviance task, on the other hand, is best described as absurd. Consider the comparison of the causally normal vignette, in which Dirk throws a knife at Nathan and kills him, with the causally deviant scenario, in which Zeke throws a knife at Allan, but Allan sees the knife coming and has a heart attack and dies before the knife can strike him. In discussing this scenario with colleagues, the same question emerges time and again: is that even possible? Could Allan see the knife, have a heart attack, and fall out of the way before the knife strikes him? If the knife is being thrown from a sufficient distance to allow this process to be possible, one must ask why Allan wouldn’t just move to avoid the knife. The scenario is absurd, and the decision that the participant must make between Dirk and Zeke is, to be kind, extremely unlikely to arise in the real world. The appropriateness of using hypothetical dilemmas to measure moral cognition has been the subject of much disagreement in moral psychology. Kohlberg’s highly successful and influential approach to measuring moral reasoning was the Moral Judgment Interview (MJI). The MJI measures moral reasoning complexity by having participants respond to a number of hypothetical moral dilemmas (Colby & Kohlberg, 1987). The use of hypothetical moral dilemmas, however, became a contentious issue; a number of researchers questioned whether moral reasoning on hypothetical moral dilemmas would generalize to moral reasoning in real-life situations (Baumrind, 1978; Haan, 1975). According to Walker et al.
(1995), two new types of moral dilemmas evolved out of the concerns regarding hypothetical dilemmas: actual moral dilemmas and real-life moral dilemmas. In research with actual moral dilemmas, participants are asked to make moral judgments concerning moral issues that other people have actually faced. While enhancing the generalizability of moral reasoning research, research conducted with actual moral dilemmas has also enhanced the understanding of how situational pressures influence people’s moral reasoning processes (Haan, 1975; Krebs, Vermeulen, Carpendale, & Denton, 1991). However, Walker, de Vries, and Trevethan (1987) point out that actual moral dilemmas are still generated by the researcher and, as such, there is no guarantee that participants will interpret the dilemma as a moral issue. Walker et al. (1987) proposed that having participants discuss their own real-life moral dilemmas would be a more effective approach for improving the generalizability and validity of moral reasoning research. Subsequent research conducted by Walker and his colleagues suggests that real-life moral dilemmas are significantly better predictors of actual moral behavior than are hypothetical moral dilemmas (Trevethan & Walker, 1989; Walker et al., 1987; Walker & Moran, 1991). Thus, it seems that, for a measure of moral cognition to generalize beyond the laboratory, it should reflect the real-life experiences of the participants, and it should evoke moral judgments, and not some other type of decision. The accumulated research clearly supports the notion that the best approach for collecting information regarding moral reasoning is having participants recount their own experiences; but it is less clear how this approach could be applied to moral intuition. In the present project, the focus is on non-deliberative moral decision-making; asking people to recount moral decisions from their past seems incommensurable with this goal.
The challenge, then, is to have a measure of moral decision-making in which participants make moral judgments that reflect their real-life moral decisions. With some minor revisions, the SRM-SF could serve as a useful starting point. Unlike the causal deviance task, the moral dumbfounding task, or Kohlberg’s hypothetical moral dilemmas, the probes in the SRM-SF are very general, focusing on moral values rather than on a particular context or situation. For example, in order to evoke reasoning regarding the value of honesty, the SRM-SF simply asks, “In general, how important is it for people to tell the truth?” (Gibbs et al., 1992, p. 151). The generality of the questions allows the participants to situate the moral values in the context of their own lives. Moreover, the SRM-SF contains questions regarding a number of moral values that people may face in their daily lives, including affiliation, honesty, life, property, justice, and punishment. One of the issues Walker et al. (1995) raised concerning actual moral dilemmas was the interpretation of the issue as a moral one. The SRM-SF asks participants to indicate the importance of the value in question as either very important, important, or not important. This judgment roughly corresponds to Haidt’s moral dumbfounding task and its use of commitment as a measure of moral intuition. As with Haidt’s methodology, this question is not a valid measure of moral intuition, but it does provide an index of the importance of the different moral values. However, the SRM-SF needs to be revised in order to measure moral intuition. Haidt and Bjorklund (2008a, p.
188) defined moral intuition as an evaluative feeling that occurs “without any conscious awareness of having gone through steps of search, weighing evidence, or inferring a conclusion.” The emphasis on moral intuition as a process that happens outside the awareness of the individual converges with Topolinski and Strack’s (2008) first aspect of intuition (that it occurs outside of awareness) and Osbeck’s (1999) definition of intuitive cognition as the direct apperception of knowledge. The central role of awareness of deliberation in the definition of intuition suggested that it should be meaningful to ask people how much thought they put into a given decision. As such, the SRM-SF was altered to include, after each moral judgment probe, two follow-up questions that ask participants to rate on a 7-point scale how much thought they had put into the previous judgment and whether or not they had considered alternatives before coming to their decision. Although somewhat obvious, this seems to be a promising and appropriate means of tapping self-perceived cognitive effort and conscious reflection, widely recognized as central aspects of intuition. The two probes added to the SRM-SF allow for the examination of the relation between the fundamental elements of moral autonomy and the conscious deliberation of moral decisions. It will be very interesting to test whether participants’ average cognitive effort score or average alternative-answers-considered score predicts their performance on either the causal deviance task or the moral dumbfounding task. In order to distinguish between affective intuition as advocated by Haidt (2001) and principled intuition as advocated by Macnamara (1991), it was necessary to construct a second measure of moral intuition: the Moral Cognition Style Inventory (MCSI). The Moral Cognition Style Inventory is based on the Rational–Experiential Inventory (Pacini & Epstein, 1999).
The goal of constructing this measure was to tap participants’ self-reported reliance on affective intuition, principled intuition, or rational reflection across a number of moral situations or tasks that reflect Boyd’s (1977) three types of moral decisions: those that concern evaluations of good versus bad, those that concern evaluations of right versus wrong, and those that concern praising and blaming others. In particular, the questions focus on interpersonal conflict resolution, understanding one’s obligations and duties, evaluating the moral character of others, evaluating behaviors, and understanding the issues that underlie moral conflicts. The expectation is that individuals will demonstrate differences in their preferences for engaging in affective intuition, principled intuition, or rational reflection. Although it is possible that individuals may demonstrate task-specific preferences (for example, relying on affective intuition when resolving interpersonal conflicts but engaging in rational reflection when judging actions), there are no theoretical reasons to expect a predictable, systematic pattern of results based on the content of the dilemmas. Data collected with the SRM-SF will allow for the examination of the relation between the constituent elements of the moral typology (balancing perspectives, conscience, and fundamental valuing) and the moral cognition styles measured with the Moral Cognition Style Inventory (affective intuition, principled intuition, and rational reflection). Given the multifaceted definition of moral intuition employed in this study, it is possible that the elements of the moral typology will be meaningfully related to the different aspects of moral intuition. It seems very likely that the conscience element will be related to affective intuition, fundamental valuing will be related to principled intuition, and balancing perspectives will be related to a preference for reasoning.
The fundamental valuing element of the moral typology has not been related to intuitive judgment processes in the previous two studies in this project; it is possible that this is because fundamental valuing reflects an appreciation of fundamental moral principles, a notion that was lacking in the moral dumbfounding and causal deviance methodologies. With respect to the conscious deliberation probes added to the SRM-SF, if the moral cognition styles do accurately represent decision-making styles, then there should be meaningful differences in the amount of deliberation related to the different moral cognition styles. By definition, affective intuition and principled intuition scores should predict less cognitive effort and fewer alternative answers considered in making moral decisions. On the other hand, scores on the reasoning moral cognition scale should predict more cognitive effort and the consideration of more alternatives when making moral decisions. Study 3 also represents a bridge between the existing measures of moral intuition (examined in Study 2) and the new measures intended to capture a more sophisticated notion of moral intuition. Although Study 2 provided compelling evidence that there is little convergent validity between the moral dumbfounding and causal deviance measures of moral intuition, there were some questions concerning the data that deserve subsequent treatment. The data generated with the causal deviance measure in Study 2 suggested that perhaps the participants were simply deviating from normative standards, making errors in their reasoning, in response to the instructions to make intuitive decisions. If this is the case, then it is possible that the causal deviance task is simply measuring people’s propensity to follow instructions and to interpret the word “intuitive” to mean error-prone. Thus, one goal of Study 3 is to examine the effects of the instructions on participants’ decision-making strategies.
Here, two versions of the causal deviance task are compared: one version with explicit instructions to make intuitive decisions, the other with instructions to be accurate. One of the difficulties in assessing this measure in Study 1 and Study 2 was the within-subjects design; that is, each participant responded to both intuitive and rational instructions. In Study 3, the effect of instructions will be examined between subjects: one group will receive instructions to be intuitive, the other group will receive instructions to be accurate. It is anticipated that the group that receives the intuitive instructions will demonstrate significantly more non-normative responding on the causal deviance task. Another goal of Study 3 is to test the logic that underlies the moral dumbfounding task. In the past, Haidt (2001) has explained the moral dumbfounding methodology as the evaluation of a clear violation of a moral standard where no obvious physical or psychological harm comes to the protagonists. So, by this logic, the participants’ response to the moral dumbfounding task must be an intuition because, given the safeguards taken, there is no rational reason for them to object to the siblings’ behavior. But if there are no safeguards, do participants respond differently? Of course, this is an empirical question. In Study 3, two versions of Haidt’s (2001) sibling incest moral dumbfounding task will be compared: one will retain the original version of the task, the other will eliminate the safeguards of birth control, protection from sexually transmitted disease, the description of the sex act as an isolated event, and the promise to keep the occurrence private. If there are no differences in responses to the two versions of the task presented in Study 3, then we will have to question whether or not Haidt’s characterization of the task is accurate or meaningful.
Of course, it won’t really be clear from our findings whether participants simply ignore the safeguards that minimize the potential sources of harm that could result from sibling incest or whether participants are simply making deontological judgments regarding the wrongfulness of sibling incest regardless of its consequences. If there are significant differences in the responses to the two versions of the task, then I will certainly need to explore how teleological moral judgments relate to notions of moral intuition. Perhaps this notion is somewhat premature, but conceptually it would seem coherent that principled intuition should relate to deontological judgments whereas affective intuition should relate to teleological judgments. The first set of research questions for Study 3 revolves around the measures of moral intuition that were compared in Study 2. Since the logic that underlies the moral dumbfounding task defines responses on the task as intuitions because there is no rational reason to object to the siblings’ incestuous behaviors, I want to explore the effects of removing the safeguards. If there are potential negative consequences (so-called reasons for the behaviors to be construed as wrong), will participants’ judgments differ from the case when there are no consequences? Thus, participants’ responses on two versions of the moral dumbfounding task will be compared: a safe version and an unsafe version. The premise is that the potential consequences of the behaviors will not have a significant influence on judgments, and thus there is no expectation of any noticeable differences between the scores on the two measures. Two versions of the causal deviance task will also be compared to determine the effects of the instructions on non-normative responding. Unlike the previous studies in this project, Study 3 will involve a between-subjects test.
One group will complete a version of the causal deviance task that asks them to make intuitive decisions; the other group will complete a version of the task that asks them to make accurate decisions. It is anticipated that significantly more participants will make non-normative judgments when prompted to be intuitive. Given the collection of data for both the causal deviance task and the moral dumbfounding task, the convergent validity of these measures of moral intuition will again be assessed. It is anticipated that the findings from Study 2 will be replicated, with no relation being found between scores on the moral dumbfounding task and the causal deviance task. The next set of research questions focuses on the examination of the new measures of moral intuition being introduced in Study 3. This analysis will examine the relations between the fundamental elements of moral autonomy from the SRM-SF, the cognitive styles measured on the MCSI, and the new cognitive deliberation probes from the SRM-SF. It is hypothesized that the affective intuition score on the MCSI will be significantly related to the conscience element of the moral typology; the principled intuition score on the MCSI will be significantly related to the fundamental valuing element of the moral typology; and the reasoning score from the MCSI will be significantly related to the balancing perspectives element of the moral typology. Given that the conscience and fundamental valuing elements from the SRM-SF are hypothesized to reflect intuitive decision-making, it is expected that those who demonstrate more instances of these elements will report less cognitive effort and fewer alternative answers considered. The final set of research questions concerns the examination of agreement between the new measures of moral intuition introduced in Study 3 and the existing measures previously examined in Study 2.
Given the lack of agreement demonstrated in Study 2, it might be tempting to abandon the moral dumbfounding and causal deviance measures altogether; however, simply dismissing the existing measures and introducing a new production measure of moral intuition will not help to resolve the fundamental lack of theoretical or methodological agreement in this area of study. In fact, it would only serve to exacerbate the situation by adding one more distinct approach to the pastiche that currently characterizes the study of moral intuition. Thus, the goal of Study 3 is to attempt to find agreement between the existing measures of moral intuition and the new production measure and cognitive-style measure of moral intuition. Multiple regression analyses will be conducted to assess the best predictors of moral intuition from the data collected in Study 3. With the addition of the two measures of cognitive deliberation, there are now five production measures of moral intuition in total: the moral dumbfounding scores, non-normative responding on the action judgment of the causal deviance task, non-normative responding on the character judgment of the causal deviance task, the cognitive effort score from the revised SRM-SF, and the alternative-answers-considered score from the revised SRM-SF. It is hypothesized that a combination of fundamental elements from the SRM-SF, the cognitive styles from the MCSI, and the cognitive deliberation scores from the SRM-SF will predict scores on the moral dumbfounding task. It has been argued that the causal deviance task does not actually reflect moral intuition, so there is no expectation that the cognitive deliberation scores or the moral cognition style scores will predict non-normative responding on the causal deviance task; however, given the results of Study 2, it is expected that the conscience scores from the SRM-SF may predict non-normative responding on the causal deviance task.
Method

Participants

The sample comprised 121 undergraduate psychology students (79 female, 42 male) with a mean age of 20.3 years (SD = 2.4). Of these participants, 65 reported having been born in Canada, 36 reported having been born in East Asia, and the remaining 20 reported having been born in South Asia, Europe, the United States, Australia, New Zealand, or the Caribbean. All participants were recruited through a university research subject pool and received course credit for their participation.

Measures

Moral Cognition Style Inventory (MCSI). The Moral Cognition Style Inventory is a 33-item self-report measure (see Appendix A) tapping individual preferences for affective intuition, principled intuition, or rational reflection. Like the REI (Pacini & Epstein, 1999) employed in Study 1, the MCSI is intended to measure self-reported cognitive styles. Unlike the REI, however, the MCSI makes the distinction between principled intuition and affective intuition. Moreover, whereas the REI asks participants to describe their general cognitive style, the questions on the MCSI pertain specifically to moral issues. The MCSI asks participants to rate how well each of the 33 statements describes them on a 7-point Likert scale from 1 (not well at all) to 7 (very well). Statements reflect each of the three proposed moral cognition styles: affective intuition (e.g., “I get a sense from people; I can usually feel whether people are good or bad”); rational reflection (e.g., “When I am in a conflict with someone, I usually try to think about the situation from the other person’s perspective before I decide on an appropriate solution”); and principled intuition (e.g., “When dealing with a problem, I am good at perceiving the dynamics of the situation”). There are 11 questions for each of the cognitive style scales.
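The internal consistency of an 11-item scale of this kind is conventionally summarized with Cronbach’s α. A minimal sketch of the computation follows; the ratings are simulated and purely hypothetical, not the study’s data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of ratings."""
    n_items = items.shape[1]
    sum_item_vars = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of scale totals
    return (n_items / (n_items - 1)) * (1 - sum_item_vars / total_var)

# Simulated 7-point ratings for one 11-item scale: a shared latent trait
# plus item-level noise, rounded and clipped to the 1-7 response range.
rng = np.random.default_rng(0)
latent = rng.normal(4, 1, size=(121, 1))
ratings = np.clip(np.round(latent + rng.normal(0, 1, size=(121, 11))), 1, 7)
print(round(cronbach_alpha(ratings), 2))
```

Because every simulated item shares the same latent trait, the resulting α is high; with real item data, α reflects how strongly the 11 items covary.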
In order to reduce demand characteristics, the MCSI items were buried within Baron-Cohen and Wheelwright’s (2004) 60-item Empathy Quotient questionnaire. A pilot study was conducted on a sample of 241 undergraduate psychology students to begin an assessment of the psychometric properties of the MCSI. Although the data are still being examined, preliminary results support the distinction between the affective intuition, principled intuition, and rational reflection scales. A factor analysis conducted on the results from the MCSI for this sample yielded a three-factor solution reflecting principled intuition, affective intuition, and rational reflection. This factor analysis did not result in the elimination of any of the original scale items. The Cronbach’s αs for the items from each of these scales demonstrated reasonable reliability: for the principled intuition scale, α = .88; for the affective intuition scale, α = .81; and for the rational reflection scale, α = .96. Although the current study’s sample size of 121 participants is relatively small for assessing the psychometric properties of a new measure, the internal consistency of the items intended to capture the three different decision-making strategies was promising: the Cronbach’s αs for the three scales were .70 for affective intuition, .76 for principled intuition, and .74 for rational reflection.

Sociomoral Reflection Measure–Short Form (SRM-SF). The SRM-SF (Gibbs et al., 1992) is a pencil-and-paper production task designed to measure moral maturity within the context of the first four stages of Kohlberg’s stage model. The format of the SRM-SF is well suited to group administration. The SRM-SF consists of 11 brief contextual statements based on the moral norms from Colby and Kohlberg’s (1987) scoring manual.
The values reflected in these statements emphasize truth, contract, the value of life, property and law, affiliation, and legal justice (Basinger et al., 1995). For example, the truth scenario asks participants to respond to the question, “In general, how important is it for people to tell the truth?” and the value-of-life question asks the participants to consider, “Let’s say a friend of yours needs help and may even die, and you’re the only person who can save him or her. How important is it for a person (without losing his or her own life) to save the life of a friend?” For each statement, participants must first indicate whether the value in the statement is “very important,” “important,” or “not important,” and then they are asked to justify their evaluations. The SRM-SF coding scheme also allows for the scoring of Kohlberg’s moral typology. Gibbs et al.’s scoring scheme measures moral type by identifying moral judgments that exhibit (a) balanced perspective-taking, (b) fundamental valuing, and (c) aspects of conscience.
Balanced perspective-taking is characterized by taking into consideration the perspective of others; for example, responses to the truth scenario would be scored as demonstrating balanced perspective-taking if they argued, “you should treat others the way you would want them to treat you.” Fundamental valuing is reflected in moral decisions that are based on universal principles that generalize beyond the present circumstances to include all people, such as, “promises are precious or priceless.” Conscience elements are characterized by judgments that have a strong affective motivation; for example, “you would feel rotten, terrible, ashamed, bad about yourself, or guilty, or you could have emotional problems or become depressed,” or “for the sake of self-respect, or one’s integrity, dignity, honor, consistency, or sense of self-worth.” To be categorized as the autonomous moral type, an individual must demonstrate at least two of these three criteria in their protocols; otherwise, they are scored as the heteronomous moral type. In this version of the SRM-SF, two probes were added that are intended to measure the participants’ self-reported reflection prior to their decision. For the first probe, participants were asked to indicate on a 7-point scale how difficult it was to make the judgment (1 = It just came to mind; 7 = I had to think really hard). For the second probe, participants were asked to indicate on a 7-point scale (1 = none; 7 = many alternatives) how many alternative answers they considered in making their moral judgment. For both the cognitive-effort probe and the alternatives-considered probe, participants’ average score across the 11 moral judgments on the SRM-SF was used for data analyses. The SRM-SF has demonstrated acceptable test-retest and split-half reliability (Gibbs et al., 1992). The coding scheme for the SRM-SF has demonstrated high interrater reliability even with inexperienced coders (r = .94; Gibbs et al., 1992).
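The typology rule described above (at least two of the three fundamental elements present yields a classification of autonomous) and the averaging of the two new deliberation probes can be sketched as follows. The element counts and probe ratings are invented for illustration only.

```python
# Hypothetical coded protocol: counts of each fundamental element across the
# 11 SRM-SF responses, plus the two added 7-point deliberation probes.
protocol = {
    "balanced_perspectives": 3,  # instances coded in the 11 responses
    "fundamental_valuing": 0,
    "conscience": 2,
    "effort_ratings": [2, 1, 3, 1, 2, 4, 1, 2, 1, 3, 2],
    "alternatives_ratings": [1, 1, 2, 1, 1, 3, 1, 2, 1, 2, 1],
}

# Autonomous if at least two of the three element types appear in the protocol.
elements = ("balanced_perspectives", "fundamental_valuing", "conscience")
criteria_met = sum(protocol[e] > 0 for e in elements)
moral_type = "autonomous" if criteria_met >= 2 else "heteronomous"

# Average probe scores across the 11 moral judgments, as used in the analyses.
mean_effort = sum(protocol["effort_ratings"]) / 11
mean_alternatives = sum(protocol["alternatives_ratings"]) / 11
print(moral_type, round(mean_effort, 2), round(mean_alternatives, 2))
```

This hypothetical protocol shows two of the three elements, so it would be scored autonomous, with low average deliberation on both probes.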
Basinger et al. (1995) found high concurrent validity between the SRM-SF and the MJI (Colby & Kohlberg, 1987). In the present study, interrater reliability for each of the fundamental elements of moral autonomy, based on a random sample of 30 questionnaire packages, was κ = .90 for conscience, κ = .91 for fundamental valuing, and κ = .70 for balancing of perspectives. Fundamental element scores that will be used for the data analyses represent the total number of instances of each type of element in the participants’ responses across the 11 questions in the measure.

Moral Dumbfounding. This measure was adapted from Haidt’s (2001) moral dumbfounding methodology. Participants were asked to make a moral judgment regarding the permissibility of the sibling incest story. They were asked to indicate on a 7-point scale whether they believed that it was okay for the siblings to engage in sexual intercourse (1 = strongly agree, it was okay; 7 = strongly disagree, it was wrong). As noted earlier, two versions of this task were employed: the original version from Haidt (2001) and a new version with the safeguards of contraception, protection from sexually transmitted diseases, secrecy, and an isolated occurrence removed.

Causal Deviance Measure. Participants were presented with two sets of hypothetical moral vignettes constructed by Pizarro et al. (2003) designed to evoke either rational or intuitive judgment processes. Each vignette contrasted two immoral episodes: in the first (the causally normal story), the protagonist intends some immoral act (e.g., killing a spouse), acts on this intent, and carries out the immoral act, achieving the goal; in the second (the causally deviant story), the protagonist intends some immoral goal and acts so as to bring it about, but something else intercedes to directly achieve the protagonist’s original goal.
For each story, participants are asked to rate which actor’s actions were more morally wrong and which actor is of worse moral character. The scale ranges from (1) A is much worse than B, to (2) A is a little worse than B, to (3) A is equal to B, to (4) B is a little worse than A, and finally to (5) B is much worse than A. According to Pizarro et al. (2003), any deviation from the two actors being rated as equally bad is considered an instance of intuitive judgment. The attenuation of moral responsibility in these causally deviant vignettes is considered to be a deviation from the normative standard that moral responsibility should be determined by intentions and actions, rather than simply by consequences. So, in this methodology, intuitive judgment is defined as a deviation from a rationally determined normative standard. Two versions of this task were employed here and administered between subjects: one with instructions to make intuitive judgments, the other with instructions to make accurate decisions.

Procedure

For this study, participants completed the questionnaire packages in groups of up to ten. All participants completed the questionnaires within 2 hours.

Results

The analysis of the data will follow the pattern set out in the statement of the research hypotheses. Thus, the analysis will begin by examining the moral dumbfounding task and the causal deviance task. When comparing the scores on the two versions of the moral dumbfounding task, an independent-samples t-test indicated that there was no difference between the scores on the safe version of the task (M = 5.63, SD = 1.75) and the unsafe version (M = 5.88, SD = 1.79), t(119) = -.76, p = .45, d = .14. This finding suggests that the safeguards against harm in the original version of the moral dumbfounding task have no meaningful influence on the participants’ responses.
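The t statistic and Cohen’s d can be approximately reconstructed from the reported summary statistics alone. In the sketch below, the group sizes of 61 and 60 are an assumption (only the total N of 121 is reported), so the result will be close to, but not exactly, the reported t(119) = -.76.

```python
from math import sqrt

# Summary statistics reported in the text; the 61/60 split is assumed.
m1, sd1, n1 = 5.63, 1.75, 61  # safe version of the moral dumbfounding task
m2, sd2, n2 = 5.88, 1.79, 60  # unsafe version

# Pooled standard deviation and independent-samples t statistic.
sp = sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
t = (m1 - m2) / (sp * sqrt(1 / n1 + 1 / n2))

# Cohen's d as the standardized mean difference.
d = abs(m1 - m2) / sp
print(round(t, 2), round(d, 2))
```

The small t and the d of about .14 confirm the interpretation in the text: a quarter-point difference on a 7-point scale, against a pooled SD near 1.8, is a trivial effect.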
The rationale for the moral dumbfounding measure is that judgments on the task must necessarily be intuitions because there is no rational reason for the behaviors to be judged as wrong when the safeguards are in place; however, there is no difference in the participants’ responses when there are many rational reasons to judge the behaviors as wrong. This does not preclude the possibility that the participants are making moral intuitions in response to both versions of the task; it simply demonstrates that the inclusion of the safeguards does not significantly influence the participants’ responses and, as such, the safeguards do not necessarily evoke moral intuitions exclusively. Since there is no difference between the groups, subsequent analyses will collapse across the moral dumbfounding conditions. The rates of non-normative responding on the causal deviance task were much lower in Study 3 than they were in Study 1 or Study 2. It is very likely that this decline in overall non-normative responding is a consequence of the between-subjects design. In both Studies 1 and 2 there were order effects in which the participants who were presented with intuitive instructions first also made non-normative responses to the subsequent instructions to make rational choices. With the current between-subjects design, the influence of intuitive instructions cannot inflate non-normative responding on the accurate-instructions version of the task; thus, overall non-normative responding has decreased. Figure 3 depicts the percentage of participants who responded non-normatively to the action or character judgments.

Figure 3. Percentage of participants responding deviantly in judging actions and character (Study 3), plotted separately for the accurate-instructions and intuitive-instructions conditions.

Of the 11 participants who deviated on the action judgment, 10 deviated on both of their action judgments (the knife-throwing vignette and the food-poisoning vignette). All three participants who made non-normative character judgments deviated on both of their character judgments. As such, for the purpose of data analysis, the two judgments made by each participant will be collapsed: each participant will be scored as either 0 (no deviation on the action or character judgment) or 1 (a deviation from the normative standard on either an action or character judgment). A chi-squared test was conducted to determine whether participants presented with instructions to make intuitive decisions would be more likely than those instructed to make accurate decisions to make non-normative responses for the action judgments on the causal deviance task. The test determined that those who were presented with instructions to make intuitive decisions were more likely to make non-normative responses; the size of the effect was small, though still statistically significant, χ2(1, N = 121) = 3.87, p = .049, φ = .18. Another chi-squared test was conducted to determine whether participants presented with instructions to make intuitive decisions would be more likely to make non-normative responses for the character judgments on the causal deviance task than those with instructions to make accurate decisions. The test determined that the type of instructions presented to participants had no effect on the rates of non-normative responding for the character judgments on the causal deviance task, χ2(1, N = 121) = 0.61, p = .44, φ = .07. Thus, the type of instructions presented to the participants does seem to have a weak effect on the rate of non-normative responding for the action judgments on the causal deviance task, but not for the character judgments.
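The chi-squared test and the φ coefficient used above can be sketched from a 2 × 2 contingency table. The observed counts below are hypothetical (the individual cell counts are not reported), so the resulting χ2 will not match the reported 3.87; the computation pattern, including φ = √(χ2/N), is what the sketch illustrates.

```python
# Hypothetical 2 x 2 table of action-judgment responses (counts invented):
#                     deviated  normative
intuitive_group = [9, 52]
accurate_group = [2, 58]
table = [intuitive_group, accurate_group]

n = sum(sum(row) for row in table)
row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]

# Pearson chi-squared: sum of (observed - expected)^2 / expected.
chi2 = sum(
    (table[i][j] - row_totals[i] * col_totals[j] / n) ** 2
    / (row_totals[i] * col_totals[j] / n)
    for i in range(2)
    for j in range(2)
)
phi = (chi2 / n) ** 0.5  # effect size for a 2 x 2 table
print(round(chi2, 2), round(phi, 2))
```

With any 2 × 2 split of the 11 deviators across conditions, φ stays small, which is consistent with the weak effect of instructions reported in the text.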
This suggests that judgments regarding the permissibility of actions are more susceptible to the influence of instructions than are judgments regarding the character of the protagonists. Across each of the three studies in this project, the rates of non-normative responding have been lower on the judgments of character; a further exploration of this finding is clearly warranted.

Having examined the existing production measures of moral intuition independently, the analysis will now assess the level of agreement between the scores generated with the measures. In Study 2 there was no relation between scores from the moral dumbfounding task and scores from the causal deviance task. In Study 3, this relationship was again assessed. Table 7 presents the point-biserial correlations between moral dumbfounding scores and responses on the causal deviance task.

Table 7. Correlations between Judgments on the Moral Dumbfounding Task and the Causal Deviance Task

                                               1      2      3
1. Moral dumbfounding task                     —    -.03   -.30*
2. Causal deviance task – action judgment     .05     —     .14
3. Causal deviance task – character judgment -.16    .19     —

Note. Above the diagonal are correlations for the causal deviance task with instructions to be accurate; below the diagonal are correlations for the causal deviance task with instructions to be intuitive.
* p < .05.

The significant negative correlation between moral dumbfounding scores and non-normative character judgments, rpb(65) = -.30, p = .02, indicates that those who responded non-normatively to the character judgment in the accurate-instructions version of the causal deviance task were also likely to judge the incestuous behaviors of the siblings in the moral dumbfounding task as being less wrong. One could speculate that this response pattern reflects faulty decision-making skills; however, it is unreasonable to generalize this result because the rate of non-normative responding is so low. The individual who deviated on the character judgment with accurate instructions and rated the incestuous behaviors as being less wrong is clearly an outlier.

Table 7 also presents the point-biserial correlations between non-normative responding on the intuitive-instructions version of the causal deviance task and judgments made on the moral dumbfounding task. Given the methodological lineage of the causal deviance task, the intuitive-instructions version of the task should reflect the functioning of the experiential system and, as such, should be the crucial test of convergent validity with the moral dumbfounding task. Replicating the findings from Study 2, there are no significant relations between these two supposed measures of moral intuition. Given the results from Study 2, and their replication in Study 3, there is convincing evidence to support the contention that the causal deviance task and the moral dumbfounding task lack convergent validity.

The analysis will now turn to the relations between the fundamental elements of moral autonomy measured on the SRM-SF, the cognitive styles measured on the MCSI, and the measures of conscious cognitive deliberation added to the SRM-SF. Table 8 presents the correlations among the measures. Few of the hypothesized relations materialized. The significant, but moderate, positive correlation between affective intuition scores and principled intuition scores from the MCSI, r(119) = .49, p < .001, suggests that affective intuition and principled intuition are related, but still distinct. A similar argument can be made for the two measures of conscious cognitive deliberation that were added to the SRM-SF. Not surprisingly, the participants' responses to the questions regarding how hard they had to think to come to a decision and how many alternative answers they considered in making their decisions were also significantly related, r(119) = .62, p < .001.
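A point-biserial correlation is simply a Pearson correlation in which one variable is dichotomous. As a minimal sketch, the following computes rpb from hypothetical data; the deviation indicators and wrongness ratings below are invented for illustration and do not reproduce the reported rpb(65) = -.30.

```python
import math

def point_biserial(binary, scores):
    """Point-biserial correlation: Pearson r between a dichotomous (0/1)
    variable and a continuous variable."""
    n = len(binary)
    n1 = sum(binary)            # group coded 1 (e.g., deviated)
    n0 = n - n1                 # group coded 0 (e.g., responded normatively)
    mean1 = sum(s for b, s in zip(binary, scores) if b == 1) / n1
    mean0 = sum(s for b, s in zip(binary, scores) if b == 0) / n0
    mean = sum(scores) / n
    sd = math.sqrt(sum((s - mean) ** 2 for s in scores) / n)  # population SD
    return (mean1 - mean0) / sd * math.sqrt(n1 * n0 / n ** 2)

# Hypothetical data: 1 = non-normative character judgment, paired with
# wrongness ratings on the moral dumbfounding task.
deviated = [0, 0, 1, 0, 1, 0, 0, 1, 0, 0]
wrongness = [6, 5, 2, 6, 1, 7, 5, 3, 6, 4]
r_pb = point_biserial(deviated, wrongness)
print(round(r_pb, 2))
```

A negative r_pb in data of this shape mirrors the reported pattern: those who deviated rated the behaviors as less wrong.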
This relation is strong, but not perfect; so it is possible that cognitive effort and the consideration of alternatives will be independently related to other measures of moral intuition.

Table 8. Correlations between Moral Autonomy, Cognitive Style, and Deliberation

                                      2      3      4      5      6      7      8
Moral autonomy
1. Conscience (SRM-SF)               .15    .17    .03   -.01   -.05   -.15   -.01
2. Fundamental valuing (SRM-SF)       —   .48***   .07    .10   -.04    .06   -.13
3. Balancing perspectives (SRM-SF)           —     .09   -.01    .11    .09   -.02
Cognitive style
4. Affective intuition (MCSI)                       —   .49***   .09   -.08   -.13
5. Principled intuition (MCSI)                              —    .05  -.24**  -.17
6. Rational reflection (MCSI)                                     —     .02   -.09
Deliberation
7. Alternatives considered (SRM-SF)                                      —   .62***
8. Cognitive effort (SRM-SF)                                                    —

* p < .05. ** p < .01. *** p < .001.

The significant negative correlation between the principled intuition scores from the MCSI and the alternatives considered score from the SRM-SF, r(119) = -.24, p = .007, indicates that those who reported that they typically just knew the right thing to do also reported considering fewer alternatives when making moral judgments on the SRM-SF. There was also a significant positive correlation between the fundamental valuing element scores and the balancing of perspectives element scores from the SRM-SF, r(119) = .49, p = .001. If, as hypothesized, fundamental valuing elements are related to intuitive processes and balancing perspectives elements are related to deliberative reasoning processes, this relation may reflect the relation between intuition and reason theorized by Osbeck (1999).

A number of hypotheses were not supported by the data. The first set of hypotheses concerned the relation between the elements of moral autonomy from the SRM-SF and the cognitive style scales from the MCSI.
Given the affective nature of the conscience element of the SRM-SF, it was predicted that it would be related to the affective intuition scale from the MCSI; however, the data indicated that the two measures were unrelated, r(119) = .03, p = .74. It was also hypothesized that scores from the principled intuition scale of the MCSI would be related to the fundamental valuing scores from the SRM-SF; again, this relation was not revealed by the data, r(119) = .10, p = .27. Since both the rational reflection scale from the MCSI and the balancing perspectives element from the SRM-SF were thought to reflect deliberate cognition, it was hypothesized that they would be related; yet again the data demonstrated no such relation, r(119) = .11, p = .23. A second set of hypotheses concerned the proposed relation between the conscience and fundamental valuing elements of moral autonomy and the cognitive deliberation probes added to the SRM-SF. It was predicted that conscience element scores would be significantly related to the cognitive deliberation scores because it was thought that the conscience elements reflected affective intuition. However, the conscience element of moral autonomy was not related to cognitive effort, r(119) = -.01, p = .91; nor was it related to the consideration of alternative answers, r(119) = -.15, p = .10. Similarly, it was proposed that the fundamental valuing elements of moral autonomy reflect principled intuition and thus would be related to the cognitive deliberation scores. However, fundamental valuing scores were not related to cognitive effort scores, r(119) = -.13, p = .15; nor were they related to the consideration of alternative answers, r(119) = .06, p = .51. Thus, taken together, the elements of moral autonomy from the SRM-SF were not related to the cognitive deliberation probes added to the SRM-SF, nor were they related to the cognitive styles measured with the MCSI.
The next step in the data analysis was to attempt to predict scores on the existing production measures of moral intuition. As the goal of this study is to begin the process of finding convergent validity between measures of moral intuition, and not simply to propose another distinct measure of moral intuition, the analysis proceeds by attempting to predict performance on the existing production measures with the new measures of cognitive deliberation and moral cognition style included as predictors. A multiple regression analysis was conducted to predict moral dumbfounding scores (see Table 9). A forced-entry method was employed with the fundamental elements of moral autonomy from the SRM-SF (conscience, fundamental valuing, and balancing perspectives), the moral cognition scales from the MCSI (principled intuition, affective intuition, and rational reflection), and the measures of cognitive deliberation that were added to the SRM-SF (cognitive effort and alternative answers considered) all entered as predictors. The analysis indicated that an equation containing the principled intuition score and the alternatives considered score predicted moral dumbfounding at a statistically significant level, F(8, 121) = 2.57, p = .01, f² = .16. That is, scoring higher on the principled intuition scale of the MCSI, and lower on the alternatives considered scale from the SRM-SF, predicted rating the incestuous behavior in the moral dumbfounding task as being more wrong.

Table 10 provides the results from a logistic regression conducted to predict non-normative responding on the action judgments of the causal deviance task. The dependent variable for the regression was a binary, categorical variable: those participants who did not deviate on the action judgments of the causal deviance task (n = 110) and those participants who deviated at least once in response to the action questions posed in the causal deviance task (n = 11).
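The forced-entry procedure can be sketched with ordinary least squares. The data below are simulated: the predictor roles and the direction of the reported effects are taken from the text, but the numbers are random, so only the mechanics carry over, not the reported coefficients. Cohen's f² follows directly from R².

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: 121 participants, 8 predictors entered simultaneously
# (conscience, fundamental valuing, balancing perspectives, principled
# intuition, affective intuition, rational reflection, alternatives
# considered, cognitive effort).
n, k = 121, 8
X = rng.normal(size=(n, k))
# Outcome loosely tied to predictors 4 and 7, mimicking the reported pattern
# (higher principled intuition, fewer alternatives considered -> more wrong).
y = 0.3 * X[:, 3] - 0.25 * X[:, 6] + rng.normal(size=n)

X1 = np.column_stack([np.ones(n), X])          # add an intercept column
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)  # forced-entry OLS fit
y_hat = X1 @ beta
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

def cohens_f2(r_squared):
    """Cohen's f^2 effect size for a multiple regression."""
    return r_squared / (1 - r_squared)

print(round(r2, 3), round(cohens_f2(0.15), 3))  # f^2 implied by R^2 = .15
```

Note that f² = R²/(1 − R²) gives roughly .18 when computed from the rounded R² of .15; small differences from a reported f² can arise from rounding of the underlying R².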
The regression was conducted in two steps. In the first step, the between-subjects variable of type of instruction was entered because between-groups differences in the rates of non-normative responding had already been identified; in the second step of the analysis, the conscience, fundamental valuing, balancing perspectives, principled intuition, affective intuition, rational reflection, alternatives considered, and cognitive effort scores were added. The predictive equation derived from Step 1 of the regression was statistically significant, χ2(1, N = 121) = 4.05, p = .04, φ = .18. Thus, the type of instruction does significantly predict non-normative responding, but the question remains as to whether the inclusion of the other measures in Step 2 increases predictability. The difference in the fit of the predictive equation in moving from Step 1 to Step 2 was not statistically significant, and the size of the effect was small, χ2(8, N = 121) = 4.66, p = .79, φ = .18. The Hosmer and Lemeshow chi-square goodness-of-fit test indicated that Step 2 of the logistic regression adequately fits the data, χ2(8, N = 121) = 5.76, p = .67, φ = .22.

Table 9. Multiple Regression for the Prediction of Scores on the Moral Dumbfounding Measure (Study 3)

Variable                    B     SE     β      p
Conscience                -.02   .14   -.01   .87
Fundamental valuing        .11   .13    .08   .34
Balancing perspectives    -.02   .09   -.03   .80
Principled intuition       .12   .04    .29   .005
Affective intuition       -.05   .04   -.12   .19
Rational reflection       -.23   .23   -.09   .30
Alternatives considered   -.54   .29   -.23   .05
Cognitive effort          -.04   .30   -.02   .94

R² = .15

Table 10. Logistic Multiple Regression for the Prediction of Non-Normative Action Judgments on the Causal Deviance Task (Study 3)

Variable                    B     SE   Exp(B)   df    p
Step 1
  Instruction type         1.33   .70   3.79     1   .06
Step 2
  Instruction type         1.51   .73   4.52     1   .04
  Conscience               -.38   .36    .68     1   .28
  Fundamental valuing      -.18   .43    .83     1   .67
  Balancing perspectives   -.05   .20    .95     1   .81
  Principled intuition     -.64   .62    .53     1   .30
  Affective intuition       .80   .59   2.31     1   .17
  Rational reflection      -.30   .50    .74     1   .55
  Alternatives considered   .22   .69   1.25     1   .75
  Cognitive effort          .19   .68   1.21     1   .78

RN² = .15

Overall, the equation fits the data adequately, but much of the predictive power is derived from Step 1, the type of instructions. The odds ratios clearly indicate that those participants who received the intuitive instructions were significantly more likely to make non-normative judgments of the protagonists' actions on the causal deviance task.

A second logistic regression was planned to determine the best equation to predict non-normative responding on the character judgments from the causal deviance task. The dependent variable for the regression was to be a binary, categorical variable: those participants who did not deviate on the character judgments of the causal deviance task (n = 118) and those participants who deviated at least once in response to the character questions posed in the causal deviance task (n = 3). Given that 98% of the participants responded normatively to the character judgments, regardless of the instructions that they were given, the regression analysis was abandoned, as no meaningful predictions would be possible given the data.

Discussion

Study 3 represents an attempt to find convergent validity among measures of moral intuition. The first step in this process was to address some unresolved questions regarding the moral dumbfounding task and the causal deviance task that emerged from Study 2.
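Step 1 of the logistic regression contains a single binary predictor, in which case the regression coefficient is simply the log odds ratio between the two instruction conditions. The cell counts below are hypothetical, assumed only for this sketch (the thesis reports B = 1.33, Exp(B) = 3.79 for instruction type, and the invented counts will not reproduce those values exactly).

```python
import math

# Hypothetical counts: deviated vs. normative action judgments per condition.
dev_intuitive, norm_intuitive = 9, 51   # intuitive-instructions group
dev_accurate, norm_accurate = 2, 59     # accurate-instructions group

# With one binary predictor, the logistic-regression slope equals the log
# odds ratio, so Exp(B) is the odds ratio itself.
odds_intuitive = dev_intuitive / norm_intuitive
odds_accurate = dev_accurate / norm_accurate
b = math.log(odds_intuitive / odds_accurate)
exp_b = math.exp(b)
print(round(b, 2), round(exp_b, 2))
```

An Exp(B) well above 1 corresponds to the reported finding: intuitive instructions raise the odds of a non-normative action judgment.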
Initially, the logic that underlies the moral dumbfounding task was tested: If the task measures moral intuition because there are no rational reasons for the protagonists' behaviors to be judged as wrong, then will the moral judgments produced by participants differ when the safeguards against negative consequences are removed from the vignette? Attention then turned to the causal deviance task. For the first two studies of this project, the causal deviance task was presented as a within-subjects methodology in which every participant responded to both intuitive and rational instructions. In order to clarify the effects of the instructions on non-normative responding rates, the causal deviance task was employed as a between-subjects test in Study 3. Having examined the moral dumbfounding task and the causal deviance task independently, the next step was to replicate the examination of the convergent validity between the measures that was undertaken in Study 2.

The subsequent analyses reflect the introduction of the new measures of moral intuition. The first set of tests focused on whether the new measures of moral intuition and moral cognition style were related to the fundamental elements of moral autonomy from the SRM-SF. Attention then turned to constructing equations to predict performance on the existing production measures of moral intuition with the new measures of moral intuition and moral cognition style.

The moral dumbfounding measure is one of the best known, and most contentious, measures of moral intuition in contemporary psychology. Haidt (2001) argues that there is no rational reason for participants to object to the behaviors described in the moral dumbfounding stories and thus their moral judgments must be influenced by unconscious, affective reactions. As has been noted already, this logic is seriously flawed: it neglects the potential effect of deontological decision-making.
That is, negative consequences are not necessary for an action to be viewed as morally wrong; it simply needs to violate a moral rule. Thus, having sexual intercourse with one's sibling is wrong not because it will bring shame on one's family or create genetically inferior offspring; it is wrong because Western society has a moral norm prohibiting such behavior. As a test of this measure, the safeguards against negative consequences were removed from one version of the task and the responses to this new version were compared with responses to the original version of the sibling incest moral dumbfounding task. There was no difference between the scores from the safe and unsafe versions of the moral dumbfounding task.

What does this tell us about the measure? The results suggest that the consequences of the behavior have no meaningful impact on the participants' responses. This, of course, does not nullify Haidt's claim that the moral dumbfounding task taps moral intuition. The data, however, cannot inform us as to whether the participants were relying on moral intuition or on moral reasoning to make their judgments. The participants may well have been relying on moral intuitions in both versions of the task, but the output provided does not address this point. Strength of commitment does not reflect how a judgment was arrived at, only how strongly it is held. So the problem with the moral dumbfounding task is not whether it necessarily evokes moral intuitions, as Haidt claims, or whether it can evoke them at all; rather, the problem is that it does not produce a measure that can discriminate between intuition and reason.

Some interesting findings arose from the examinations of the causal deviance task. The overall rate of non-normative responding was considerably lower than it was in the previous two studies.
As discussed in the Results section, the between-subjects design likely had a significant impact on the decreased rate of non-normative responding in Study 3. Nevertheless, if intuitive processing is the default cognitive process in the human mind, the dearth of non-normative responding strongly suggests that the causal deviance task is not measuring moral intuition. As with the previous studies, there was more non-normative responding on the judgments concerning actions than on the judgments concerning character. However, the main reason for including the causal deviance task in Study 3 was to conduct a between-subjects test of the effects of instructions on non-normative responding. For the action judgments, participants who were presented with the intuitive instructions were significantly more likely to make non-normative judgments than were participants who were presented with the instructions to be accurate.

The results of the manipulation of the causal deviance task suggest that those who respond non-normatively on the intuitive-instructions version of the task are simply responding in an irrational manner; and perhaps this is the important point. The Oxford English Reference Dictionary defines irrational as "illogical; without reason" (Pearsall & Trumble, 1996, p. 744). Perhaps participants confuse being intuitive with being irrational; given that researchers adopting a heuristics-and-biases approach, such as Pizarro et al. (2003), also seem to conflate intuition with illogicality, should we really expect any different from the typical research participant? Given the emphasis on intuition in contemporary moral psychology, and in psychology as a whole, it seems that someone should ask ordinary people what they believe the concept of intuition entails. If, as it appears, asking people to be intuitive makes them behave irrationally, then clearly such instructions are not capturing intuition as it is construed by Osbeck (1999) or Topolinski and Strack (2008).
Given that the causal deviance task operationally defines intuition as non-normative responding (being irrational), the causal deviance task does not appear to be a particularly useful measure of moral intuition.

In Study 2 there was no evidence of agreement between the moral dumbfounding task and the causal deviance task; that finding was replicated in Study 3. Given the rationales put forth by Pizarro et al. (2003) and Haidt (2001), respectively, the intuitive-instructions version of the causal deviance task and the moral dumbfounding task should both measure moral intuitions; yet, the measures are largely unrelated. The only statistically significant result that emerged from the comparison of these two measures was the product of an outlier. Only one individual made a non-normative character judgment on the accurate-instructions version of the causal deviance task; that same individual rated the behaviors in the sibling incest story as not wrong at all. Having replicated these findings across two studies, it seems clear that there is no relation between the two measures, which, of course, is not surprising given that one measure taps one's commitment to a judgment and the other taps non-normative responding to seemingly absurd scenarios.

Very few of the hypothesized relations between the fundamental elements of moral autonomy from the SRM-SF and the cognitive subscales from the MCSI or the cognitive deliberation probes added to the SRM-SF materialized. One meaningful relation that was confirmed was the statistically significant negative correlation between scores on the principled intuition scale from the MCSI and the alternative answers considered score from the SRM-SF. It was predicted that those who indicated that they typically knew what to do would report considering fewer alternatives on a production measure of moral judgment, and this finding provides some criterion validity for the scales from the MCSI.
On the other hand, none of the predictions regarding the fundamental elements of moral autonomy from the SRM-SF were supported. In general, the predicted relations between the elements of moral autonomy from the SRM-SF and the measures of moral intuition and moral cognitive style failed to materialize. Theoretically, both Kohlberg (1984a) and Davidson and Youniss (1991) described moral autonomy as reflecting a non-deliberative approach to moral decision-making. If this is an accurate representation of the construct of moral autonomy, then the current operationalization of the construct is problematic. Although there is evidence to support the convergent validity between the SRM-SF and Kohlberg's Moral Judgment Interview (Colby & Kohlberg, 1987) in measuring moral reasoning complexity, the same cannot be said for the measurement of moral autonomy. Although in recent years several authors have employed the SRM-SF as a measure of moral autonomy, there are no published data demonstrating convergent validity between the SRM-SF and Kohlberg's measure. It is possible that the SRM-SF is not a valid measure of moral autonomy.

Future research should adopt the methodological approach employed by Krettenauer and Edelstein (1999), who had their participants complete an interview in which they responded to hypothetical moral dilemmas. The participants' responses were coded for moral autonomy elements that reflected prescriptiveness and universality. Krettenauer and Edelstein found that those demonstrating moral autonomy on their new measure were significantly more likely to engage in real-life moral behaviors.

It was not possible to construct equations to predict non-normative responding on the causal deviance task from the fundamental elements of moral autonomy, the cognitive styles from the MCSI, and the average cognitive deliberation scores from the SRM-SF.
The logistic multiple regression conducted to predict non-normative responding on the action judgments from the causal deviance task indicated that the type of instructions was the only meaningful predictor: receiving the intuitive-instructions version of the task predicted non-normative responding. It was not possible to identify any measures that predicted non-normative responding on the character judgments from the causal deviance task; given that less than 2% of the participants produced non-normative responses on the character judgments, it is not surprising that such rare occurrences could not be predicted. These findings reinforce the argument that the causal deviance task is not related to measures of moral intuition.

The regression analysis that was undertaken to predict scores on the moral dumbfounding measure provided the first suggestion of statistically significant agreement between possible measures of moral intuition observed during this project. Those scoring higher on the principled intuition scale from the MCSI, and reporting considering fewer alternatives when making their moral judgments on the SRM-SF, were likely to rate the incestuous behaviors of the siblings on the moral dumbfounding task as being more morally wrong. Although the proportion of the variability in moral dumbfounding responding explained by the equation was relatively small (R² = .15), the finding that scores derived from two different measures of moral intuition combined to predict scores on a third measure of moral intuition at a statistically significant level is itself a cause for optimism, given the lack of convergent validity demonstrated in Study 2. It is possible that the variability explained by the equation is low due in part to the nature of the task.
Contrary to Haidt's (2001) characterization of the moral dumbfounding task, it is clearly possible to rely on deliberative moral reasoning skills and still regard the incestuous behavior as very wrong. Thus, interpretation of the scores on the moral dumbfounding task, and related analyses, is clouded by the equifinality of the processes that may underlie them. It would be very interesting to employ this equation, or one derived from these scores, on a measure that more clearly captures moral intuition.

The findings that have emerged from Study 3 have the potential to make a meaningful contribution to the study of moral intuition in psychology, particularly the measurement of moral intuition. Although the causal deviance task has already been castigated here for operationally defining moral intuition as non-normative decision-making, Study 3 demonstrated that instructions to be intuitive do in fact increase non-normative responding on action judgments. However, as Topolinski and Strack's (2008) research clearly indicated that individuals cannot exercise conscious control over intuitive cognitive processes, it would appear that the only reason why instructions to be intuitive might increase non-normative responding is that some participants interpret these instructions as asking them to be irrational. Although the evidence seems very compelling that the causal deviance task is not a very good measure of moral intuition, the findings of the current project leave some questions concerning the causal deviance task unanswered. One potential avenue for future investigation is to conduct another between-subjects test of the causal deviance task. In Study 3, differences between groups given instructions to be intuitive and instructions to be accurate were noted.
If it is the case that some participants are interpreting intuitive to mean irrational, perhaps the next approach should be to compare three groups: one provided with instructions to be accurate, one provided with instructions to be intuitive, and a third given instructions to be irrational, and to look for differences among the groups.

The most significant finding that has emerged from Study 3 is the demonstration of agreement between measures of moral intuition. Given that there is no published research that demonstrates convergent validity between measures of moral intuition, this is an important finding. Indices of moral intuition from the SRM-SF and the MCSI predicted moral judgments on the moral dumbfounding task. It would appear that constructing measures that focus on the processes that define intuition has allowed for this convergence. However, it may also be the case that the demand characteristics of the new measures have actually inflated the relation between the new measures and the existing measures. Despite the fact that the questions from the MCSI were buried within a larger battery, it is possible that participants, having been asked about their general moral cognition style, might respond in kind to the cognitive deliberation probes that were added to the SRM-SF. Perhaps the best way forward would be to evaluate the new measures individually in order to eliminate any potentially spurious relations.

General Discussion

Moral intuition is one of the most vibrant areas of research in the social sciences; it is an area of study that is rife with debate and disagreement. The study of moral cognition within psychology was dominated by the cognitive-developmental approach for the last 30 years of the twentieth century.
Building on a philosophical tradition with a lineage from Kant (1785/1948) through Piaget (1932/1965), Kohlberg (1984a), and Turiel (1983), this approach emphasizes the importance of conscious, deliberate, logical reflection in the process of moral judgment-making. The cognitive-developmental perspective on moral cognition flourished within psychology and its allied fields because, unlike behaviourism, it allowed psychologists to capture the rich complexity of people's moral reasoning competency through measures such as Kohlberg's MJI (Colby & Kohlberg, 1987). However, in time, some researchers who studied moral functioning began to argue that the cognitive-developmental perspective, as embodied in Kohlberg's theory, did not reflect the everyday moral decision-making that people employ outside of the psychology laboratory (Krebs & Denton, 2005). As theoretical models of the organization and functioning of the human mind evolved in other areas of psychology and cognitive science, researchers from these other areas began to question the dated characterization of the human mind evident in the cognitive-developmental approach to moral cognition (MacNamara, 1991). By the end of the twentieth century, much of psychology and the cognitive sciences had moved away from the Piagetian notions of the mind that serve as the foundation of the cognitive-developmental perspective. It was only a matter of time until these changes would come to bear on the study of moral cognition within psychology.

The psychologist most commonly associated with the intuitive turn in the study of moral cognition is Haidt (2001, 2007, 2008). As noted in the literature review, Haidt and his colleagues have challenged the relevance of rational approaches to moral cognition; to them, moral judgment is simply the product of automatic, affective evaluations, and cognitive deliberation plays no role in the process.
And it is this denial of rational deliberation that fuels the disagreements between Haidt and the other sentimentalists, on one side, and cognitively oriented theorists, on the other. No reasonable psychologist or philosopher would deny the role of automatic or habitual processes in everyday moral functioning. In fact, more than 150 years ago Alexander (1852, p. 156) defended the study of habitual moral functioning as a central aspect of any meaningful moral science: "If we should remove from the list of moral actions all those which are prompted by habit, we should cut off the larger number of those which men [sic] have agreed in judging to be of a moral nature." However, Alexander, like many contemporary psychologists, believed that an effective moral science must encompass both automatic and deliberative moral cognition.

A central issue for the intuitive turn in moral cognition, and for this project, is the definition of intuition. During this project, the lack of consensus amongst psychologists on a definition of moral intuition has been obvious. Haidt (2001) construed intuition as an automatic, affective evaluation: a feeling that someone or something is good or bad. Yet others, such as Hauser (2006) and Nichols and Mallon (2006), have argued that a purely affective construal of moral intuition is not plausible because one needs some kind of decision rule to go from "bad" to "wrong." On the other hand, equally unsatisfying definitions of moral intuition have arisen from those who construe intuition as a cognitive process. The danger in deriving one's definition of moral intuition from the heuristics-and-biases tradition, as Sunstein (2005) and Pizarro et al. (2003) have, is that intuition is then defined as the opposite of reason. The lack of a common theoretical definition of moral intuition, and its influence on operational definitions, is readily apparent in this project.
Despite Haidt's (2007) criticism of the cognitive-developmental approach, Kohlberg (1984a) was well aware of the role of non-deliberative moral decision-making. Kohlberg proposed an intuitive moral type: an individual who was able to intuit the appropriate moral principle for a given situation, felt an inner compulsion to act in accordance with that principle, and yet was unable to provide a post-conventional rational explanation as to why the particular principle was appropriate. Kohlberg (Colby & Kohlberg, 1987) would later refer to this construct as the morally autonomous type. Moral autonomy is important not only because it refutes Haidt's (2007) claim that Kohlberg ignored intuitive moral decision-making, but also because it provides a framework for understanding moral intuition as mature moral functioning, and it suggests a conception of intuition that is based on the appreciation of moral rules and not simply on affective evaluations.

This project was inspired by Kohlberg's autonomous moral type and its potential relation to moral intuition. In the initial stages of the project, Haidt (2001) had not yet published research with the moral dumbfounding measure, so the decision was made to use the causal deviance task as a measure of moral intuition, as it had been published in a peer-reviewed journal. Study 1 examined the relation between the autonomous moral type, moral intuition as measured with the causal deviance task, and general cognitive style. Moral autonomy was not related to a general cognitive style, but it did predict intuitive responses to judgments regarding actions on the causal deviance task. Although the predicted relation between moral autonomy and moral intuition was supported by the data, a deeper consideration of the causal deviance task led to some important questions.
Reflecting an intellectual heritage that traces back to the heuristics-and-biases tradition, the causal deviance task defines moral intuition as non-normative responding. However, moral autonomy was construed by Kohlberg (1984a) as a mature form of moral functioning, not one subject to systematic errors. Thus, the results that emerged in Study 1 were somewhat difficult to explain. Given these puzzles surrounding the causal deviance task as a measure of moral intuition, the next logical step was to examine the convergent validity between this measure and another well-known measure of moral intuition, the moral dumbfounding task. Surprisingly, to date there is no published research that examines the convergent validity between these measures of moral intuition. Study 2 set out to test the convergent validity between measures of moral intuition, and to refine our conception of moral autonomy as measured on the SRM-SF. Krettenauer and Edelstein (1999) argued that the definition of moral autonomy adopted by the authors of the SRM-SF actually conflated several different theoretical accounts of moral autonomy. In order to clarify the relation between moral autonomy and moral intuition, and to eliminate this conflation, the measure of moral autonomy employed in this project was decomposed into its fundamental elements: conscience, fundamental valuing, and balancing of perspectives. The results of Study 2 raised some serious concerns for the study of moral intuition in contemporary psychology. Data generated with the causal deviance task were unrelated to data generated with the moral dumbfounding task. It is clearly problematic for a field of study when two prominent measures of the same construct demonstrate no convergent validity.
However, given that the causal deviance task operationally defines moral intuition as illogical responding to absurd vignettes, whereas the creators of the moral dumbfounding task argue speciously that all responses to their task must necessarily be intuitions yet operationally define those responses in the metric of magnitude of wrongfulness, it is perhaps little wonder that there is no agreement between these measures. Of course, moral intuition need not be purely affective or necessarily irrational. Contemporary cognitive scientists argue that automatic cognitive processes are often highly efficient and accurate (Bargh, 1994). In order to develop convergent validity between measures of moral intuition, this project adopted a definition of moral intuition that reflected both the philosophical history of the concept and current empirical approaches in other areas of cognitive science. Osbeck (1999) argued that many contemporary definitions of intuition within psychology ignore the philosophical heritage of the construct; according to Osbeck, intuition should reflect a non-deliberative understanding that complements and facilitates deliberative rational approaches. Osbeck’s definition of intuition is clearly reflected in Topolinski and Strack’s (2008) four central features of intuition as studied by contemporary cognitive scientists: first, intuitions operate outside of conscious awareness; second, intuitions are fast and efficient decision-making processes; third, intuitions are affectively imbued; and fourth, intuitions are automatic; they are not intentionally employed. What differentiates these definitions of intuition from those operationalized in the causal deviance task and the moral dumbfounding task is that the former emphasize the processes that underlie judgments, whereas the latter operationalize intuition in terms of its output.
For Study 3, this project sought to adopt a definition of moral intuition that emphasized the non-deliberative appreciation of moral rules. Working from this definition, two new potential measures of moral intuition were devised. The first measure builds on one of Topolinski and Strack’s (2008) central themes of intuition: that intuition happens outside of conscious awareness. At present, the only viable means of determining whether someone consciously deliberated before making a decision is to ask. In Study 3, two probes intended to measure conscious deliberation were added to the SRM-SF: the first asked participants to indicate how hard they had thought about the moral issue before making their judgment, and the second asked them to indicate the number of alternative answers they had considered before coming to a decision. The other new measure was intended to capture general cognitive styles, but in such a way that it would be possible to discriminate between those who report typically engaging in rational reflection, affective intuition (going with their gut feelings), or principled intuition (just seeming to know what is appropriate). Study 3 also represented an important step in our analysis of the causal deviance measure and the moral dumbfounding measure. There was some suspicion that the effects observed with the causal deviance task in the previous two studies were the result of participants interpreting the instructions to be intuitive as instructions to be irrational. To test this possibility, a between-subjects test was conducted of the effects of the instructions on non-normative responding in making judgments of action and of character. The logic that underlies the moral dumbfounding task was also tested in Study 3.
Haidt (2001) has argued that responses on the classic version of the moral dumbfounding task must necessarily be moral intuitions because there are no rational reasons for the behaviors described in the vignette to be judged as morally wrong. As argued earlier, this line of reasoning is specious: there need not be any readily apparent physical or psychological harm for an action or person to be judged as morally wrong; Haidt (2001) has confounded consequentialism and rationality. To test this logic, Study 3 compared two versions of the moral dumbfounding task: one group received the standard version of the sibling incest story; the other group received a version in which all of the safeguards against physical, psychological, and social harm were removed. By Haidt’s (2001) logic, responses on the unsafe version of the moral dumbfounding task are not necessarily moral intuitions. The question tested in Study 3 was whether responses would differ between the two versions of the task. There were no differences between the responses to the two versions of the moral dumbfounding task. Although this does not preclude the possibility that people make intuitive judgments when responding to the moral dumbfounding task, it does suggest that the logic underlying the measure is, in fact, flawed. As in the previous two studies, participants who were asked to make intuitive judgments on the causal deviance task were more likely to make non-normative judgments regarding the actions of the protagonists in the stories. More importantly, however, the finding of no convergent validity between the causal deviance task and the moral dumbfounding task that was established in Study 2 was replicated in Study 3.
Lending further credence to the claim that the causal deviance task is not a valid measure of moral intuition, both attempts to predict non-normative responding on the causal deviance task with measures of the fundamental elements of moral autonomy, cognitive deliberation scores, and moral cognition styles were unsuccessful. The prediction of moral dumbfounding scores in Study 3 marked the first demonstration of statistically significant agreement between measures of moral intuition in this project. Indices of moral intuition from the SRM-SF and the MCSI predicted moral judgments on the moral dumbfounding task. Thus, despite the lack of agreement between the moral dumbfounding measure and the causal deviance task, there does seem to be hope of finding some convergent validity between measures of moral intuition, which suggests that researchers may be able to come to some agreement on the features that constitute intuition, if not a general definition of the construct. The most important intellectual contributions of this project relate to the measurement of moral intuition. Given the level of rancor evident in the writings of theorists from the different camps in the moral intuition debate, it is astonishing that no research had been conducted to determine whether the various camps were actually measuring the same construct. The data clearly indicate that the causal deviance task and the moral dumbfounding task are not measuring the same construct. And it is becoming increasingly evident that non-normative responding on the causal deviance task is not a valid operationalization of moral intuition. On the other hand, by constructing measures of moral intuition that focus on the processes that underlie intuition, we have been able to establish some evidence of convergent validity among measures of moral intuition.
There is much that researchers interested in moral intuition can learn from the cognitive scientists who have been studying non-deliberative cognition in other topic areas. Andersen, Moskowitz, Blair, and Nosek (2007) set out a framework for conceptualizing the four basic processes that underlie automatic social thought: availability, accessibility, applicability, and self-regulation. Each of these basic processes has been the subject of rigorous experimentation by researchers interested in social cognition. Narvaez et al. (2006) have applied the theoretical and methodological knowledge from the study of knowledge accessibility to the study of automatic moral cognition. The accumulated research provides strong support for the idea that frequently activated constructs become chronically accessible, thus increasing the automaticity of related cognition (Bargh, 1994). Narvaez et al. (2006) have applied the concept of chronic accessibility to explain the development of moral identity or moral personality. Thus, individuals who experience frequent activation of their moral concepts or schemas are more likely to demonstrate chronically accessible moral schemas. In their research, Narvaez et al. (2006) demonstrated clear differences between those with highly accessible moral schemas and those with less accessible moral schemas on tasks that focused on the processing of morally relevant information. Narvaez et al. have thus successfully applied the theory and methods of an established automatic cognition research tradition to enhance our understanding of automatic moral cognition. Their research also illustrates how automatic cognition can influence many aspects of information processing beyond moral judgment-making.
One of the potential benefits of the new synthesis, as Haidt (2008) refers to it, is that when researchers from other traditions converge on the study of moral intuition they will bring with them methodological approaches that have been successful in other areas of study. Although there is much to learn from those who specialize in studying intuition, the study of moral intuition also faces methodological issues long familiar to those who have studied moral cognition in the past, in particular the types of scenarios employed to measure moral functioning. The bulk of the research that has examined moral intuition to date has employed scenarios so contrived that they verge on the absurd. Even if we exclude the causal deviance task from this discussion, there is good reason to be concerned about the generalizability of results obtained from the trolley dilemma, the footbridge dilemma, or the various catastrophe dilemmas. A strong argument can be made that if we are interested in understanding how and why people make moral decisions in their everyday lives, then the measures we employ should reflect real life. Given the research on real-life moral dilemmas, the ideal approach would seem to be an interview setting in which individuals recount their own moral judgments (Walker et al., 1995), an approach that is clearly the antithesis of the methodologies currently employed in the study of moral intuition. The challenge, then, is to operationalize the processes that are thought to underlie moral intuition, but to do so in the context of participants’ idiosyncratic moral experiences. The inspiration for this project was the potential relation between moral autonomy and moral intuition. That hypothesized relation has not been clearly demonstrated by this project.
In Study 1, participants who were scored as the autonomous moral type were more likely to make non-normative action judgments on the causal deviance task than were participants scored as the heteronomous moral type. Although the hypothesized relation was supported by the data, the questionable validity of the causal deviance task created some concerns. If moral autonomy is an indicator of moral maturity, then why did it predict seemingly irrational decision-making? One possible explanation came from Krettenauer and Edelstein (1999), who argued that the operationalization of moral autonomy on the SRM-SF conflated several distinct theoretical conceptions of moral autonomy. However, the scoring structure of the SRM-SF made it possible to decompose the moral autonomy scores into three relatively homogeneous scales: conscience, fundamental valuing, and balancing of perspectives. The results from Study 2 indicated that the conscience element of moral autonomy was significantly related to non-normative responding on the causal deviance task, whereas fundamental valuing and balancing of perspectives scores were unrelated to non-normative responding. Conscience elements are thought to reflect the integration of moral principles into the self structure (Gibbs et al., 1992); but they also reflect affective intuitions, which might explain why they were significantly related to irrational decision-making. In Study 3, conscience scores were also found to be negatively related to the number of alternative answers that participants reported considering in making their decisions on the SRM-SF. For Study 3, it was predicted that the fundamental valuing elements of moral autonomy would be related to principled forms of intuition. The fundamental valuing elements reflect the use of universal moral rules, a distinctly deontological approach to moral decision-making.
Fundamental valuing was negatively related to cognitive effort, as we should expect if it is related to intuitive decision-making. However, fundamental valuing was positively related to the consideration of more alternative answers when making moral judgments. It is not entirely clear why fundamental valuing bears this interesting relation to measures of cognitive deliberation. If we consider these results in light of Osbeck’s (1999) conception of intuition, we might speculate that a reliance on universal moral rules simplifies the decision-making process, thus reducing cognitive effort; however, the general moral rule might also serve as a starting point for further deliberation, thus increasing the consideration of alternative answers. The relation between moral intuition and moral autonomy is far from resolved. The decomposition of moral autonomy into its fundamental elements does seem to have clarified the relation between moral autonomy and different aspects of moral intuition. The next step in examining this relation will likely require the adoption of a different measure of moral autonomy. Krettenauer and Edelstein (1999) were able to significantly predict real-life moral behavior from measures of moral intuition derived from Kohlberg’s MJI. Instead of applying the standard scoring criteria for moral autonomy on the MJI, Krettenauer and Edelstein decomposed moral autonomy into measures of prescriptiveness and universality. Perhaps future research on the relation between moral intuition and moral autonomy could employ an interview format so that participants’ real-life moral decisions can be coded with Krettenauer and Edelstein’s criteria for measuring moral autonomy. Of course, this project has not been without its limitations. It is possible that the relations between the measures of moral intuition were inflated by demand characteristics.
Particularly in Study 3, participants were asked repeatedly how hard they had thought about the moral decisions they had just made on one measure, and then whether they just knew things or just felt things on another measure. Although the new measures were embedded among unrelated questionnaires, the entire battery concerned social decision-making, so the demand characteristics were likely quite evident. As Topolinski and Strack (2008, p. 1032) explained, “where there is a will—there is no intuition.” If participants are consciously reacting to demand characteristics to be intuitive, then the data we are collecting are likely not intuition. Having demonstrated at least some convergent validity between measures of moral intuition, the next step in refining these measures should be to evaluate their criterion validity independently. That being said, this project represents the first attempt to establish convergent validity between measures of moral intuition. It demonstrated that two measures of moral intuition currently being used by psychologists are unrelated. The adoption of a definition of moral intuition that emphasizes the processes thought to underlie intuitive cognition allowed for the construction of two new means of measuring moral intuition. In Study 3, moral cognition scale scores from the MCSI and self-reported cognitive deliberation scores added to the SRM-SF predicted scores on the moral dumbfounding task, the first indication of convergent validity between measures of moral intuition. This research suggests that, by focusing on the processes that underlie cognition, psychologists should be able to develop a greater understanding of moral intuition and its relation to deliberative moral cognition.
Fortunately, many researchers in the field of moral intuition have come to recognize that the arguments concerning whether affect or cognition drives intuition, or whether intuition or deliberation is the primary seat of moral decision-making, are, to some extent, false dichotomies. One cannot construct a truly sentimentalist theory of moral intuition without including some evaluative mechanism: before we feel disgust or moral indignation, some rule must have been violated; without such a rule there is no way to distinguish between the experience of things feeling bad and things feeling wrong (Hauser, 2006). Of course, affect plays a role in moral cognition; the affective responses of others evoke empathic responses in most well-functioning people, and strong feelings amplify our moral judgments, imbuing them with motivational force (Schnall et al., 2008; Wheatley & Haidt, 2005). And, of course, in situations where the moral rules are unclear, where there are conflicting moral rules, or where the consequences of obeying the rules might prove overwhelmingly negative, we must rely on our deliberative resources to make good decisions (Nichols & Mallon, 2006). Thus, affect and cognition are both involved in moral intuition, and intuition and deliberation play complementary roles in people’s everyday moral decisions. The task for those interested in moral cognition is to uncover how these forces combine in the course of everyday decision-making; and to that end, progress is being made.

References

Alexander, A. (1852). Outlines of a moral science. New York: Charles Scribner.
Alonso, D., & Fernandez-Berrocal, P. (2003). Irrational decisions: Attending to numbers rather than ratios. Personality and Individual Differences, 35, 1537-1547.
Andersen, S. B., Moskowitz, G. B., Blair, I. V., & Nosek, B. A. (2007). Automatic thought. In A. W. Kruglanski & E. T. Higgins (Eds.), Social psychology: Handbook of basic principles (2nd ed., pp. 138-175). New York: Guilford.
Aquino, K., & Reed, A., II. (2002). The self-importance of moral identity. Journal of Personality and Social Psychology, 83, 1423-1440.
Baldwin, J. M. (1898). Social and ethical interpretation in mental development. New York: Macmillan.
Bargh, J. A. (1994). The four horsemen of automaticity: Awareness, intention, efficiency, and control in social cognition. In R. S. Wyer & T. K. Srull (Eds.), The handbook of social cognition (Vol. 1, pp. 1-40). Hillsdale, NJ: Erlbaum.
Baron-Cohen, S., & Wheelwright, S. (2004). The empathy quotient: An investigation of adults with Asperger Syndrome or high functioning autism. Journal of Autism and Developmental Disorders, 34, 163-175.
Bartsch, K., & Cole Wright, J. (2005). Towards an intuitionist account of moral development. Behavioral and Brain Sciences, 28, 546-547.
Basinger, K. S., Gibbs, J. C., & Fuller, D. (1995). Context and the measurement of moral judgment. International Journal of Behavioral Development, 18, 537-556.
Baumrind, D. (1978). A dialectical materialist’s perspective on knowing social reality. In W. Damon (Ed.), New directions for child development: Moral development (No. 2, pp. 61-82). San Francisco: Jossey-Bass.
Bjorklund, F., & Backstrom, M. (2008). Individual differences in processing styles: Validity of the Rational–Experiential Inventory. Scandinavian Journal of Psychology, 49, 439-446.
Bry, C., Follenfant, A., & Meyer, T. (2008). Blonde like me: When self-construals moderate stereotype priming effects on intellectual performance. Journal of Experimental Social Psychology, 44, 751-757.
Chaiken, S., & Trope, Y. (1999). Dual-process theories in social psychology. New York: Guilford Press.
Colby, A. (1978). Evolution of a moral-developmental theory. In W. Damon (Ed.), New directions for child development: Moral development (No. 2, pp. 89-104). San Francisco: Jossey-Bass.
Colby, A., & Damon, W. (1992).
Some do care: Contemporary lives of moral commitment. New York: Free Press.
Colby, A., & Kohlberg, L. (1987). The measurement of moral judgment (Vol. 1). New York: Cambridge University Press.
Cushman, F. (2008). Crime and punishment: Distinguishing the roles of causal and intentional analysis in moral judgment. Cognition, 108, 353-380.
Davidson, P., & Youniss, J. (1991). Which comes first, morality or identity? In W. M. Kurtines & J. L. Gewirtz (Eds.), Handbook of moral behavior and development, Vol. 1: Theory (pp. 105-121). Hillsdale, NJ: Erlbaum.
de Waal, F. (1991). The chimpanzee’s sense of social regularity and its relation to the human sense of justice. American Behavioral Scientist, 34, 335-349.
Epstein, S. (1998). Cognitive-experiential self-theory: A dual-process personality theory with implications for diagnosis and psychotherapy. In R. F. Bornstein & J. M. Masling (Eds.), Empirical perspectives on the psychoanalytic unconscious (pp. 99-140). Washington, DC: American Psychological Association.
Epstein, S. (2008). Intuition from the perspective of cognitive-experiential self-theory. In H. Plessner, C. Betsch, & T. Betsch (Eds.), Intuition in judgment and decision making (pp. 23-38). New York: Erlbaum.
Epstein, S., Denes-Raj, V., & Pacini, R. (1995). The Linda problem revisited from the perspective of cognitive-experiential self-theory. Personality and Social Psychology Bulletin, 11, 1124-1138.
Epstein, S., Lipson, A., Holstein, C., & Huh, E. (1992). Irrational reactions to negative outcomes: Evidence for two conceptual systems. Journal of Personality and Social Psychology, 62, 328-339.
Fiske, A. P. (1991). Structures of social life: The four elementary forms of human relations. New York: Free Press.
Fiske, A. P. (1992). The four elementary forms of sociality: Framework for a unified theory of social relations. Psychological Review, 99, 689-723.
Gibbs, J. C., Basinger, K. S., & Fuller, D. (1992).
Moral maturity: Measuring the development of sociomoral reflection. Hillsdale, NJ: Erlbaum.
Gigerenzer, G., Czerlinski, J., & Martignon, L. (2002). How good are fast and frugal heuristics? In T. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 559-581). New York: Cambridge University Press.
Gilligan, C. (1982). In a different voice: Psychological theory and women’s development. Cambridge, MA: Harvard University Press.
Gilovich, T., Griffin, D., & Kahneman, D. (Eds.). (2002). Heuristics and biases: The psychology of intuitive judgment. New York: Cambridge University Press.
Greene, J. (2005). Cognitive neuroscience and the structure of the moral mind. In P. Carruthers, S. Laurence, & S. Stich (Eds.), The innate mind: Structure and contents (Vol. 1, pp. 338-352). New York: Oxford University Press.
Greene, J., & Haidt, J. (2002). How (and where) does moral judgment work? Trends in Cognitive Sciences, 6, 517-523.
Greene, J., Sommerville, R., Nystrom, L., Darley, J., & Cohen, J. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293, 2105-2108.
Haan, N. (1975). Hypothetical and actual moral reasoning in a situation of social disobedience. Journal of Personality and Social Psychology, 32, 255-270.
Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108, 814-834.
Haidt, J. (2004). The emotional dog gets mistaken for a possum. Review of General Psychology, 8, 283-290.
Haidt, J. (2007). The new synthesis in moral psychology. Science, 316, 998-1002.
Haidt, J. (2008). Morality. Perspectives on Psychological Science, 3, 65-72.
Haidt, J., & Bjorklund, F. (2008a). Social intuitionists answer six questions about moral psychology. In W. Sinnott-Armstrong (Ed.), Moral psychology: Vol. 2. The cognitive science of morality: Intuition and diversity (pp. 181-217). Cambridge, MA: MIT Press.
Haidt, J., & Bjorklund, F. (2008b).
Social intuitionists reason, in conversation. In W. Sinnott-Armstrong (Ed.), Moral psychology: Vol. 2. The cognitive science of morality: Intuition and diversity (pp. 241-254). Cambridge, MA: MIT Press.
Haidt, J., Bjorklund, F., & Murphy, S. (2000). Moral dumbfounding: When intuition finds no reason. Unpublished manuscript, University of Virginia.
Haidt, J., & Graham, J. (2007). When morality opposes justice: Conservatives have moral intuitions that liberals may not recognize. Social Justice Research, 20, 98-116.
Haidt, J., & Joseph, C. (2004). Intuitive ethics: How innately prepared intuitions generate culturally variable virtues. Daedalus, 133, 55-66.
Hauser, M. D. (2006). Moral minds: How nature designed our universal sense of right and wrong. New York: Harper Collins.
Higgins, E. T., & Brendl, C. M. (1995). Accessibility and applicability: Some “activation rules” influencing judgment. Journal of Experimental Social Psychology, 31, 218-243.
Huebner, B., Dwyer, S., & Hauser, M. (2009). The role of emotion in moral psychology. Trends in Cognitive Sciences, 13, 1-6.
Hume, D. (2007). A treatise of human nature. Charleston, SC: Bibliolife. (Original work published 1740)
Kahneman, D. (2003). A perspective on judgment and choice: Mapping bounded rationality. American Psychologist, 58, 697-720.
Kahneman, D., & Frederick, S. (2002). Representativeness revisited: Attribute substitution in intuitive judgment. In T. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 49-81). New York: Cambridge University Press.
Kahneman, D., Schkade, D. A., & Sunstein, C. R. (1998). Shared outrage and erratic rewards: The psychology of punitive damages. Journal of Risk and Uncertainty, 16, 49-86.
Kahneman, D., & Tversky, A. (1973). On the psychology of prediction. Psychological Review, 80, 237-251.
Kahneman, D., & Tversky, A. (1984). Choices, values, and frames. American Psychologist, 39, 341-350.
Kant, I. (1948).
Groundwork of the metaphysic of morals (H. J. Paton, Trans.). London: Hutchinson’s University Library. (Original work published 1785)
Kohlberg, L. (1969). Stage and sequence: The cognitive-developmental approach to socialization. In D. A. Goslin (Ed.), Handbook of socialization theory and research (pp. 347-480). Chicago: Rand McNally.
Kohlberg, L. (1981). Essays on moral development: Vol. 1. The philosophy of moral development. San Francisco: Harper & Row.
Kohlberg, L. (1984a). Essays on moral development: Vol. 2. The psychology of moral development. San Francisco: Harper & Row.
Kohlberg, L., with Higgins, A., Tappan, M., & Schrader, D. (1984b). From substages to moral types: Heteronomous and autonomous morality. In L. Kohlberg, Essays on moral development: Vol. 2. The psychology of moral development (pp. 652-683). San Francisco: Harper & Row.
Kohlberg, L., Levine, C., & Hewer, A. (1983). Moral stages: A current formulation and a response to critics. Basel: Karger.
Krebs, D. L., & Denton, K. (2005). Toward a pragmatic theory of moral functioning. Psychological Review, 112, 629-649.
Krebs, D. L., Vermeulen, S. C. A., Carpendale, J. I., & Denton, K. (1991). Structural and situational influences on moral judgment: The interaction between stage and dilemma. In W. M. Kurtines & J. L. Gewirtz (Eds.), Handbook of moral behavior and development, Vol. 2: Research (pp. 139-169). Hillsdale, NJ: Erlbaum.
Krettenauer, T., & Edelstein, W. (1999). From substages to moral types and beyond: An analysis of core criteria for morally autonomous judgments. International Journal of Behavioral Development, 23, 899-920.
MacNamara, J. (1991). The development of moral reasoning and the foundations of geometry. Journal for the Theory of Social Behavior, 21, 125-150.
Mikhail, J. (2007). Universal moral grammar: Theory, evidence, and the future. Trends in Cognitive Sciences, 11, 143-152.
Miller, C. H., Burgoon, J. K., & Hall, J. R. (2007).
The effects of implicit theories of moral character on affective reactions to moral transgressions. Social Cognition, 25, 819-832.
Minoura, Y. (1992). A sensitive period for the incorporation of a cultural meaning system: A study of Japanese children growing up in the United States. Ethos, 20, 304-339.
Nahmias, E., Morris, S., Nadelhoffer, T., & Turner, J. (2005). Surveying freedom: Folk intuitions about free will and moral responsibility. Philosophical Psychology, 18, 561-584.
Narvaez, D. (2008). The social-intuitionist model: Some counter-intuitions. In W. A. Sinnott-Armstrong (Ed.), Moral psychology, Vol. 2: The cognitive science of morality (pp. 233-240). Cambridge, MA: MIT Press.
Narvaez, D., & Lapsley, D. K. (2005). The psychological foundations of everyday morality and moral expertise. In D. K. Lapsley & F. C. Power (Eds.), Character psychology and character education (pp. 140-165). Notre Dame, IN: University of Notre Dame Press.
Narvaez, D., Lapsley, D. K., Hagele, S., & Lasky, B. (2006). Moral chronicity and social information processing: Tests of a social cognitive approach to the moral personality. Journal of Research in Personality, 40, 966-985.
Newitt, C. S., & Walker, L. J. (2002, November). The personality features of autonomous and heteronomous moral types. Paper presented at the meeting of the Association for Moral Education, Chicago.
Nichols, S. (2008). Sentimentalism naturalized. In W. A. Sinnott-Armstrong (Ed.), Moral psychology, Vol. 2: The cognitive science of morality (pp. 255-274). Cambridge, MA: MIT Press.
Nichols, S., & Mallon, R. (2006). Moral dilemmas and moral rules. Cognition, 100, 530-542.
Osbeck, L. M. (1999). Conceptual problems in the development of a psychological notion of “intuition.” Journal for the Theory of Social Behaviour, 29, 229-250.
Pacini, R., & Epstein, S. (1999). The relation of rational and experiential information processing styles to personality, basic beliefs, and the ratio-bias phenomenon.
Journal of Personality and Social Psychology, 76, 972-987.
Pearsall, J., & Trumble, B. (Eds.). (1996). The Oxford English reference dictionary (2nd ed.). New York: Oxford University Press.
Piaget, J. (1965). The moral judgment of the child. New York: Free Press. (Original work published 1932)
Pizarro, D. A., Uhlmann, E., & Bloom, P. (2003). Causal deviance and the attribution of moral responsibility. Journal of Experimental Social Psychology, 39, 653-660.
Reed, A., II, & Aquino, K. (2003). Moral identity and the circle of moral regard towards out-groups. Journal of Personality and Social Psychology, 84, 1270-1286.
Rozin, P., & Nemeroff, C. (2002). Sympathetic magical thinking: The contagion and similarity heuristics. In T. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 201-216). New York: Cambridge University Press.
Schnall, S., Haidt, J., Clore, G. L., & Jordan, A. H. (2008). Disgust as embodied moral judgment. Personality and Social Psychology Bulletin, 34, 1096-1109.
Shaver, K. G. (1985). The attribution of blame: Causality, responsibility, and blameworthiness. New York: Springer-Verlag.
Shiloh, S., Salton, E., & Sharabi, D. (2002). Individual differences in rational and intuitive thinking styles as predictors of heuristic responses and framing effects. Personality and Individual Differences, 32, 415-429.
Stanovich, K. E. (2004). Balance in psychological research: The dual process perspective. Behavioral and Brain Sciences, 27, 357-358.
Stanovich, K. E., & West, R. F. (1999). Individual differences in rational thought. Journal of Experimental Psychology: General, 127, 161-188.
Stanovich, K. E., & West, R. F. (2008). On the relative independence of thinking biases and cognitive ability. Journal of Personality and Social Psychology, 94, 672-695.
Sunstein, C. R. (2003). Why societies need dissent. Cambridge, MA: Harvard University Press.
Sunstein, C. R. (2005). Moral heuristics.
Behavioral and Brain Sciences, 28, 531-573.
Tappan, M., Kohlberg, L., Schrader, D., Higgins, A., Armon, C., & Lei, T. (1987). Heteronomy and autonomy in moral development: Two types of moral judgment. In A. Colby & L. Kohlberg, The measurement of moral judgment (Vol. 1, pp. 315-380). New York: Cambridge University Press.
Thomson, J. (1986). Rights, restitution, and risk: Essays in moral theory. Cambridge, MA: Harvard University Press.
Topolinski, S., & Strack, F. (2008). Where there's a will—there's no intuition. The unintentional basis of semantic coherence judgments. Journal of Memory and Language, 58, 1032-1048.
Trevethan, S. D., & Walker, L. J. (1989). Hypothetical versus real-life moral reasoning among psychopathic and delinquent youth. Development and Psychopathology, 1, 91-103.
Turiel, E. (1983). The development of social knowledge: Morality and convention. Cambridge, UK: Cambridge University Press.
Turiel, E. (2006). Thoughts, emotions, and social interactional processes in moral development. In M. Killen & J. Smetana (Eds.), Handbook of moral development (pp. 7-35). Mahwah, NJ: Erlbaum.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124-1131.
Walker, L. J., de Vries, B., & Trevethan, S. D. (1987). Moral stages and orientations in real-life and hypothetical dilemmas. Child Development, 58, 842-858.
Walker, L. J., & Moran, T. J. (1991). Moral reasoning in a communist Chinese society. Journal of Moral Education, 20, 139-155.
Walker, L. J., Pitts, R. C., Hennig, K. H., & Matsuba, M. K. (1995). Reasoning about morality and real-life moral problems. In M. Killen & D. Hart (Eds.), Morality in everyday life: Developmental perspectives (pp. 371-407). New York: Cambridge University Press.
Weber, M. (1949). The methodology of the social sciences. New York: Free Press.
Wegner, D. M., & Bargh, J. A. (1998). Control and automaticity in social life. In D. T. Gilbert, S. T. Fiske, & G.
Lindzey (Eds.), The handbook of social psychology (Vol. 1, pp. 446-496). New York: Oxford University Press.
Werker, J. F., & Tees, R. C. (1984). Cross-language speech perception: Evidence for perceptual reorganization during the first year of life. Infant Behavior and Development, 7, 49-63.
Wheatley, T., & Haidt, J. (2005). Hypnotic disgust makes judgments more severe. Psychological Science, 16, 780-784.
Wilson, E. O. (1975). Sociobiology: The new synthesis. Cambridge, MA: Harvard University Press.
Witteman, C., van den Bercken, J., Claes, L., & Godoy, A. (2009). Assessing rational and intuitive thinking styles. European Journal of Psychological Assessment, 25, 39-47.

Appendix A

Moral Cognition Style Inventory

Please read each statement carefully and decide how well it describes you. Use the scale provided to indicate how well the statement fits the way you typically do things at work, at school, and at home. Write "1" if the statement does not fit you at all, that is, you almost never do things this way. Write "7" if the statement fits you extremely well, that is, you almost always do things this way. Use the values in between to indicate that the statement fits you in varying degrees:

1 = not at all well; 2 = not very well; 3 = slightly well; 4 = somewhat well; 5 = well; 6 = very well; 7 = extremely well

There are, of course, no right or wrong answers. Please read each statement and write next to the statement the scale number that best indicates how well the statement describes you. IN ORDER FOR THE SCALE TO BE VALID, YOU MUST ANSWER EVERY QUESTION.

1. When I am in a conflict with someone, I can usually trust my hunches to find an appropriate solution.
2. When I am in a conflict with someone, I usually try to think about the situation from the other person's perspective before I decide on an appropriate solution.
3. When dealing with a problem, I am good at perceiving the dynamics of the situation.
4.
When confronted with a problem, I usually need to concentrate to clearly understand the situation.
5. When confronted with moral issues, I usually understand the central issue without much thought.
6. When thinking about moral issues, I find that I need to concentrate in order to extrapolate the moral principles at the heart of the issue.
7. When resolving conflicts with people, my intuition usually serves me well.
8. When dealing with people, logic typically serves me well.
9. I can usually tell if someone is a good or bad person without much thought.
10. I get a sense from people; I can usually feel whether people are good or bad.
11. I am usually quite careful in considering whether people are good or bad; I like to carefully consider their actions, and the contexts of those actions, before I make a decision.
12. When deciding that an action is good or bad, I usually like to take my time and consider the situation carefully.
13. I just seem to know that certain actions are good or bad; it doesn't take much thought at all.
14. I can tell from the way that things make me feel, like if they make my stomach turn or the hair on my arm stand on end, whether they are good or bad.
15. In most situations I just seem to know what my obligations are; I don't really need to think about it.
16. I can usually sense what my obligations are in most situations.
17. I usually try to think carefully when I consider what are my obligations or responsibilities, and what are not.
18. When I am in conflict with someone, I don't feel that I can rely on my intuition to arrive at the best outcome.
19. When I am in a conflict with someone, I don't usually consider the issue from the other person's perspective before I decide on what I think is the best outcome.
20. When I am trying to resolve a problem, I usually focus on the task at hand with little awareness of the situation that caused the problem.
21.
When I encounter a problem, I usually understand the issue without needing to concentrate very much.
22. When resolving conflicts with people, my intuition doesn't usually serve me well.
23. When dealing with people, logic typically doesn't serve me well.
24. I can't tell if someone is a good or bad person without a lot of thought.
25. I don't get a sense from people; I can't usually feel whether people are good or bad.
26. I am not very careful in considering whether people are good or bad; I don't like to take the time to consider their actions, and the contexts of those actions, before I make a decision.
27. When deciding that an action is good or bad, I don't usually like to take my time to consider the situation carefully.
28. I don't just know that certain actions are good or bad; it takes a lot of thought.
29. I can't tell from the way that things make me feel whether they are good or bad.
30. In most situations I don't immediately know what my obligations are; I really need to think about it.
31. I can't usually feel what my obligations are in most situations.
32. I don't typically bother to think carefully when I consider what are my obligations or responsibilities, and what are not.
33. When I am in conflict with someone, I don't feel that I can rely on my intuition to arrive at the best outcome.

Note. Affective intuition is assessed by items 1, 7, 10, 14, 16, 18, 22, 25, 29, 31, 33; principled intuition is assessed by items 3, 4, 5, 9, 13, 15, 20, 21, 24, 28, 30; and rational reflection is assessed by items 2, 6, 8, 11, 12, 17, 19, 23, 26, 27, 32.

Appendix B

Ethics Approval Forms