UBC Theses and Dissertations


A normative account of risk Ahmad, Rana Amber 2009


A NORMATIVE ACCOUNT OF RISK

by

Rana Amber Ahmad

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in THE FACULTY OF ARTS (Philosophy)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

October 2009

© Rana Amber Ahmad, 2009

Abstract

Risks, even though familiar, are in fact more complicated than they might first seem. To call something a risk is to describe potentially harmful events in the world. However, sometimes we mean to do more than merely describe the chance of harm occurring. To call something a risk is also to mean that some action ought to be taken to avoid it. By providing reasons for action, risk can also be a source of normative force. There are therefore two senses of risk, the descriptive and the prescriptive, yet it is often the case that only the descriptive sense is recognized. Many technical views, such as those of risk analysts or scientists, assume that risks are objective matters of fact; they also assume that what counts as harmful includes only a limited range of events such as illness, injury or death. Criticisms from other disciplines such as the social sciences and psychology have challenged the narrowly construed technical views and argued that, in fact, it is difficult to distinguish what is harmful, tolerable or desirable. Incorporating these concerns, I argue that a more encompassing definition of risk is the chance of some harm occurring, where harm is understood to be whatever negatively affects something of value to someone. This thesis will provide an account of normative risk by first placing it in historical context to explain how this definition of risk emerges. I will then provide an argument against the idea that risk is an objective matter of fact, and thus an ontological categorization, in favour of the view that it is subjective, since harm is a matter of evaluation. Following this, I then propose that risk is weakly normative in a similar way to morality. Finally, I argue that when risk is understood as what might negatively affect something of value, and since a single situation might involve a threat to different values at the same time, it might be the case that different actions are prescribed.

Table of Contents

Abstract
Table of Contents
List of Figures
Acknowledgements
Dedication
1 Introduction
2 A Brief History of Risk
  2.1 Introduction
  2.2 The Historical Account of Risk
    2.2.1 The Emergence of Risk
    2.2.2 The Systematic Applications of Probability Theory
    2.2.3 The Importance of Hope and Fear
    2.2.4 The Notion of Utility
    2.2.5 Risk in the Twentieth Century
  2.3 Risk Today
    2.3.1 Current Accounts of Risk
  2.4 Alternative Accounts of Risk
    2.4.1 Economic Accounts
    2.4.2 Psychological Accounts
    2.4.3 Social and Cultural Accounts
  2.5 A Broader Debate about Values and Risk
  2.6 Conclusion
3 A Challenge to the Ontological View of Risk
  3.1 Introduction
  3.2 Rescher’s Account
    3.2.1 Evaluating and Comparing Risk
  3.3 Thompson’s Account
  3.4 Analysis of Rescher and Thompson
  3.5 Conclusion
4 Providing Reasons for Action
  4.1 Introduction
  4.2 A Basic Understanding of Risk
  4.3 Prescribing Action: What is Rational is Normative
  4.4 Chance or Uncertainty in Risk
  4.5 Harm and Undesirability in Risks
  4.6 Good Risks and Inverted Risks
  4.7 A Comparison with Morality
  4.8 The Matter of Perspective
  4.9 Illustration of a Risky Choice
  4.10 Conclusion
5 Three Categories of Values
  5.1 Introduction
  5.2 Empirical Findings About Risk
  5.3 Heuristics, Biases and Mood
  5.4 Trust, Betrayal and Obligation
  5.5 Aspirations as Possible Harms
  5.6 Standard Risk Values
  5.7 Aspirational Risk Values
  5.8 Moral Risk Values
  5.9 Case Study: Cognitive Enhancers
  5.10 Contributions of Empirical Research
  5.11 The Risks of Cognitive Enhancers
  5.12 Survey on Risks
  5.13 Conclusion
6 Conclusion
Bibliography
Appendix A: Survey Methodology
Appendix B: Survey Results
Appendix C: UBC Research Ethics Board Certificate of Approval

List of Figures

Figure 1: Diagram of a Risky Choice on the Descriptive View
Figure 2: A Model of Choice involving Promise-breaking
Figure 3: The Normative Model of Decision Making Under Risk

Acknowledgements

I would like to acknowledge the efforts of my thesis committee, including Dr. Peter Danielson, Dr. Scott Anderson and Dr. John Beatty, for their insight and invaluable advice. In particular I would like to thank Dr. Danielson for providing me with the opportunity to join his research group at the W. Maurice Young Centre for Applied Ethics at the University of British Columbia. It has been a great benefit to work in this interdisciplinary group and I have learned a great deal because of his direction and support. I would also like to thank Dr. Anderson for his many detailed comments on my work, which were both challenging and helpful. I also thank Dr. Dan Weary, Dr. Cathy Schuppli and Elisabeth Ormandy for their assistance with the survey analysis.

Special thanks are owed to my family, friends and colleagues who have been a source of great support. I especially wish to thank Adele, Sharon, Tarick, and of course Michael, to whom I owe so much.

Dedication

To R.

To reach a port we must sail, sometimes with the wind, and sometimes against it. But we must not drift or lie at anchor. --Oliver Wendell Holmes

1 Introduction

The familiar concept of risk is in fact more complex than usually assumed. There is a wide array of definitions of the word “risk” that range from the very technical to everyday usage.
In the physical sciences, risk is the product of the probability of an event multiplied by some quantitative measure of the event’s consequences; in economics, it is the relative frequency of an undesirable event over time where undesirable events are limited to physical harm to humans or the environment; in psychology, it is a subjective perception of expected utilities and the probability of their occurrence. Some researchers go so far as to claim that there is no correct definition of a risk and that the choice one makes in using a particular definition is in fact “political.”1 Unsurprisingly, the many ways in which “risk” can be defined technically produce vagueness, and it is often difficult to communicate across disciplines when, for example, health care uses “risk” to mean the cause of an unwanted event while in engineering, “risk” refers to the probability of an unwanted event. Garland provides the following summary to highlight the many different accounts of risk and the variety of senses in which the term is used:

Risk is a calculation. Risk is a commodity. Risk is a capital. Risk is a technique of government. Risk is objective and scientifically knowable. Risk is subjective and socially constructed. Risk is a problem, a threat, a source of insecurity. Risk is a pleasure, a thrill, a source of profit and freedom. Risk is the means whereby we colonize and control the future. ‘Risk society’ is our late modern world spinning out of control.2

1 Fischhoff, Watson and Hope, “Defining Risk,” 30. By “political,” Fischhoff and his colleagues explain that the decision expresses “someone’s views regarding the importance of different adverse effects in a particular situation.”
2 Garland, The Rise of Risk, 49.
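The contrast among these disciplinary definitions can be made concrete with a small worked sketch. The figures below are invented purely for illustration (they come from no study cited here), and the two formulas are generic renderings of the physical-science and frequency-style definitions just described:

```python
# Two technical renderings of "risk", with invented numbers for illustration.

# Physical-science style: risk = probability of the event
# multiplied by a quantitative measure of its consequences.
p_event = 0.001          # assumed yearly chance the event occurs
consequence = 50         # assumed measure of harm (e.g. fatalities)
risk_expected = p_event * consequence
print(risk_expected)     # 0.05 (expected fatalities per year)

# Economics style: risk as the relative frequency of an
# undesirable event over time.
undesirable_events = 20  # assumed count of observed harmful events
exposure_years = 10_000  # assumed person-years of exposure
risk_frequency = undesirable_events / exposure_years
print(risk_frequency)    # 0.002 (events per person-year)
```

The point of the sketch is only that the two quantities answer different questions about the same hazard, which is part of why cross-disciplinary communication about “risk” is difficult.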
Similarly, there are many variations in everyday usage where the word “risk” is used as a noun to indicate the chance of some harm or some unwanted occurrence, but we can also attribute the characteristic of “risk” or “risky” to people, actions, events, or objects. Additionally, we talk about “taking risks”, being “at risk”, and “risking.” Risking is an action we take that exposes us to the chance of some harm or loss; sometimes we actively seek out risks in this way rather than merely avoid them. People choose to ski, drive their cars and go rock-climbing even though such activities involve the chance of injury or death. We also say that there are “good” risks and “bad” risks, usually meaning that the chance of avoiding some unwanted outcome is high in the first place and low in the second. Over the last forty years (since the emergence of formalized risk analysis), one of the most significant distinctions between different kinds of risk has become that between objective and subjective risk. Fischhoff and his colleagues explain that objective risk is “the product of scientific research, primarily public health statistics, experimental studies, epidemiological surveys, and probabilistic risk analyses. The latter [subjective risk] refers to non-expert perceptions of that research, embellished by whatever other considerations seize the public mind.”3 On this view, much depends on making a clear delineation between experts—scientists and risk researchers—and non-experts (everyone else). The distinction between subjective (or perceived) and objective risks results in the separation of technical analyses by experts from psychological, political or social analyses dealing with non-expert views.

3 Fischhoff, Watson and Hope, “Defining Risk,” 30.

There is in fact much evidence to suggest that non-experts sometimes have puzzling views about risks.4 For example, Starr first concluded that people will accept risks incurred during voluntary activities (e.g.
skiing) that are about one thousand times greater than they will accept when they are involuntarily imposed upon them (e.g. food preservatives).5 This conclusion inspired research leading to the classic report Acceptable Risk, which described how people are more afraid of the harm from something new or unfamiliar but with a low probability of occurring (e.g. nuclear meltdowns) than they are of the harm from something familiar (e.g. driving a car) with a high probability.6 Although it is acknowledged that even experts have to make judgments about what constitutes a harmful event and what kinds of considerations need to be taken into account in identifying risks, there remains a division between what is objective or “real” for experts (e.g. scientists and risk analysts) versus what is subjective or “perceived” for non-experts. Slovic et al, for instance, compared the responses from two groups on the relative risks of 30 activities and technologies.7 One group was composed of 15 national experts on risk assessment while the other group was composed of 40 members of the U.S. League of Women Voters. The disparities found between the two groups were significant. The study’s authors explain that “[t]he experts’ mean judgments were so closely related to the statistical or calculated frequencies that it seems reasonable to conclude that they viewed the risk of an activity or technology as synonymous with its annual fatalities.”8 The experts ranked nuclear power 20th while the League of Women Voters ranked it as the number one risk. While the experts put X-rays at number 7 in their rankings, the League members ranked them at number 20.

4 Slovic, “Perception of Risk,” 280.
5 Starr, “Social Benefit versus Technological Risk,” 1232.
6 Fischhoff et al, Acceptable Risk.
7 Lichtenstein et al, “Judged Frequency of Lethal Events,” 14-20, 36-39.
8 Slovic, Fischhoff and Lichtenstein, “Rating the Risks,” 68.
The conclusion seems to be that the average person is misinformed about risks and therefore bases decisions on perceptions of both the likelihood and severity of possible harms, perceptions which are often incorrect. Experts, in contrast, often seem to have a better understanding of ‘real’ risks thanks to the data they have access to and their background knowledge. The discrepancy between expert and non-expert perceptions of risky technologies and activities is sometimes attributed to various cognitive limitations such as a dependence on judgmental heuristics. For example, Kahneman and Tversky found that people judge an event as more likely and less risky if it is easy to recall.9 Relying on heuristics is thought to distort people’s understanding of risks and partially explain their inability to rank various hazards according to their actual probability of occurrence. The problem seems to be that many of these informative studies on people’s perceptions of risk begin with the assumption that a risk is an objective and usually measurable fact. There are many instances, however, when this assumption does not adequately capture what the average person means when they say that something is risky. To return to the distinction between objective and perceived risks, we can say that the chance of some harm or unwanted event is a fact about the world which is quantifiable and measurable while questions about what kinds of harm matter to people concern the perceptions of these facts. On this view, a risk is something that describes facts about the world. It is possible to be wrong about descriptions, and this is more likely if one does not know all the facts. However, sometimes when we say something is risky we mean to do more than merely describe the chance of some harm occurring or communicate our perception of possible harm.

9 Kahneman and Tversky, “On the Psychology of Prediction.” I discuss biases and heuristics in risk in more detail in Chapter 5.
If my sister tells me that X-rays are risky, she may base this on her factual knowledge or on her perceived opinion about X-rays. But the accuracy of her knowledge is not what matters if she means to convey the implicit message that I ought to be cautious and try to limit my exposure to something potentially harmful. There is thus another distinction to be made which reflects the two possible senses of “risk”: the descriptive and the prescriptive (ought to do) senses.

Two Senses of “Risk”

People talk about risks in many different ways. It is possible to take a risk, to be at risk, or to make a risky decision. Sometimes the meaning of the word “risk” is situation-dependent and context-specific. There are two main distinctions that can be made which help to clarify a speaker’s meaning. First, “risk” is used descriptively to point out that some event or action involves the chance of some unwanted outcome or to indicate the degree or exact probability of the unwanted outcome. Second, “risk” is used to recommend that some kind of action ought to be taken. I will provide some illustrations that outline some of the uses that support this claim but are not meant to be definitive.

The Descriptive Sense

The following is a list of generic illustrations where “risk” is used to describe the chance of some unwanted outcome.

1. About 20,000 Americans die from air pollution east of the Mississippi every year and about 100 million Americans are exposed to this polluted air, so the risk of dying from air pollution in any given year in the Eastern US is 0.0002.10
2. “Before 1996 when the drug cocktails were not widely available, the heightened death risk [for those with HIV] ranged from nearly 8 percent to 20 percent depending on age...”11 (Newspaper headline)
3. Risks are a part of everyday life.
4. “Ontario girl going home after risky heart surgery.”12 (Newspaper headline)
5.
“A man who risked his life to infiltrate and spy on the Irish Republican Army 20 years ago is protesting a new film inspired by his story and set to debut at…”13 (Newspaper headline)

In each of these examples, the term “risk” is used descriptively. Examples 1 and 2 use “risk” to indicate what the chance or probability of death or harm caused by something has been calculated to be. When reported quantitatively, as it often is in epidemiological studies for instance, the term describes the calculable chances of harm occurring and equates “risk” with the result of that calculation. These examples are typical of the objective categorization produced by experts. Example 3 is simply a statement about the possibility of harm that is just an unavoidable part of living. It describes the conditions under which daily life takes place but does not recommend any particular action since it would be impossible to avoid all risks. Without any other context or background information about the speaker, the audience or the circumstances the statement might make reference to, it is impossible to determine what other message might be meant, so in this case it simply describes. Example 4 is a more complicated case that demonstrates that the two senses of risk are often ambiguous because much depends on their context. The surgery in question obviously involved the chance of some harm to the patient. Since it is well known that all surgeries involve some risk, a “risky surgery” is notable either for the increased likelihood of something going wrong, or for the severity of the harm that might befall the patient.

10 Wilson, “Analyzing the Daily Risk of Life,” 58. This number is calculated by dividing 20,000 by 100 million.
11 http://www.canada.com/health/People+with+living+longer+study+shows/797024/story.html
12 http://www.cbc.ca/canada/toronto/story/2009/08/13/heart-operation.html
13 http://www.cbc.ca/arts/tiff/story/2008/08/26/50deadmen-mcgartland-tiff.html
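The figure quoted in example 1 is just the rate calculation the accompanying note describes (deaths per year divided by the exposed population). A minimal sketch using only the numbers given in the example:

```python
# Example 1's annual risk figure: deaths per year divided by
# the number of people exposed.
deaths_per_year = 20_000
exposed_population = 100_000_000
annual_risk = deaths_per_year / exposed_population
print(annual_risk)  # 0.0002
```

As the surrounding discussion emphasizes, the resulting number is a population-level description; nothing in the arithmetic itself tells any particular person what to do about it.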
The description of the surgery is not targeted to anyone in particular other than newspaper readers. The people involved have made all the relevant decisions about whether the girl should undergo the surgery in question and now we are merely learning about the end result of those decisions, the circumstances under which the surgery was performed and perhaps the skill or vigilance of those who performed it. In this way, “risky” is used to provide us with more information about the circumstances of the story and to describe something that has already occurred. While it is tempting to think that the temporal element (past, present or future) will determine which sense of “risk” is meant, it often does not. For example, if I say to my brother while we are skiing that “taking that jump was a risk”, I might mean only to say something descriptive about the particular jump he took (i.e. it was fairly big, poorly shaped, or near an obstacle) or about his actions which exposed him to the possibility of harm. It is more likely, however, that I also mean to say that he was foolish, that he should have known better and that he ought not to have taken the jump. This second meaning of my statement would be even more pronounced if I used an angry tone of voice to express it. My brother knows that I have an interest in his well-being and that if I raise my voice, it is probably because I am concerned for him. My intent is to persuade him not to repeat his action or at least to be more cautious. Example 4 is one case where “risk” is descriptive because it refers to something that has occurred in the past, but context will often determine which sense is meant. Similarly, in example 5, “risked” is also used descriptively about events in the past. If a man risked his life to infiltrate the IRA, then it is understood that he subjected himself to the chance of being killed in doing so.
This provides the audience with further information about the man and the decision he made but is directed at no one in particular and refers to events that have already transpired. Although 4 and 5 are about events in the past, descriptive meanings of “risk” can also be about present or future events.

The ‘ought-to do something’ or prescriptive sense

The second way in which “risk” is used also describes an event or action as entailing the chance of some harm, but there is a further meaning as well. Consider the following examples, also taken from recent headlines:

6. “Third-hand smoke poses risk to infants, doctors say.”14
7. “Alleged Calgary gangster deemed flight risk, remains in custody.”15
8. “Belief in cancer myths risky: report.”16
9. “Americans believe Whitehorse man a risk.”17

Example 6 describes the potential harm to infants caused by their exposure to the poisonous particulate matter (third-hand smoke) deposited onto objects by smoking.

14 http://www.cbc.ca/health/story/2009/01/06/smoking-third.html
15 http://www.cbc.ca/canada/calgary/story/2008/10/30/tran-detention-review.html
16 http://cbc.ca/health/story/2008/08/26/cancer-survey.html
17 http://www.cbc.ca/news/story/2001/07/09/09flathe.html
For example, the writer of the article goes on to suggest that smokers should avoid smoking inside where infants might be and that parents should wear a smoking jacket outside and wash their hands before they come into contact with their children.18 If third-hand smoke is identified as a “risk” to infants, then the function of “risk” in this case is both to inform and to recommend. To expose one’s infant to poisons is to do something wrong. Similarly, in example 8, an alleged gangster is described as a flight risk, meaning that he is likely to leave town or escape to another country to avoid prosecution, but here the term “risk” is also a reason for action. In this case, labeling the alleged gangster a flight risk provides justification for denying him bail and holding him in custody; it does not merely describe the fact that he may or may not escape. “Risk” is being used to say something like “there is a chance this person will escape prosecution so some action is warranted to prevent it.” In example 9, readers are told that maintaining beliefs about cancer that are grounded in myths is “risky” since they can lead to misunderstandings about the causes of cancer and about the importance of controllable factors such as diet and lifestyle choices. Therefore, if a person wants to take steps to reduce their chances of developing cancer, they ought to find out if their beliefs about cancer are correct. 18  http://www.cbc.ca/health/story/2009/01/06/smoking-third.html  9  In the final example, “risk” is used to warn residents about the possibility of harm arising from a convicted sex offender living in Whitehorse. To tell residents that this man is a “risk” is to say that he is someone who could be dangerous, that they ought to be aware of his presence, and to take precautions that they would normally not take. When “risk” is used in this second “ought-to” sense, failing to act may produce some degree of blame or responsibility. 
If a person is told that getting their neck manipulated by a chiropractor is risky, yet they choose to have the procedure done anyway and subsequently suffer some sort of injury, they are held (at least partially) responsible for exposing themselves to harm. It might be argued, however, that there also is an element of responsibility for those who fail to take action when they are told about the probability of harm (descriptive risk). Suppose that a man going out for a walk in a thunderstorm is told that the odds of being struck by lightning leading to death or injury each year is 1 in 700,000.19 If he goes for his walk anyway and gets struck it might be possible to hold him partly to blame, but to do so we would have to assume he understood how to interpret the probabilistic information provided. But it is often difficult for the average person to make sense of such data. It is not clear how the overall likelihood of being struck by lightning relates to one’s decision to take a walk. Even if pointing out the exact degree of risk he faced was meant to serve as a warning, the intent might not have been clear to him. If the same man is instead told by his wife, who is concerned about his safety, that walking in a thunderstorm is risky because of the chance of being hit by lightning, it is easier to see that he has done something worthy of some blame if he goes out and gets hit. He has missed, or chosen to disregard, something important that his wife meant to say to him that did not require special knowledge to interpret. 19  http://www.lightningsafety.noaa.gov/medical.html  10  By providing reasons for action, risk can be a source of normative force as long as one is averse to possible harm. When we say that something is “risky”, we mean that there is good reason to think that it involves the chance of some unwanted outcome and that some action ought to be taken to avoid it. 
An account of risk that includes both senses in which it is used therefore cannot rely solely on probabilities or numerical estimates. However, it is often taken for granted, particularly in technical definitions and analyses of risk, that risk is only descriptive. These technical definitions in turn inform many other studies and accounts of risk. The sort of objective risk (meaning the measurable chance of harm) in question is about what is ‘real’ in comparison to perceived (subjective) risk, which concerns the various kinds of attitudes and judgments people have about what is ‘real’.20 The emphasis of most risk-related study has been primarily on the descriptive sense of risk. In Renn’s thorough review of the literature spanning thirty years of risk research from the 1960s up to the turn of this century, he explains that there is no commonly accepted definition of the term “risk” and, like others, notes that the term often refers to the possibility that natural or human events will produce undesirable consequences but that there is ambiguity over what counts as undesirable given that some people actively seek out “good risks” (in extreme sports for instance) and that in economic theory both gains and losses are described by “risk.”21 He therefore proposes a definition that attempts to encompass these different aspects reflected in the literature: Risks refer to the possibility that human actions or events lead to consequences that affect aspects of what humans value. This definition implies that humans can and will make causal connections between actions (or events). Consequences are 20 21  Slovic, The Perception of Risk, xxxvii Renn, “Thee Decades of Risk Research,” 51.  11  perceived from a non-fatalistic viewpoint. They can be altered either by modifying the initiating activity or event or by mitigating the impacts... Risk is therefore both a descriptive and a normative concept. 
It includes the analysis of cause-effect relationships, which may be scientific, anecdotal, religious or magic … but it also carries the implicit message to reduce undesirable effects through appropriate modification of the causes or, through less desirable, mitigations of the consequences.22

22 Ibid.

Renn proposes this definition because it allows him to distinguish between various conceptions of risk arising from science and engineering, psychology and cultural theory in order to assess their strengths and weaknesses. He does not elaborate further except to say that the normativity of risk is evident since it makes sense that people should want to avoid harm. To make the claim that risk is both a descriptive and a normative concept convincing, further explanation is necessary. My purpose is to provide an account that gives more structure to the idea that a risk can provide reasons for action and that incorporates some of the features of risk that distinguish it from other normative concepts. For Renn, the normativity of a risk is obvious when it involves harm, but this seems problematic. It could be argued that, in fact, risk is not normative at all, but that it simply points out harm in the world that it is rational to want to avoid. If it simply makes sense to want to avoid what is harmful, this does not mean that risk can prescribe action. I will explain why there are reasons to challenge this view and I will suggest that a risk can be weakly normative in the way that morality is normative. To call something a risk is to imply that one ought to act in such a way as to avoid possible harm. If a person fails to heed this message, there are consequences for doing so; however, since a risk is uncertain, these consequences are only possibilities. Another feature of Renn's view is that risks are about what matters to people since they affect whatever is of value to them.
This is an important refinement for the understanding of risk: the narrow, usually technical conception of risk as simply the chance of some objective harm seems too confining. The importance of those things that people value when it comes to decisions involving risk has been recognized throughout its history, yet has not been fully incorporated into the modern definition of risk. Some theories, such as expected utility theory (EUT) and subjective expected utility (SEU), have been designed to take this and other contextual or qualitative considerations into account. The idea that risks are measurable remains problematic, however: even if values or similar considerations are included in the calculation, it is extremely difficult to quantify something like one's hopes for the future. I will provide further reasons to support the view that risks are those events or actions that can negatively affect what is of value to people. My account explains that risk is normative in the way that morality is normative, although it is weaker than morality. This dissertation will not amount to an analysis of risk or all of its features. Each chapter will contribute to the normative account of risk and help to provide the further detail I think necessary to explain how risk can be used both to describe an event or action as involving the chance of some harm, and to give a person reasons for acting. In the second chapter, for example, I will provide a brief historical overview of risk that explains how the concept has developed over time. There has been a long-held view that risks were matters of fact since it was possible to calculate their probability of occurring. While calculating risk proved to be a powerful tool for economics, decision theory and the insurance industry, it also contributed to the idea that risks were not merely matters of fate but could be predicted and therefore avoided.
It was also recognized, however, that risks could be a matter of subjective evaluations. More recently, a number of alternative conceptions of risk have been proposed which provide a much broader understanding and challenge the very notion of objective risks. The claim that risks are objective matters of fact has been entrenched in many disciplines, and there is a particularly interesting debate on risk in philosophy, although little has been written about the philosophy of risk per se. In his philosophical exploration of risk, Rescher makes the claim that risks are at root ontological in that they are facts about the world. Thompson challenges Rescher's claim and suggests that risks are better understood as epistemological, since it seems to matter what people think about what counts as a risk. In Chapter 3, I will describe the debate and argue that Thompson's argument is the more convincing, although I disagree with some of his claims. If risks are ontological categories, then subjective evaluations are external rather than part of what might define them. On Rescher's view, value judgments are confined to people's assessments of risk (i.e., its acceptability or severity) and comparisons between risks. We would have to accept, however, that harms or undesirable events exist in the world apart from any judgment or evaluation: that harms are matters of fact. If risks are epistemological, then a more convincing argument can be made that one's subjective judgment of events in the world determines what will count as a harm. Furthermore, this also supports the view that what is of value to someone can in fact determine what will be harmful to them. In Chapter 4, I provide a more thorough means of understanding the prescriptive sense of risk, going beyond the notion that a harm is simply something that it is rational to avoid. I begin with a generic understanding of risk as the combination of both harm and chance, although this is not meant to be a claim about the nature of risk.
I then suggest that risks are best understood as negative if they are to provide reasons for action, and I go on to provide a more detailed argument in two parts. First, I address the claim that risks are not in fact normative. If we accept that what is rational is normative, then we can say that it is rational to avoid the chance of harm. The reasons we have for acting arise from what it is rational to do. There are some potential limitations with this explanation, however; most importantly, the fact that it does not adequately express what we mean when we call something a risk. There are two components of risk, chance and harm, but their meaning cannot merely be assumed. What counts as harmful is not limited to physical or economic losses, as many applied fields such as risk analysis or epidemiological studies assume. In considering the different types of harm that a risk might entail, I conclude that a risk is more fully understood as a chance that what is of value to someone is negatively affected. The fact that a risk can be labeled good (e.g. low chance of harm or high chance of benefit) might pose a challenge to my account, but I explain that many of these good risks often involve certain background assumptions or constrained circumstances. This discussion helps provide the basis for the claim that risks can be normative the way morality is. Values are not merely ascribed to events or decisions involving risks, but are internal to one's understanding of such events as risky to begin with. Singer suggests that morality can act like a filter for our decisions. If, for example, we want to keep our promises, we cannot merely weigh the costs and benefits of possible outcomes for each option. Our actions must first pass through the filter of promise-keeping (or ethics) and then an analysis of the possible consequences can proceed. Perhaps risk acts in this way as well.
In comparison with morality, however, risks are weakly normative and can be overridden by other concerns. In Chapter 5, I describe some of the empirical investigations concerning the way risk affects how people make decisions. Some of these studies conclude that people are generally poor judges of risk since they often do not act according to the laws of probability and thus exhibit irrationality. Other research, however, has provided evidence to suggest that when faced with a risk involving physical harm and the chance of betrayal, some people will in fact be more likely to avoid the threat of harm to something of moral value (such as their sense of trust or honesty). Other research indicates that risks are threats to our aspirations. Since risks in the real world often involve the chance of harm to more than one thing of value, it might not always be the case that people are irrational or risk-seeking when they seem to prefer a risky choice as predicted by traditional measures. To explore this idea, I propose three broad categories of values: standard physical/economic values, aspirational values and moral values. A threat to these values will count as a risk for some people. If more than one value is threatened at a time, then whatever matters most to a person will be the risk they avoid. Many studies of risk do not take into account the fact that harm can be a matter of such subjective evaluation. As part of a larger research project on the ethical issues of cognitive enhancement, I was able to test this hypothesis empirically. The use of drugs such as Ritalin for improving performance involves risks to one's health, aspirations and morality, making it an ideal case study. I will present some preliminary results which suggest that these three categories might be worth investigating further.
It should be noted that, while I outline some of the major views about risk and the various directions research in the field currently takes, the concept of risk with which my claim begins is not any single one of those described. My aim is to develop an account of risk that aligns with and reflects how the term is used and understood by people in everyday language and in reference to real-life events. In a sense, the general view of risk I use as the foundation for my work is a meta-concept of risk, since it is derived from this substantive and multi-disciplinary area. Risk understood as the combination of both chance and some harm is therefore distinct from any one view, yet reflects something of a consensus amongst all of them. While this might prove unsatisfying to those who adhere to one particular understanding of risk, it is part of the design of my account to incorporate a general description that most could agree on and that, more importantly, directly addresses the way real people talk about risk in their lives.

2  A Brief History of Risk

2.1  Introduction

Risk is often understood in a fairly narrow way. It is most often assumed that when we talk about the risks of cancer, traffic accidents or terrorism, we mean to describe something factual in the world. The chances of death, illness, injury and economic loss can be measured and quantified, or merely described as "high" or "low." This information then helps us to make decisions about how to lead our lives and how much risk might be involved in the actions that we take. On this view, a risk is an objective matter of fact. As I have proposed, although the term "risk" can be used to describe a situation in which an action under consideration entails the possibility of harm, it can sometimes also prescribe action. Additionally, the claim that some action or situation is "risky" is not always objective, since often what counts as a risk is determined by a person's subjective evaluation of harm.
Throughout its history and continuing into technical and scientific fields today, however, there has been a tendency to focus only on risks that can be measured. Recently, challenges to the objective conception of risk have been made by psychologists and social and cultural theorists who argue that a more comprehensive account of risk is needed. In this chapter, I will provide a brief description of the historical evolution of both the concept and current applications of risk. I will also describe today's scientific risk analyses and outline the economic, psychological and cultural theories that are critical of this technical approach that dominates much of the risk literature. The view of risk as both descriptive and normative is not meant to challenge these different conceptions, but rather is inspired and informed by them. Rather than defining risk merely as the chance of harm, it is better understood as the chance of negatively affecting what is of value to people. Risk understood in this way helps to inform further discussion of how risks might both describe facts in the world and also prescribe some action.

2.2  The Historical Account of Risk

2.2.1 The Emergence of Risk

As a formalized concept, risk has a fairly short history beginning around the middle of the sixteenth and early seventeenth centuries.23 The etymology of the word is uncertain, but linguists believe that a new term comes into use when the existing language does not express some problem or situation precisely enough.24 The term "risk" appears to express something (or refer to a problem) that is apparently not fully described by other terms, including danger, chance, luck, courage, fear or adventure. Despite the random occurrences of the word "risk" in everyday language, Luhman says we can nonetheless "presume that the problem lies in the realization that certain advantages are to be gained only if something is at stake.
It is not a matter of the costs, which can be calculated beforehand and traded off against the advantages. It is rather a matter of a decision that, as can be foreseen, will be subsequently regretted if a loss that one had hoped to avert occurs."25 This makes it clear that risk is very much dependent on an understanding of chance or probability. Gambling is perhaps the most obvious form of risk and risk-taking since it involves the chance of some loss.

23 There is much consensus on this time as the starting point for formalized risk. Bernstein, Against the Gods; Furedi, Culture of Fear; Denney, Risk and Society; Hacking, The Emergence of Probability; Zinn, Social Theories of Risk and Uncertainty; Luhman, Risk: A Sociological Theory.
24 Luhman, Risk: A Sociological Theory, 10.
25 Ibid., 11.

Evidence of this can be found as far back as 3500 BC, when paintings in Egyptian tombs portray an early type of dice game.26 Until fairly recently, people generally thought that the outcome of uncertain events, like throwing a die, was ultimately controlled by the gods or some force of nature. There is evidence to suggest that Greek mathematicians and Roman philosophers had begun to consider alternatives to the idea that the gods controlled games of chance.27 However, historians claim that Christianity supplanted such ideas until the seventeenth century, at which time they re-emerged and formed the basis of our modern rationalist conception of risk.28 Up to this point, risk as a concept had been unnecessary since divinatory practices, "although unable to provide reliable security—nevertheless ensure[d] that a personal decision did not arouse the ire of the gods or of other awesome powers, but was safeguarded by contact with the mysterious forces of fate."29 Thus, the idea that humans might have some ability to control or predict the outcome of events that had once seemed beyond control was still largely unfamiliar until a few centuries ago.
The first reference to the word "risk" occurred in German during the sixteenth century, and the same expression was first used in English a century later.30 As a concept, however, risk was first used in reference to early maritime explorations, which were vital for economic growth through trade but exposed sailors, and those who backed them financially, to unpredictable harms. Increased trade meant increased risk, but these risks needed to be weighed against the potential profits; it did not seem prudent to early capitalists to leave everything up to the gods if it was possible to somehow reduce the likelihood of losses.31

26 Bernstein, Against the Gods, 28; David, Games, Gods and Gambling, 1-26.
27 David, Games, Gods and Gambling, 24.
28 See Bernstein, Against the Gods; David, Games, Gods and Gambling; Hacking, The Emergence of Probability.
29 Luhman, Risk: A Sociological Theory, 8.
30 Ibid., 9; Lupton, Risk, 5.

Ewald suggests that maritime insurance in the Middle Ages was premised on the idea that risk "designated the possibility of an objective danger, an act of God, a force majeure, a tempest or other peril of the sea that could not be imputed to wrongful conduct."32 Even though human fault was excluded on this view, we can see that risk nonetheless became closely tied to business and commerce early on. The incentive of avoiding losses, particularly economic losses, led merchants to attempt to understand which elements in the "risk equation" were in fact controllable or predictable. This led to greater reliance on developments in mathematics and statistics, and increasingly accurate bookkeeping and accounting systems. Since most of the early work in mathematics and statistics focused on games and economics, the element of harm in theories of risk was limited to losing games of chance and the loss of money or potential riches.33 For Bernstein, the conception of risk has played a central role in human history.
He claims that "the revolutionary idea that defines the boundary between modern times and the past is the mastery of risk: the notion that the future is more than the whim of the gods and that men and women are not passive before nature."34 Notwithstanding the justification for such a claim, the idea that risk can be 'mastered' is an underlying and enduring motive in the sciences, the field of risk analysis and management, economics, industry, government policy and even in some social and cultural theories of risk.

31 Bernstein suggests that two activities became necessary once the future was no longer attributed to God or random chance: bookkeeping, which allowed new numbering and counting techniques to develop, and forecasting, which associates payoffs with risk taking. Bernstein, Against the Gods, 21.
32 Ewald, "Two Infinities of Risk," 226.
33 There was also the loss of life, but this did not seem to factor significantly into calculations of risk. Sailors made money going off to sea, but the merchants taking the risks with their money and goods were the primary focus. See Luhman, Risk: A Sociological Theory.
34 Bernstein, Against the Gods, 5.

What, then, has been the traditional approach to mastering risk? People have long thought that this was a matter of limiting it as much as possible to what could be quantified, measured and compared. The development of rational, systematic techniques for understanding and characterizing risks has its origins in mathematics and statistics. Such techniques allow us to use knowledge and data from past experience to develop reliable predictions about the future, which is of obvious value when we are faced with potentially harmful events.

2.2.2 The Systematic Applications of Probability Theory

Although limiting risks to what can be measured effectively excluded many of the everyday types of risk that people face and that are not easily quantified, it also led to the procedure of probability analysis.
Modern formalized notions of risk, therefore, have their origins in very narrowly construed conceptions of harm and are closely aligned with the development of methods of probability and statistics. For example, it was a simple game that first led Blaise Pascal to change the way risk was understood: from something one was a victim of to something that could be calculated and predicted. The problem concerns a game of balla:

A and B are playing a fair game of balla. They agree to continue until one has won six rounds. The game actually stops when A has won five and B three. How should the stakes be divided?35

35 Ibid., 43.

The question is about how to divide the stakes in an unfinished game, since there is a greater probability that the player who is ahead when the game stops would have won. Pascal, in collaboration with Fermat, laid the foundations of modern probability theory and provided a procedure for determining the likelihood of each result happening when more things can happen than will happen.36 Bernstein explains that the problem is more significant than it first seems, as it "marked the beginning of a systematic analysis of probability—the measure of our confidence that something is going to happen."37 Thus, if the possible outcomes of an event can be measured mathematically, their likelihood of occurrence can be calculated. By a simple probability analysis, a person can take some control over their fate because they have some information about what is more or less likely to occur in the future. Another significant application of probability analysis that helped shape our current understanding of risk was John Graunt's compilation of one of the first comprehensive accounts of all the births and deaths in London from 1604 to 1662. This was tremendously influential since, for the first time, the general public had access to information about the number and causes of death.
This compilation, which represented the beginnings of risk management, was particularly relevant at the time since the city was experiencing an outbreak of the plague.38 Graunt hoped that having a better understanding of the world and its risks could prevent unnecessary worry:

Whereas many persons live in great fear and apprehension of some of the more formidable and notorious diseases, I shall set down how many died of each: that the respective numbers, being compared with the total 229,520 [the mortality over twenty years], those persons may the better understand the hazard they are in.39

36 For the solution and a more complete account, see Todhunter, A History of the Mathematical Theory of Probability; Bernstein, Against the Gods.
37 Bernstein, Against the Gods, 43.
38 Ibid., 78.
39 Ibid., 82.

Collecting birth and death statistics was an attempt to empower people, or at least to provide them with knowledge, about their risk of death by different causes. For instance, in 1603, one of the worst years of the plague, Graunt calculated that 82% of all burials were plague-related. However, after realizing that it was important to have an account of the total population and to track the different causes of death, he noted that there were equal numbers of plague and non-plague related burials. Without such an assessment, it was possible to assume that the risk of plague was higher than it actually was, and thus cause unnecessary worry. Graunt attempted to reassure people about the actual probabilities of death and disease, hoping that their concerns would be allayed if they had accurate information.40 Communicating risks in terms of probabilities was seen to be an efficient and accurate means for explaining and characterizing various uncertain harms and dangers, and it soon became the standard for public officials, academics and theorists. It was at this point that a major change in our understanding of risk and probability became evident.
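For concreteness, the division of stakes that Pascal and Fermat arrived at for the balla problem quoted earlier can be reconstructed in modern notation (this is the standard modern reconstruction, not their own presentation). When the game stops, A needs one more win and B needs three, so at most three further fair rounds would settle the match:

```latex
% B takes the stakes only by winning all three remaining rounds:
P(\text{B wins}) = \left(\tfrac{1}{2}\right)^{3} = \tfrac{1}{8},
\qquad
P(\text{A wins}) = 1 - \tfrac{1}{8} = \tfrac{7}{8}
% The stakes should therefore be divided 7:1 in A's favour.
```

The decisive move is to divide the stakes in proportion to each player's chance of eventually winning the game, rather than in proportion to the score at the moment play is interrupted.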
The medieval view was that the likelihood of events was a matter of opinion informed by the testimony of an approved authority. After the seventeenth century, the same view prevailed except that the data collected from investigations served as the authority.41

40 Ibid., 80-83.
41 Ibid., 44.

2.2.3 The Importance of Hope and Fear

It was therefore the analysis of games, of discrete events linked to maritime insurance, and of statistical bookkeeping items such as births and deaths, that laid the foundations of modern probability theory. Such analysis provided a procedure for determining the likelihood of each result happening when there is the possibility that more things can happen than will happen.42 But in spite of the apparently objective and mathematical nature of these early developments, it was recognized that the fears and hopes people might have while playing games or making decisions about uncertain events were difficult to account for and to quantify.43 While quantification became entrenched in the tradition and formalized application of risk, there were always some theorists who recognized that probability was more than just a measure of facts about the world, and could also depend on people's hopes and fears. Pascal, observing that the fear of being struck by lightning is high although the probability of it is low, asserts that "[f]ear of harm ought to be proportional not merely to the gravity of the harm, but also to the probability of the event."44 The notion that the gravity of a harm and its probability of occurring combine to influence our decisions suggests that the mathematicians themselves sometimes made the distinction between probability and risk.
Huygens also expresses what rational choice theorists now refer to as positive and negative expectations as "hope" and "fear" in his 1657 textbook on probability theory.45 This led to the important idea that people's hopes and fears, though clearly not measurable or quantifiable, ought to be taken into account. The seventeenth century had seen the application of probability analysis to raw data. After this time, however, using such information in a practical way became more nuanced. Like Huygens, many others began to see decisions involving risk as having two components: the objective facts and a subjective interpretation of how desirable or undesirable the outcomes might be.46

42 See Todhunter, A History of the Mathematical Theory of Probability; Bernstein, Against the Gods.
43 Bernstein, Against the Gods, 67. Pascal and Fermat use probability theory to determine the most just and fair distribution, a morally-laden distribution, rather than merely splitting the winnings between the two players.
44 Hacking, The Emergence of Probability, 77.
45 Ibid., 11.

2.2.4 The Notion of Utility

The modern application of probability theory thus has its genesis in the eighteenth century. Probability theory is closely tied to risk, since it was thought that risks could be mitigated or controlled by understanding their probabilities. Jacob Bernoulli, a seventeenth-century Swiss mathematician who was more interested in real-world problems than in hypothetical games, first made the connection between probability and the quality of information available.47 Bernoulli thought that in most instances we have incomplete information, and that the estimation of probabilities occurs after an act or event, indicating the need for experimentation and changing degrees of belief. In real-life situations, we have to estimate probabilities based on relatively small samples (and yet treat these as universal).
Only in rare cases does life replicate games of chance where we know probabilities a priori. For the most part, we are left to estimate probabilities from what occurs after the fact—a posteriori. In 1738, Daniel Bernoulli (Jacob's nephew) introduced the notion of utility for measuring risk, in one of the first attempts to measure something that cannot be quantified.48 Ultimately, he was unconvinced that mathematicians could adequately describe how people actually go about making decisions because their approach dealt with facts alone. Bernoulli disagreed with the generally accepted view that "Expected values are computed by multiplying each possible gain by the number of ways in which it can occur, and then dividing the sum of these products by the total number of possible cases."49

46 Ibid., 44, 124.
47 David, Games, Gods and Gambling.
48 Bernstein, Against the Gods, 106.

Bernoulli thought it was a mistake to assume that the circumstances of the person making a decision involving risk did not matter, and so it seemed unlikely that people would evaluate the consequences of a risky decision in the same way. He recognized that even if the facts are the same for everyone affected, the utility (desirability, usefulness, satisfaction, etc.) of the outcomes or the situation is not. In a game of chance or a lottery, someone who is poor cannot use the same rule to evaluate the possible outcomes as someone who is wealthy, since "it is highly probable that any increase in wealth, no matter how insignificant, will always result in an increase in the utility which is inversely proportionate to the quantity of goods already possessed".50 This was a significant shift away from the mathematical games that theories of probability (and risk) had been limited to.
Bernoulli's main point was that rational people do not try to maximize expected value but rather expected utility, which is calculated in the same way as expected value but using satisfaction, usefulness, etc., as the weighting factor.51 Bernoulli argued that when people are accused of overestimating the probability of something harmful, like being struck by lightning for instance, they are not actually making a mistake. Instead, they are making an evaluation about the severity of the consequences: the more severe these are, the more fearful people will be of them, even if the likelihood of their occurrence is very small.52 Bernoulli's explanation of expected utility would be very important for further developments in probability theory and risk.

49 Bernoulli, "Exposition of a New Theory on the Measurement of Risk," 23.
50 Ibid., 25.
51 The essential principle for measuring the value of a risky situation or decision is expressed in the following: "Any gain must be added to the fortune previously possessed, then this sum must be raised to the power given by the number of possible ways in which the gain may be obtained; these terms should then be multiplied together. Then of this product a root must be extracted the degree of which is given by the number of all possible cases, and finally the value of the initial possessions must be subtracted therefrom [sic]; what then remains indicates the value of the risky proposition in question." Bernoulli, "Exposition of a New Theory on the Measurement of Risk," 28.

Bernoulli recognized that in real-world situations, the valuation of money was not, in fact, an objective measure. His expected utility theory (EUT), employing weighted values, assumed that the utility of an outcome is dependent on the circumstances of the person. The more money a person has, and the better off his circumstances are, the less utility a constant increment in wealth has to him.
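Bernoulli's contrast between expected value and expected utility can be made concrete with a simple worked example (the figures here are hypothetical; the logarithmic utility function is the one Bernoulli himself proposed). Consider a fair gamble that wins or loses 1,000 ducats with equal probability:

```latex
% Expected value is zero for every player, whatever their wealth w:
E[V] = \tfrac{1}{2}(+1000) + \tfrac{1}{2}(-1000) = 0
% Expected utility with u(w) = \ln w depends on circumstances.
% For a player with wealth w = 2000:
E[U] = \tfrac{1}{2}\ln 3000 + \tfrac{1}{2}\ln 1000 \approx 7.457 \;<\; \ln 2000 \approx 7.601
% For a player with wealth w = 100000:
E[U] = \tfrac{1}{2}\ln 101000 + \tfrac{1}{2}\ln 99000 \approx 11.51288 \;<\; \ln 100000 \approx 11.51293
```

The facts are the same for both players, but the poorer player loses substantial expected utility by accepting the fair gamble, while the wealthy player is very nearly indifferent. This is precisely the sense in which a constant increment in wealth has less utility the more one already possesses.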
EUT was subsequently used as the standard strategy for making decisions under risk, and it underwent further developments in the twentieth century. In the mid-twentieth century, von Neumann and Morgenstern laid the rational foundation for the introduction of subjective probabilities by Savage in his theory of Subjective Expected Utility (SEU).53 These theories have been used by decision theorists to establish that the utility of a choice to a decision maker operating under the context of risk is a function of the relative values of the potential outcomes of that choice. In other words, "a utility value may be ascribed to each possible consequence in such a way that preferences are fully represented by the expected utility ordering of these acts. Since this representation is complete, these utilities must reflect all attitudes to risk held by the decision maker, and no further trade-off is required".54

52 Bernstein, Against the Gods, 105. Arrow argues that we do not find much evidence of Bernoulli's work in developments in utility theory for two reasons: he wrote in Latin (and his essay on risk was first translated into German in 1896) and he dealt with utility exclusively in terms of numbers. Nonetheless, the eighteenth century highlights the way risk became more prevalent among many thinkers (Arrow, Theory of Risk Bearing).
53 See von Neumann and Morgenstern, The Theory of Games and Economic Behaviour; Savage, The Foundations of Statistics. SEU combines two subjective concepts, individual utility and an individual probability analysis based on Bayesian probability theory. The history of expected utility theory is vast and much of it covers both uncertainty and risk. Here my purpose is to outline the theories as they relate to descriptive and normative risk. I will not provide an analysis of the theory, nor will I comment on its development over time.
54 Davies, "Rethinking Risk Attitude," 160.

A person's preferences and
attitudes toward risk are taken into account, since expected utilities are calculated by summing the utility values of outcomes weighted by their respective probabilities. This approach continues to be accepted as a standard of rationality today, and some proponents suggest that violations of its basic rule are indicative of irrationality.55 When there is no objective measure of the value of various outcomes, but merely subjective evaluation of the different possible outcomes, appealing to the SEU is thought to be one way of resolving this problem.56 Proponents of these two methods claim that preferences and decisions between risky options are influenced by people’s attitudes toward risk, which in turn influence the assignment of both probabilities and utilities.57 The utilities in EUT and SEU include not just attitudes to risk, but also psychological reactions to potential outcomes, social influences and strength of preferences for the outcomes. SEU anticipates that a person’s subjective probabilities may differ from their stated probabilities, and both theories acknowledge that decisions made under risk are not adequately captured by seemingly objective measures of value. The development of these theories would have an enduring influence on modern conceptions of risk in the twentieth century.  2.2.5 Risk in the Twentieth Century The use of probability to measure and interpret uncertainty has become common in our modern world, where it has proven to be a very powerful and useful tool. In the twentieth century, applications of probability theory to instances of risk became prevalent in various disciplines such as economics, science and business. Probability theory was  55 Thompson and Dean, “Competing Conceptions of Risk,” 364. 56 Feldman, “Actual Utility,” 50. 57 See Pratt, “Risk Aversion in the Small and in the Large”; Arrow, Theory of Risk Bearing.
used in analyzing social data and eventually changed the way people thought about risks in general.58 It was easier for everyone to think of risk as an objectively measurable quantity rather than as anything evaluative, and this is generally what happened. However, it sometimes seemed that probability theory was threatened when its application to a real-world problem or concept failed.59 This is, of course, not surprising; there are times when we may find ourselves having to make decisions when we simply have too little information to fully understand possible risks, or to apply the laws of probability. As a result, some scholars thought there was a distinction between probability as the quantitative measurement of events and risk as something more ambiguously understood as a combination of probability and subjective evaluation of events (although for others, no such distinction was thought necessary). For example, Arrow, an economist working in the mid-twentieth century, found that people overestimate the amount of information they have at their disposal. Arrow points out that an analysis of probabilities tells us that gambling and paying premiums to an insurance company are both losing propositions, yet the majority of people engage in both activities routinely. He argues that most of us consider it acceptable to take a chance with a high probability of losing a small amount of money for the chance of winning big. Lotteries, for instance, are very popular even though winning the million-dollar prize is highly unlikely, while losing the two dollars paid for the ticket is highly probable. Similarly, most people think it is acceptable (if not necessary) to buy insurance for their homes, even though there is a low probability of a large gain (compensation for the cost of your house and belongings should they be lost in a fire). Both of these  58 Gigerenzer, The Empire of Chance, xiv. 59 Stigler, The History of Statistics, 361.
situations result in a definite loss of money with a very small chance of some gain based on statistical odds.60 Arrow concluded that we gamble because it is more about entertainment than about risk, and we buy insurance because we cannot afford to take the risk of losing our home if something catastrophic, like a fire, were to occur. Arrow’s later work began to shift focus away from mathematics and economics, and towards understanding how people’s decisions are influenced by a lack of certainty. How people distinguish between acceptable and unacceptable risk, how they make decisions about what sort of risks they are willing to take, and how they come to live with these decisions, are questions that became the focus for the field of risk management.61 The British economist John Maynard Keynes and his contemporary, Frank Knight, attempted to make the distinction between uncertainty and risk clear. Bernstein claims that both Knight and Keynes were instrumental in the development of our modern understanding of risk.62 Knight, for instance, claimed that a risk is a measurable uncertainty, but since it is so different in nature from immeasurable uncertainties, it should not be considered uncertain at all.63 As long as it is possible to make estimations about the likelihood of an event’s occurrence, then something is known about it and, on Knight’s view, it should not be considered uncertain. Keynes developed the idea much further, but his emphasis was on the distinction between what we can define and what we cannot. He argued that probability theory is flawed when applied to the real world, as it requires looking to the past for information on how to make decisions about the future, which is ultimately problematic. According to Keynes, predictions ought to be based on  60 Arrow, Theory of Risk-Bearing. 61 Bernstein, Against the Gods, 206.
Bernstein claims that Arrow is considered the father of our modern day understanding of risk management as both a field of study and as a practical art. 62 Ibid., 217. 63 Knight, Risk, Uncertainty and Profit, 205.  propositions, or a priori probabilities, since there is no way of knowing the objective probability of a future event with certainty.64 Keynes therefore advocated a more practical approach to uncertainty and risk. He concluded that probability theory cannot be applied to real-life events because it provides us with estimations of their likelihood rather than certain knowledge; probability was therefore best understood as degrees of belief about the future. On his view, the laws of probability allow us to make better predictions or estimations of events but do not necessarily tell us how to act. The result of a coin toss might mean a win or a loss, but it does not lock us into a certain set of behaviours or a course of action.65 Even though it was recognized early on that there was a subjective component to risk (evaluations of harm or levels of satisfaction with outcomes), the methods of statistics and probability proved to be powerful tools that were seen as transforming risk-taking into a rational process, and allowed theorists to describe risky situations as a set of probabilistically determined outcomes based on the underlying facts in the world. Quantifying risks was assumed to empower people to make decisions, as it gave them information about how likely an event was to occur. It allowed one to exert greater control over one’s future because it allowed one to choose more rationally under conditions of uncertainty.  64 Bernstein, Against the Gods, 223-230. 65 At about the same time that Keynes and Knight were rethinking uncertainty, the Hungarian-American mathematician John von Neumann was proposing what would later become known as game theory.
He argued that the only rational option in a game of chance with another person was not to guess what their intentions were but rather to avoid revealing one’s own intentions. In other words, one’s strategy needs to change from trying to win to trying to avoid losing. Bernstein explains that “[i]t is not the laws of probability that decree the 50-50 payoff in this game. Rather, it is the players themselves who cause that result.” (Bernstein, Against the Gods, 234). Von Neumann and Austrian economist Oskar Morgenstern’s work was based on the idea that people were fundamentally rational and always had a full understanding of their preferences that they applied consistently in every instance. However, psychologists studying risk and uncertainty later in the century challenged such an optimistic picture of the average person.  Attempts to account for the subjective or evaluative aspect of risk occur both in calculations of expected utility, and in the distinction between risk analyses and risk perception. This leads to many different understandings of the term “risk”, and to much ambiguity as well. In part, this is because a number of different interests are served in estimating risks, an enterprise that began primarily with marine traders and mathematicians but continues in the fields of business, economics, science and engineering, industry, government policy, medicine, and insurance.  2.3 Risk Today  The analysis of probability has transformed our understanding of nature and our ability to control aspects of the world that previously seemed to be matters of fate.
Such a transformation has, in turn, had far-reaching effects on the structure of modern bureaucracies and science-based knowledge.66 Despite the development of the mathematical and statistical tools to predict the likelihood of harmful events and their applications in economic theory and the insurance industry, it was not until the middle of the twentieth century that any actual systematic assessment of harm was performed on potentially dangerous technology.67 Renn explains that …risk has always been a part of human existence and the field of risk research started as early as human beings started to reflect on the possibility of their own death and contemplated actions to avoid dangerous situations…However, a systematic scientific attempt to study risks in society and to professionalize risk management agencies is a rather more recent addition.68  66 Gigerenzer, The Empire of Chance, xiv. 67 Renn, “Three Decades of Risk Research,” 50. Renn notes that there is some debate concerning when the first assessment was performed, ranging from the space exploration programs in the 1950s, to Chauncey Starr’s defining article about the voluntariness of risk in 1968, to the studies of chemical and nuclear plants in 1983. As a result he chooses to characterize the time frame simply as after World War II. 68 Ibid.  The field of risk analysis takes a technical perspective of risk, and represents current efforts to apply the methods of probability theory to the real-world hazards that affect the health and safety of society.
It has been subdivided into risk assessment, the process for determining the likelihood and extent of harm caused by a hazard, and risk management, where public or private policy decisions are informed both by risk (determined from risk assessments) and by economic, political, legal, ethical and other considerations.69 The science of quantitative risk assessment attempts to characterize the probability of some unwanted and potentially harmful event, such as a nuclear power accident or a chemical spill, through statistical analysis. There are three standard processes involved: risk identification, estimation and evaluation.70 A variety of methods are used to identify a risk, such as epidemiological or toxicological studies, but identification always relies on statistical analysis. Schrader-Frechette explains the generally accepted practice in risk assessment where “risk” is defined as a compound measure of the perceived probability and magnitude of adverse effect. For example, one might say that in a given year, each American runs a risk, on the average, of about one in 4,000 of dying in an automobile accident. Assessors most often express their measures of risk in terms of annual probability of fatality for an individual.71 Such a definition treats the concept of risk as descriptive of the world, in that to assess a “risk” is to state the objective likelihood of certain sorts of significant events of specified magnitudes that could happen to people in a specified span of time, and/or under specified conditions. A quantitative characterization of risk is carried out in a number of different ways using a variety of methodologies, the most prominent of which are risk-cost-benefit  69 Glickman and Gough, Readings in Risk, xi. 70 Schrader-Frechette, Risk Analysis and Scientific Method, 18. 71 Ibid.
RCBA is the procedure that allows risk assessment to address the complexity of societal decision-making by taking into account different viewpoints, allowing for discussion, and providing a well-established basis for argument or agreement.72 Through a series of steps, RCBA defines the risk problem, describes the relationships among the various courses of actions and their consequences, assigns a common unit to the risk decisions (typically the common unit is money), and then calculates a single numerical value for each of the alternatives which is representative of the difference between the benefits and the risks and costs. Once the risk for a given technology or product has been assessed in the way described, the results are used to make decisions on how to manage such risks in society and in the environment, which ultimately have far-reaching and widespread effects. Risk assessment, then, plays a crucial role in determining which risks society must face. Risk analysis is thus divided into two separate fields: risk assessment and risk management. Assessors are assigned the task of quantifying the degree of risk that society faces, and managers attempt to use this information to make recommendations to policy makers and officials about what levels of risk are acceptable for society by weighing different costs and benefits. However, with an increased call for more public involvement in making decisions that will affect their lives, it was found that people often disagree with the ‘experts’’ judgments about what counts as acceptable levels of risk. This has lead to the development of a third field, risk perception, which attempts to understand how the chance of harm influences people. 73  72  Ibid., 34 See Douglas, Risk and Blame; Beck, Risk Society; Van Loon, Risk and Technological Culture; Taylor-Gooby and Zinn, Risk in Social Science. Some of the research from this field is discussed in Chapter 5. 
Since risk assessments are science-based, they are not easily or widely accessible to the general public. As a result, questions about which risks are worth measuring are not routinely open to public discussions. The long-held view that such risks are quantifiable is attributable to the concept’s emergence as a tool for economists, scientists and gamblers, but it is clear that this purely objective and descriptive explanation was untenable quite early on. The division between risk assessment and risk perception as fields of study is indicative of the problems inherent in attempts to separate the evaluative component of risk from its quantitative component. Further research into risk perception has revealed that people often make ‘mistakes’ about risks that are not anticipated by the experts, or which seem irrational when the facts of the situation are revealed to us. Graunt attempted to show this with his analysis of the risks of dying of the plague. However, it is not clear that such data was effective in allaying the fears of many in the general population.74 In general, technical perspectives of risk are limited to anticipating potential physical harm to humans, ecosystems or cultural artifacts through measures or estimates of the probability of occurrence.75 The data produced by risk analyses has been very effective in helping to inform critical regulatory decisions, policies and legal standards about various sources of risk that can affect all members of society. Considerations about what constitutes an acceptable or tolerable level of risk are informed, but not determined, by risk analysis. There is a separate field of risk management to tackle such evaluative concerns, which are thought to play little or no part in determining risk.76  74 Not only is the definition of risk riddled with ambiguity, but the research involving risk is also vague on certain levels.
Risk analysis, assessment, characterization, management, perception and communication are the primary classifications for investigations about risk, yet it is difficult to make sense of these distinctions in a consistent manner. 75 Renn, “Three Decades of Risk Research,” 53. 76 Rodricks and Taylor, “Application of Risk Assessment to Food Safety Decision Making,” 148.  2.3.1 Current Accounts of Risk Many of the technical definitions of risk used in risk assessment and analysis have specialized meanings. In general, the definition of risk used in investigations or research will reflect the conventions and interests of a particular discipline, resulting in much variation.77 Typically, technical meanings have to do with measuring the magnitude and severity of harm associated with some technology or process. Hansson summarizes what he takes to be the most common technical meanings as follows:78 1. An unwanted event which may or may not occur. “Lung cancer is one of the major risks that affect smokers.” 2. The cause of an unwanted event which may or may not occur. “Smoking …is by far the most important health risk in industrialized countries.” 3. The probability of an unwanted event which may or may not occur. “There is evidence that the risk of having one’s life shortened by smoking is as high as 50%.” 4. The statistical expectation value of an unwanted event which may or may not occur. “The total risk from smoking is higher than that from any other cause that has been analyzed by risk analysts.” 5.
The fact that a decision is made under conditions of known probabilities (“decision under risk” as opposed to “decision under uncertainty”).79 “The probabilities of various smoking-related diseases are so well-known that a decision whether or not to smoke can be classified as a decision under risk.” Some of these technical definitions (in particular 3, 4 and 5) have evolved as effective means of quantifying the likelihood of hazards and sources of danger, which provide necessary statistical information for decision-making, policy formation and further research. In professional risk analysis, the fourth example is the standard meaning where  77 Fischhoff, Watson and Hope, “Defining Risk,” 30. 78 Hansson, Philosophical Perspectives on Risk, 1-2. All examples are also taken from Hansson. 79 Ibid., 10.  a risk is the product of the probability of an unwanted event and a measure of its severity, while engineers use both the third and fourth meanings.80 When understood this way, risk is something measurable and can provide information about the world that is grounded in facts that can be verified or disputed. For instance, the severity of the harm of smoking can be measured by deaths, injuries, or the costs associated with treating illness. If we are interested in deaths, then the risk will be expressed as a calculated number of deaths, which will obviously differ from the number of injuries. Despite the variation in meaning, it is clear that technical definitions mean to describe events in the world in terms of their probability of occurrence. This information is then used to make judgments and comparisons about these events in many different contexts.
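The fourth, expectation-value sense can be made concrete with a minimal sketch. It reuses the one-in-4,000 automobile figure quoted earlier; the monetized severity figure is purely hypothetical:

```python
def risk(probability, severity):
    # Technical meaning 4: risk as a statistical expectation value,
    # the probability of the unwanted event times a measure of its severity.
    return probability * severity

# Annual probability of dying in an automobile accident of about
# 1 in 4,000, with severity counted simply as one death:
fatality_risk = risk(1 / 4000, 1.0)

# The same probability combined with a different severity measure
# (a hypothetical monetized cost of $5,000,000 per death) yields a
# different number, which is why the choice of harm measure matters:
monetized_risk = risk(1 / 4000, 5_000_000)

assert abs(fatality_risk - 0.00025) < 1e-15
assert monetized_risk == 1250.0
```

The calculation itself is trivial; the point of the sketch is that the severity argument is where the evaluative choice (deaths, injuries, or costs) enters.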
Statistical characterizations of risk dominate much of the technical literature, and are also frequently used to communicate with lay people about many different sources of harm including health, environmental, technological and economic issues.81 The result of this, however, is a rather narrow conception of risk where, as Hansson argues, …all the major variants of technological risk analysis are based on one and the same formal model of risk, namely the objectivist expected utility, that combines objectivist probabilities with objectivist utilities. By an objectivist probability is meant a probability that is interpreted as an objective frequency or propensity, and thus not (merely) as a degree of belief. Similarly, a utility assignment is objectivist if it is interpreted as (a linear function of) some objective quantity. It is often taken for granted that this sense of risk is the only one that we need.82 When “risk” conveys more than mere descriptions of probabilities, it is usually attributed to people’s perceptions of risk rather than seen as a feature of risk. Thus, on the technical view, there is a distinction between objective risk (the product of scientific research) and  80 Ibid., 2. 81 Glickman and Gough, Readings in Risk. 82 Hansson, Philosophical Perspectives on Risk, 2.  subjective risk (non-expert perceptions of that research), which is important to make clear. For example, in their article “Defining Risk” Fischhoff and his colleagues explain: Technical experts often distinguish between “objective” and “subjective” risk. The former refers to the product of scientific research, primarily public health statistics, experimental studies, epidemiological surveys, and probabilistic risk analyses.
The latter refers to non-expert perceptions of that research, embellished by whatever other considerations seize the public mind.83 There have been challenges to this characterization from the social sciences, but there are also concerns from within science about just how objective risks really are, given that they often involve judgments about what is harmful.  2.4 Alternative Accounts of Risk  The foundation of current technical accounts and conceptions of risk has been shaped by the powerful probabilistic and mathematical tools developed to predict the likelihood of events. However, these technical accounts and conceptions of risk have also been subject to criticism for failing to take other considerations into account, particularly from the social sciences. While risk analysts and scientists continue to assess and analyze risk quantitatively, the social sciences use risk to make claims about people and their interactions with each other and institutions in modern society.84 Smith reports that there is much debate concerning the nature of risk where “[f]or scientists…risk is measurable and therefore can be used for the construction of predictive models concerning the likelihood of specific events to occur…For the critics, risk is subjective and intersubjective, and consequently the notion of objective risk assessment is flawed.”85 Alternative accounts attempt to provide concepts of risk which include more of the  83 Fischhoff, Watson and Hope, “Defining Risk,” 30. 84 Kasperson et al., “The Social Amplification of Risk.” 85 Smith, “Mad Cows and Mad Money,” 312.  subjective considerations, such as social context and meaning, since such considerations are often omitted from more technical approaches. At present, research in this area is roughly divided into three major categories: economic, psychological and social/cultural.
2.4.1 Economic Accounts Not surprisingly, current economic accounts of risk are informed by the developments in probability theory over the last few centuries. Closely related to technical concepts of risk, economic accounts differ in that possible harms or unwanted outcomes are expressed as utilities, meaning the degree of satisfaction or dissatisfaction associated with some action or event.86 It is assumed that individuals act according to the principle of utility, so that an activity or situation involving risk results in weighing the costs (negative utilities) against possible benefits (positive utilities). Gross and Rayner explain that on this view “individuals decide to take a risk by first weighing its potential costs and benefits and then opting for the course of action that they think will maximize the advantages that will accrue.”87 The criteria for economic accounts of risk are subjective evaluations of satisfaction, and thus seem to improve on the narrowness of technical conceptions, where undesirable effects are assumed rather than related to what people think. Economic theories of risk are nonetheless also problematic. In order to compare one risk with another, an objective measure is required; for economists this measure is the amount of money a person is willing to pay for a higher degree of utility. In general, economists believe that  86 Renn, “Three Decades of Risk Research,” 55. 87 Gross and Rayner, Measuring Culture, 67.
Values are thought to be incommensurable however since there is no way to express the relative value of one’s health, for example, compared to an increase in one’s income.89  2.4.2 Psychological Accounts While economic accounts attempt to standardize social values into monetary ones, psychological (or psychometric) accounts attempt to explain why people worry about some risks more than others, and why expected values do not always inform people’s judgments about risk.90 Using an experimental approach has taken questions about decision making in risky situations from mere theory and game-playing to real-world situations that are thought to more accurately gauge people’s behaviour when faced with risk.91 The results of such research have inevitably led to even more questions about how people perceive risk and uncertainty, how framing such situations changes people’s responses, and how presenting information in different ways affects decision making.  88  For example, suppose you have a choice between buying two cars that are identical in every way except that one car has enhanced safety features and is significantly more expensive. If you choose to buy the more expensive car, it suggests that you value the car’s safety features. 89 Baram, “Cost-Benefit Analysis.” There are other objections to the economic theory of risk such a the difficulty in assigning a single social welfare utility when decisions involve an entire society (Schrader-Frechette, Risk and Rationality); or the problem inherent in determining how to discount consequences that do not occur until months or years after a decision has been made (Hyman and Stiftel, Combining Facts and Values in Environmental Impact Assessment). 90 Lopes, “Some Thoughts on the Psychological Concept of Risk.” 91 Munier and Machina, “Introduction,” viii.  
For example, risk evaluation is the process of determining the acceptability of a certain risk through various methods, including cost-benefit analysis, revealed preferences and expressed preferences.92 In the expressed preferences method, people are asked through surveys or focus groups to rank the acceptability of different types of risks (defined as the probability of death) and the results are analyzed. Slovic and his colleagues conducted an investigation into what people mean when they say something is risky using this method.93 Analysis of the answers led the researchers to …reject the idea that lay people wanted to equate risk with annual fatality estimates but were inaccurate in doing so. Instead, we are led to believe that lay people incorporate other considerations besides annual fatalities into their concept of risk. Such results were surprising to analysts and scientists, as they had tended to assume that the narrow way in which they defined risk was uncontroversial. Further research into risk perception has thus revealed that people often make ‘mistakes’ about risks, meaning that they worry about things with a low likelihood of occurrence, such as nuclear accidents, that are not anticipated by the experts or which are considered irrational when the facts (meaning the calculated probabilities of death or harm) of the situation are revealed. It has since been found that the statistical data produced by risk analysis measuring the probability and magnitude of harmful events is often ineffective in allaying the fears of many in the general population, and that the risks under consideration are not always the ones that matter to people or what people understand them to be. Further investigations have revealed identifiable patterns of probabilistic reasoning which seem to be useful for everyday situations. For instance, Kahneman and Tversky  92 Rasmussen, “The Application of Probabilistic Risk Assessment Techniques to Energy Technology,” 124; Schrader-Frechette, Risk Analysis and Scientific Method, 15-29. 93 See Chapter 1.
For instance, Kahneman and Tversky  92  Rasmussen, “The Application of Probabilistic Risk Assessment Techniques to Energy Technology,” 124; SchraderFrechette, Risk Analysis and Scientific Method, 15-29. 93 See Chapter 1.  42  demonstrated that if potential losses are high, people are risk averse and if potential gains are high, they are risk seeking.94 Other studies have shown, however, that a number of biases and heuristics are used when drawing inferences about probabilities that influence risk perceptions. For example, the amount of dread, familiarity, or frequency of occurrence associated with a particular event will play a role in whether it is considered more or less risky.95 A well-known study by Baron has identified what he calls the omission bias where people think causing harm is worse than not preventing it.96 Psychological accounts of risk thus challenge both economic and technical assumptions by demonstrating that there are a number of different meanings of risk that integrate both contextual and evaluative considerations.97  2.4.3 Social and Cultural Accounts Social and cultural accounts of risk stress the importance of human relations, and take the view that undesirable events are socially defined. It is typically suggested in such accounts, that methods of risk evaluations are biased to the extent that they can at times be based on social or cultural ideologies. This contributes to the fact that more technical knowledge about risk does not make people more rational about risk.98 The role of knowledge or reason is usually discounted on such views in favour of social interpretation and group values. 
Social scientists like Beck and Giddens argue that risk is more complex than mere descriptions of the chance of danger, or people’s assessments of possible unwanted  94 Kahneman and Tversky, “Prospect Theory.” 95 Baron, Morality and Rational Choice; Baron, Judgment Misguided; Redelmeier and Shafir, “Medical Decision Making”; Bromley and Curley, “Individual Differences in Risk Taking,” 1992. 96 Baron, “Tradeoffs Among Reasons for Action.” 97 I will discuss some of these psychological theories in more detail in Chapter 5. 98 Douglas and Wildavsky, Risk and Culture, 63-64, 71; Sumner, Folkways, 39.  events.99 Instead, at issue is the way that people “interpret risk, negotiate risk, and live with the unforeseen consequences of how modernity will structure our culture, society and politics.”100 Cultural theorists like Douglas, meanwhile, charge that any “risk analysis that tries to exclude moral ideas and politics from its calculations is putting professional integrity before sense.”101 She claims that people reveal their cultural biases, psychological traits and institutional affiliations when they identify risks in their environments. For Douglas, risk is in fact a collective construct where beliefs about purity and danger play a central role.102 Social and cultural accounts do not attempt to understand risk to improve how it is assessed or managed.
Instead, they aim to improve our understanding of social and cultural theory, as well as the relationships of individuals to various institutions involved in producing and managing risks and the organization of society.103 Since many of the policies that affect members of a particular society are informed by risk analysis, issues of fairness and justice must be addressed along with quantitative measures.104 It is also important to recognize that the ways in which various cultural or social groups interpret what is harmful can and often do differ, so the assumptions made in technical risk assessment may not reflect these views. Renn explains that “the selection of physical harm as the basic indicator for risk may seem irrelevant for a culture in which violations of religious beliefs are perceived as the main risks in society.”105 Finally, Douglas and Wildavsky make the specific claim that statements about risk often reflect social structure  99 Beck, Risk Society; Giddens, The Consequences of Modernity. 100 Franklin, “Introduction,” 1. 101 Douglas, Risk and Blame, 44. 102 Douglas, Risk and Blame; Douglas and Wildavsky, Risk and Culture. 103 Garland, The Rise of Risk, 70. 104 Wynne, “Public Perceptions of Risk.” 105 Renn, “Three Decades of Risk Research,” 62.  and the status of that person’s place within their society. For example, the Hima in Africa believe that women should not come into contact with cattle because this is considered risky.
While this does not represent an actual risk in the world, it contributes to the organization of Hima society.106

2.4 A Broader Debate about Values and Risk

There is much debate concerning the nature of risk since "[f]or scientists, both natural and social, risk is measurable and therefore can be used for the construction of predictive models concerning the likelihood of specific events to occur…For the critics, risk is subjective, and intersubjective, and consequently the notion of objective risk assessment is flawed."107 The development of these varied accounts has resulted in a broader debate in the risk literature concerning the role that people's values have to play in the identification and assessment of risk. Policies about what kinds of risks are acceptable to society are informed in part by recommendations from technical risk analyses, but some argue that this results in a very narrow understanding of what counts as a risk and which issues matter. For instance, does it matter whether someone takes a risk which involves harm only to themselves, or whether their actions or choices may lead to harm for others? I will provide a very brief outline of this discussion since it is an important part of current investigations into risk, and provides further background for the relevance of values to risk. Generally speaking, technical analyses assume that risk is knowable and measurable, and are for this reason given a positivist label. They define risk in terms of objectively construed probabilities and, as a result, risk is considered to be entirely value-free. However, others claim that risk is subjective because values are intrinsic to judgments about what constitutes a risk or risky event. Moderate views attempt to reconcile these two perspectives by suggesting that value judgments play a role in risk evaluation, while holding that many risks, once identified, are measurable and real.

106 Ibid., 40-48.
107 Smith, "Mad Cows and Mad Money," 312.
For proponents of the technical or positivist view, risk is thought to be simply a measure of the likelihood that an event will occur.108 Value judgments are therefore not taken into consideration and it is thought that mathematical criteria (i.e. the assignment of probabilities) are sufficient for the evaluation of risks.109 This approach has been attributed to a number of people for whom science is taken to be value-free, such that risk estimation must also exclude judgments of value.110 For these individuals, risks refer to the objective circumstances of the physical world, and we should try to avoid introducing values (although they do not completely discount these values) into science and risk evaluations. On this view, people who fail to understand risks or who act in a way that seems to be unwarranted by statistical data (for example, who are overly fearful of statistically low risks or not concerned enough about risks that are more likely) do so because their perceptions of probabilities are deeply flawed. Interestingly, while risk was, and still is, considered objective by some, there is doubt over the objectivity of probability itself. As Hacking explains, there are different conceptions of probability, although it remains unclear which one to employ:

Philosophically minded students of probability nimbly skip among these different ideas, and take pains to say which probability concept they are employing at the moment. The vast majority of the practitioners of probability do no such thing. They go on talking of probability, doing their statistics and their decision theory oblivious to all this accumulated subtlety…Most people who in the course of their work use probability, pay no attention to the distinction. Extremists of one school or another argue vigorously that the distinction is a sham, for there is only one kind of probability.111

Critics, however, suggest that the central notion of neutrality that underpins the objectivity required for this view is problematic, and can be attributed to a failure to distinguish between value types. Longino, for instance, proposes three types of values that occur in any scientific endeavour and cannot all be successfully avoided.

108 Despite this view, there is evidence to suggest that even early practitioners were aware that subjective evaluations played a role in risk, although it was uncertain what their function was exactly.
109 Shrader-Frechette, Risk and Rationality, 30. Thompson and Dean argue that Shrader-Frechette unfairly characterizes Starr as the exemplar of positivist risk estimation and that, in fact, no one could really hold such an extreme view. Starr focused his work on the differences between voluntary and involuntary risks, which he suggests provide the reason behind disparities in the rate of acceptance of new technologies, disparities which themselves reveal social preferences. In this way, values do have a role for Starr but analytical approaches should nonetheless be favoured (Thompson and Dean, "Competing Conceptions of Risk").
110 Most notably Starr and Whipple.
She suggests that bias values, contextual values, and constitutive values work together to make the notion of value-free science (and therefore technical risk analyses as well) impossible.112 In contrast to the technical perspective, proponents of more relativistic views argue that values are the necessary condition of risk and that actual probabilities or chance do not play a role at all in some instances.113 Some go so far as to argue that methods of risk evaluation are biased since they can at times be based on personal ideologies, and this means that more knowledge about risk does not make people more rational about risks.114 The role of knowledge or reason is usually discounted on such views, and one evaluation of risk is seen as being as good as another.115 Cultural relativists, for instance, may acknowledge that risk evaluations (and science by implication) cannot be value-free.116 The contextualist view is more moderate in its approach: probability and harm are not taken to be the only influences on people's attitudes towards risk and in particular towards risky technology.117 Instead, other factors matter just as much: whether a risk was taken or imposed (voluntariness), whether harms are cumulative or isolated (catastrophic nature), and whether the harms could be mitigated or reversed (reversibility).

111 Hacking, The Emergence of Probability, 4.
112 Longino, Science as Social Knowledge. I will not go into more depth about Longino's overall claim about values in science here but find it useful to mention as a well-established critique of the positivist view of science.
113 For example, Douglas and Wildavsky claim that risk is in fact a collective construct and that what is central is the role of beliefs about purity and danger, which they argue for in Risk and Culture.
114 Douglas and Wildavsky, Risk and Culture, 63-64, 71; Sumner, Folkways, 39.
115 Douglas, Risk and Blame, 187-188.
While there are good reasons for rejecting the extreme views in favour of more moderate ones (and both Shrader-Frechette and Thompson and Dean offer thorough analyses for doing so), the debate about whether and how to incorporate values into accounts of risk continues.118 The failure to adequately incorporate both probabilities and values into assessments or evaluations of risk is thought by many to be a serious limitation of any account. If science cannot be a value-free endeavour, this must extend to technical risk analyses since they employ the methods of science.119 At the same time, there is good reason to think that purely contextual accounts of risk mistakenly neglect the element of probability or chance and the fact that the likelihood of events can be measurable.

2.5 Conclusion

The historical development of risk has shaped current understanding not just for experts in the field, but for nonexperts as well. Notwithstanding the justification for such a claim, the idea that risk can be 'mastered' reflects an underlying and enduring motive in the sciences, the field of risk analysis and management, economics, industry, and government policy. Such mastery relies on a narrow conception of risk as something that is measurable and quantifiable, where what matters most is providing some sort of context in which to understand the likelihood of a possible harm.

116 Others, such as Sumner, also argue for similar views, and Shrader-Frechette describes at some length the basic arguments cultural relativists make. Shrader-Frechette, Risk and Rationality, 31.
117 Morgan, "Choosing and Managing Technology-Induced Risks," supra note 7.
118 See Shrader-Frechette, Risk and Rationality, 27-52; Thompson and Dean, "Competing Conceptions of Risk," 361-375.
119 Fischhoff, Watson and Hope, "Defining Risk."
However, information about the probability of something harmful occurring is also seen as being able to alleviate unnecessary worry (Graunt's motivation back in the seventeenth century), or to motivate some mitigating action. This is one of the aims of current technical analyses of risk. Technical perspectives take the view that risks are objective measures of harm; however, a historical investigation reveals, through the work of Pascal and Bernoulli, that subjective evaluations were identified early in studies of risk as having potentially significant effects on probabilistic assessments and on people's understanding and decision-making. Attempts to incorporate subjective judgments into accounts of risk continued in more modern times, with theorists like Arrow and Keynes, who recognized that people did not necessarily follow the laws of probability when making decisions because of the influence that uncertainty and risk had on their perceptions and subjective judgments. More recent accounts that are critical of the technical characterization of risk also point out that some aspects of risk are not easily quantifiable, but are just as important. They demonstrate that the undesirable effects of a risk are not limited to physical harm to humans or the environment, but often are difficult to determine. What is desirable or undesirable can be influenced by economic considerations, psychological biases or predispositions, and social or cultural norms. The view that risks are always about one sort of harm, as assumed on the technical account, therefore falls short as a comprehensive characterization. Influenced by the various conceptions of risk, and given the disagreements over how best to characterize it, Renn proposes adopting a definition that reflects the difficulty of maintaining the view that risk is an objective matter of fact. Subjective evaluations of harm are as important as quantitative measures of their likelihood of occurrence.
Neither can be omitted from a full account of risk without serious challenges. Risks are not confined to physical or economic harm, but can also include possible threats to many of the things that are of value to people. Therefore, he concludes that a risk is both descriptive and prescriptive since we have reason to avoid what might be harmful, and harm can simply be what matters to us.

3 A Challenge to the Ontological View of Risk

3.1 Introduction

As discussed in the previous chapter, the fact that there are a number of different definitions of risk has led to much ambiguity in both technical and nontechnical discussions. There also exists a range of competing conceptions of risk, ranging from the positivist view of risk as a scientific concept that can be characterized quantitatively, to the cultural relativist view which holds that risk is merely a subjective reaction to one's experiences in the world.120 The positivist approach, also known as the rationalist conception of risk, is normally associated with Starr and Whipple, who argue that scientific analysis is the only way in which a risk can be understood. On this view, quantified estimates of probability are the sort of information that can alleviate people's fears and inform them about risks. Positivists recognize that there is an 'intuitive' component to risk, but Starr and Whipple do not think that this produces the type of knowledge that is necessary to truly understand risk. They argue that "[f]or specific types of risk, in which intuitive evaluations of risk and benefit contradict analytical evaluations, the necessary consensus may not develop, but rather a conflict requiring political resolution is likely to result."121 In contrast to the positivist view, there is another extreme where risk is not about objective quantitative facts or probabilities at all, but is instead seen as a matter of purely subjective evaluation.
Jasanoff, one of the proponents of this subjectivism, argues that "[r]isk is no longer seen merely as the probability of harm arising from more or less determinable physical, biological or social causes. Instead, it seems more appropriate to view risk as the embodiment of deeply held cultural values and beliefs…concerning such issues as agency, causation, and uncertainty."122 Jasanoff refers to the argument of Douglas and Wildavsky, who claim that risk is a collective construct in which beliefs about purity and danger play a central role.123 These arguments typically suggest that methods of risk evaluation are so biased that they can at times be based merely on personal ideologies: the result is that more knowledge about risk does not make people more rational about risks.124 The role of knowledge or reason is usually discounted on such views, which consider any evaluation of risk to be as good as any other.125 These different conceptions of risk naturally give rise to some fundamental philosophical questions. If a claim about riskiness can be analyzed into a statement of probabilities, is riskiness an ontologically real quality? Does risk really exist? If assessments of riskiness are culturally bounded—in that different cultures would assess the riskiness of the same thing in different ways—does the riskiness of something depend strictly on the epistemological framework from which one assesses it? Is "riskiness" merely a matter of how one perceives something? The answers to such questions might provide some help in understanding risk.126 Although there are many aspects of the topic that could be of interest, risk receives little attention in philosophy apart from decision theory and the utility analysis of choice. One noteworthy exception is the debate between Rescher and Thompson. Rescher maintains that risk is an ontological category since it has to do with objective outcomes in the real world (the chance of some mishap as the result of an action). Thompson argues that there are three major problems with Rescher's view and that, since the outcome of an event has to be judged before being categorized as a risk, risk seems to be an epistemological category: it has to do with what people think rather than simply with real events in the world. For example, Thompson suggests that Rescher seems to base his view on the distinction that Knight makes, where chance is the genuinely random potential for change in the world while probability may simply be a conditional description or prediction.

122 Jasanoff, "The Songlines of Risk," 135.
123 Douglas and Wildavsky, Risk and Culture, 40-48.
124 Ibid., 63-64, 71; Sumner, Folkways, 39.
125 In between these two extremes are a host of more moderate views. Thompson and Dean, "Competing Conceptions of Risk." There is some argument over whether the prevailing views are dichotomous and represent two opposing extremes or whether there is actually a continuum of perspectives that encompass the extreme views while accommodating those that are less radical. Thompson and Dean argue that this positivist/relativist divide misses the true matter of dispute. In place of naïve positivism, they suggest instead a purely probabilist conception where the primary component of risk is probability, with all other dimensions being inessential. At the other end of the continuum is the contextualist conception, where probability is on par with intention, voluntariness and other characteristics of risks; none of these is essential on its own, but some combination of them must occur.
126 Sorting out the debate between the broader conceptions of risk I have briefly outlined is not a part of my current project, but it explains the background for the debate between Rescher and Thompson.
In this chapter I will summarize these two views of risk and support Thompson's contention that risk is better understood as epistemological. While I agree in general with Thompson's objections to Rescher, I will also argue that they might not be as convincing as they first seem, and offer reasons of my own for this view. In his introduction to the theory of risk, Rescher challenges some of the assumptions made in technical risk analyses, which he claims are often overly simplistic and lead to violations of commonly accepted principles of rational decision-making, inconsistent standards for determining risk and assessing both its magnitude and acceptability, and differences between real and perceived risks.127 By emphasizing the philosophical dimension of risk and paying attention to first principles, Rescher finds it easier to account for some of the problems encountered by analysts who have made assumptions based on a narrow and incomplete understanding of risk.

127 Rescher, Risk, 3.

While Rescher's philosophical analysis involves developing a detailed account of many different aspects of risk, it is more complex than the account I want to provide. As I have explained at the beginning of this thesis, my aim is not to argue from first principles that risk is normative or to provide an analysis of the concept of risk. However, Rescher's discussion concerning whether risk is ontological or epistemological is relevant for my account. Rescher claims that subjective evaluations, including people's recognition of what actually counts as a risk, are matters for the assessment and management of risks, but not for the concept of risk itself. The normative account that I want to develop proposes that risks are both descriptive matters of fact and prescriptive messages for action. If Rescher's view is correct, then the claim that risks are normative is a claim about how risks are understood or perceived rather than a claim about what they are.
This distinction, however, is the same one made by proponents of the technical perspective which, as discussed in the last chapter, has faced many challenges. It is therefore worth investigating the ontological/epistemic characterization of risk to determine if there is a way to support the normative account by challenging this distinction.

3.2 Rescher's Account

For Rescher, risk is simply "the chancing of negativity—of some loss or harm."128 These two elements, chance and negativity129, are not based on understanding or recognition, but are ontological in that they are matters of how things stand in the real world; it follows from this that risk itself is objective. Therefore, for Rescher risk is ontological because "it has to do with action affecting the chance of mishap itself, not with the recognition or acknowledgement of this chance."130 On this view, actions like driving or flying are risks because of the chance of some associated negative outcome such as a collision or a crash. Rescher does not provide a more substantive discussion of what he means by ontological other than to indicate that (a) risks are about objective matters of fact in the world, and (b) risks are separate from values, which are the product of subjective evaluations of such facts. Rescher makes two key distinctions that form the basis for his assertion. First, he describes the difference between taking a risk (as a result of one's actions) and facing a risk (which involves no action of one's own). Secondly, he distinguishes between risk as a real phenomenon and perceived or subjective risk.

128 Ibid., 5.
129 "Negativity" is a term explicitly used by Rescher in reference to something bad, some mishap, or harm, etc. Although I think the word is awkward, especially since he uses it as both an adjective and a noun, I will adopt its use as a noun for this discussion.
While both distinctions are necessary for his overall claim about the ontological status of risk, the second one is particularly problematic. The first distinction he makes is between taking a risk, which involves the choice of acting so as to increase the likelihood of some misfortune, and being at, or facing, risk, where the possibility of misfortune is due to one's circumstance:

To take a risk is to resolve one's act-choices in a way that creates or enhances the chance of an unfortunate eventuation, rendering the occurrence of something unwanted more likely. And to be at risk is to be so circumstanced that something unpleasant might happen, some misfortune occur [sic]. Risk is correlative with the prospect that things may go wrong—the chance of a mishap; it exists whenever there are probabilistically indeterminate outcomes some of which are of negative character. Risks face us with the possibility that something untoward might occur, while leaving us unable to foretell any specific outcome with categorical assurance.131

130 Rescher, Risk, 7.
131 Ibid., 5.

It is thus possible to face a risk that is not taken. Our circumstances can affect whether we face a risk, such as the recent economic downturn, the outbreak of H1N1 flu, or natural disasters like earthquakes and hurricanes. In such cases it is possible to be at risk, or to face risk, without having taken any action. There is nothing a person has done to put herself at risk, and there is nothing he or she can do to avoid the risk. To take a risk, you first need to know that a risk exists, but such knowledge is not necessary when we face a risk. An earthquake is a risk to everyone, and even though it is known to be more likely to occur in Vancouver, Vancouverites face this risk just as anyone else does since earthquakes are unpredictable and have far-reaching effects.
However, people might be taking a risk if they do not insure their property, because any financial loss they experience as the result of the earthquake is something they can avoid by taking certain precautions. Thus, on Rescher's view, people face the risk of the earthquake but they take the risk of losing everything they own if they choose not to take some kind of action.132 Rescher concludes that there are three components involved in any risk-taking:

1. Choice of action: deliberately doing certain things towards the production or avoidance of results.
2. Negativity of outcome: whatever harm, loss, unpleasantness, or misfortune that can result from the choice of action.
3. Chance of realization: the specific prospect (possibility or probability) of realizing the unfortunate result at issue.133

132 Some might argue that if someone chooses to live in an area like the 'Ring of Fire' along the Pacific Rim, where scientists anticipate and record higher than average levels of earthquake activity, they are taking a risk. However, a natural disaster of any kind is itself a risk everyone faces since it is beyond our control. Failing to take action to avert or reduce the type and kind of harm one suffers is a risk a person takes. Merely moving out of the area does not guarantee that they will not experience an earthquake or some other natural event somewhere else. I recently travelled to Peru and found myself in the midst of a devastating earthquake less than 24 hours after arriving, yet I've lived in Vancouver for 5 years without having ever experienced a single tremor. The risk I took in Peru was not having a contingency plan or a survival kit with me, but the earthquake is a risk I faced. Also, my action (taking a trip to Peru) resulted in a risk although I did not explicitly take this risk, so I ended up facing a risk I did not take. In this way, Rescher explains that our actions can result in our facing a risk we did not take.
133 Rescher, Risk, 6-7.
However, only elements 2 and 3 are important to risk per se because of the possibility that a person could face a risk without explicitly taking it. Deliberateness of action is an external feature of risk and has more to do, presumably, with the outcome of a risk or risky situation rather than with risk per se.134 As described in Chapter 2, it is well established that risks often involve subjective evaluations that cannot be accommodated in the rationalist or technical conception of risk as an objective fact about the world. Rescher does not discount subjective judgments completely, but sees them as comprising a separate though ineluctable dimension of risk which he refers to as perceived risk. According to Rescher, the analysis and management of risk must incorporate both risk and perceived risk, but the subjectivity involved in perceptions of risk is very much separate from risk itself. Perceived or subjective risks might involve 'eccentric ideas' about the chance or character of a threat and so will seem like risks to a person even though they are not 'real' and may be based on misconceptions. However, Rescher claims that since we cannot expect that people have any other choice than to react to a situation as they perceive it, we ought to include these types of risks in our analyses. In fact, it turns out that subjective risks are an unavoidable part of risk assessment and management. On this view we are to understand that real risk is ontological and a matter of objective fact but that perceived risk is epistemological and a matter of subjective judgment, and while important to risk analysis and decision theory, is separate from the foundation of risk. For Rescher, 'real' risks are therefore objective, ontological examples of how things stand in the world.135

134 Ibid., 7.
135 Ibid.

Rescher seems to do what others before him have done, as described in the previous chapter; many people have recognized that risk is and must be subjective and agent-centered, but they also wished to maintain a distinction between objective or 'real' risk (which can be quantified, measured and weighed against other facts and people's evaluations) and subjective risks. Here Rescher attempts to relegate the subjectivity of risk to a separate category of things—perceived risks—although the end result is unconvincing. Nonetheless his argument proceeds on the assumption that risk has an ontological foundation.

3.2.1 Evaluating and Comparing Risk

Another feature of Rescher's claim is that since any assessment or comparison of risks will necessarily involve measurement, we must be able to measure the two components of risk, negativity and its chance of realization. While measurement is not necessary for recognizing risks or for claiming that they are objective matters of fact, it is required in order to determine how serious a risk might be, to decide how to act, and, when there is more than one risk, to be able to compare them. This discussion highlights a potential challenge for his view because negativities are not easily measurable, nor does it seem like they are free from subjective evaluations. A negativity, on his view, can include boredom, illness, pain or discomfort, but it is not obvious how to rank these in terms of their severity, since long-term boredom for some people might be much worse than a short-term discomfort while for others the reverse is true. Rescher explains that although measuring negativities is obviously fraught with difficulties, there are three factors which can assist us in determining their severity: character, extent and timing.136 The character of a negativity is a qualitative matter where we decide what type of loss or harm is involved: monetary loss, boredom, physical injury, etc.
The extent of a negativity includes both the severity—how much or how great the loss is—and the distribution of the loss—the number of people affected. Finally, the timing of a negativity characterizes the time span over which the effects will last, which can range from a few seconds to several years or even generations. These three factors taken together produce a broad range of qualitatively different negativities that are often incommensurable. For instance, how do we weigh the death of forty people from a sudden but painless accident against the prolonged suffering of a thousand people afflicted with a chronic, painful disease? Rescher admits that

[t]o assess and compare risks we must be in a position to measure them so as to determine the extent of their relative overall seriousness. And this requires us to be in a position to assess and compare the size of various negativities. Unfortunately, however, there is no assurance that different negativities can be measured in a common comparability unit—that they all have a mutually commensurable quantitative size. The very idea of the relative magnitude of negativities is problematic.137

Rescher argues that we can partly resolve this problem if we focus on negativities with similar qualitative characters, since we are then able to make at least some straightforward comparisons. We can thus compare one instance of monetary loss to other instances of monetary loss, and deaths against other instances of death, although he concedes that even this can lead to disagreements and controversies. It is much easier to decide between two risks when one involves the chance of three people becoming ill and the other involves the chance of a hundred people becoming ill. However, it is rarely the case that risks are presented to us, or occur, in this way, and Rescher seems correct in his claim that negativities involve "a plurality of distinct and largely incomparable considerations."138 Rescher goes even further, however, claiming that "it is not that negativities are totally incomparable, but only that they are not automatically comparable and commensurable in their intrinsic nature."139 The solution thus lies in making extrinsic comparisons by imposing a particular evaluative framework on those things we wish to compare and measure. For Rescher, the measurement of negativities is not a measurement of facts about the world but rather an evaluative, subjective judgment in reaction to objective circumstances (although it is not an aspect of those circumstances). Rescher is able to maintain the view that risks are facts about the world, but that the frequent challenges we face in attempting to compare and measure them against each other are a result of the system of measurement itself. The measurement, therefore, is not a fact about the world, while a risk is. He explains that when we ascribe values to negativities,

it is (in large measure) a matter of human decision to assess negativities vis-à-vis one another. The sizes or magnitudes of negativities are not preexisting quantities—they are the derivative result of an evaluative judgment or decision. Its size or magnitude is not something a negativity has, it is something it gets. It is a matter of evaluation—of something that lies largely (though doubtless within limits) in the "eyes of the beholder."140

More importantly, for Rescher these evaluative judgments are not simply necessary to assessing risks, but are ineliminable components of it, which offers a solution to the apparent incommensurability of negativities. An individual will probably impose her or his personal value framework on a situation, while a group might rely on a political framework, but in both cases different individuals and different groups will assess the same risks in different ways.

136 Ibid., 18.
137 Ibid., 20.
He notes that the differences in how people appraise negativities are based on the differences in what they value. For example, some people might prize reputation and honour above physical pain or discomfort, while others would find that even the chance of a slight discomfort outweighs a loss of money, as when a person pays a high price for a luxury hotel when a moderate one would do. Although the incorporation of ineliminable value judgments might suggest a threat to the notion that risk is at its foundations ontological, Rescher makes one final distinction that he thinks elucidates his overarching claim:

The issues of risk description—of characterizing the nature, intensity, diffusion, timing, and probability of risks—are all factual, scientific questions. They represent matters of observation, theorizing, and inductive extrapolation from experience. But questions of risk assessment address themselves to the appraisal and measurement of negativities—of determining how "serious" or "significant" they are. Such issues pose fundamentally normative, evaluative questions.141

In this way he thinks that one can maintain that risk is ontological while preserving the crucial role of value judgments in assessing and weighing risks in the world. Value judgments are extrinsic to the foundation of risk and, despite the fact that they are unavoidable, their function is relegated to that of risk assessment rather than risk description. Risk description remains in the 'objective matters of fact about the world' category, and only when we have to measure or weigh one risk against another does the ineliminability of evaluations become apparent.

138 Ibid., 21.
139 Ibid., 26.
140 Ibid., 27.
While Rescher admits that the measurement of negativities can be a difficult project, he considers the measurement of chance, the second component of risk, to be comparatively simple since “probability is the measure of chance.”142 He adopts the rationalist view which suggests that probabilities arise from three sources: statistical data based on observed frequencies, theoretical considerations based on principles derived from some general theory, and personal estimates based on a person’s confidence in a given situation.143 In order to defend the ontological categorization of risk, both of its components must be themselves ontological, so there must be some way to explain the role of personal probabilities (personal estimates) on his view. Following Savage,144 Rescher holds that judgments of confidence or conviction are to be measured in terms of betting behaviour. For example, a person will be indifferent to a bet if, in her judgment, she thinks there is an equal likelihood of winning or losing. Admittedly such estimates of objective probabilities are potentially fallible, but the more important qualification emphasized by Rescher is that these are personal, rather than subjective, probabilities. Subjective probabilities may be arbitrary or a matter of taste while personal probabilities can satisfy conditions of objectivity.

141 Ibid., 30.
142 Ibid., 33. Note that Rescher provides further discussion about the measurement of probabilities but I have left out those sections that do not specifically work towards establishing the nature of risk as he outlines at the beginning of his book.
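Rescher’s betting-behaviour measure of personal probability, adopted from Savage, can be given a brief formal sketch. The notation below is mine, introduced only for illustration; neither author presents the point this way:

```latex
% Personal probability via betting indifference (illustrative sketch; notation is mine).
% Suppose a bet on event E pays a stake S if E occurs and costs x to enter.
% An agent whose personal probability for E is p values the bet at its expectation:
%   EV = pS - x.
% Indifference to the bet (EV = 0) then identifies her personal probability:
\[
  p = \frac{x}{S}.
\]
% In particular, indifference to an even-money bet (x = S/2) yields p = 1/2:
% the agent judges winning and losing equally likely, as in Rescher's example.
```

On this picture the agent’s degree of confidence is read off from which bets she is willing to accept, which is what allows Rescher to treat personal probabilities as measurable rather than merely introspective.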
Rescher explains that we might think one’s personal probabilities are rational if, “first, his preferences…are mutually consistent…second, his personal probabilities are reasonably stable over time, provided he receives no new relevant evidence; and third, his personal probabilities are affected appropriately when new items of relevant evidence are introduced.”145 In this way those personal probabilities “conforming to such requirements are not ‘subjective’ in any way which implies their being haphazard or arbitrary, but are rigidly controlled by demands of logical consistency and requirements of reasonable conformity with the overt evidence.”146

I have outlined Rescher’s account of the nature of risk, and his discussion concerning the evaluation of its two primary components, chance and negativity, which he takes to be separate issues. He argues that risk is ontological since it has to do with how things stand in the real world and not with whether there is any recognition or understanding of how they stand. From here he develops a substantive discussion of each component in turn within the broader context of risk assessment, although this is closely tied to, and often informed by, his description of risk. Since negativities are sometimes effectively incommensurable, whenever we attempt to compare or measure a negativity, our actions must ultimately be determined by the ascription of some set of values. Such a comparison is necessary, he thinks, for understanding and characterizing risks, which in turn provides us with guidance about how to act, behave and think given a particular set of circumstances. For Rescher, probabilities, defined as the measure of chance, provide us with a “splendid mechanism” for comparison. The quantitative, objective values we need to assess and calculate risks arise from statistical, theoretical and personal estimates.

143 Ibid., 33-34.
144 Savage, The Foundations of Statistics.
145 Rescher, Risk, 35.
146 Ibid., 35.
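Rescher’s three objectifying conditions have familiar formal counterparts in subjective probability theory. The following gloss is my reconstruction, not Rescher’s own formulation:

```latex
% A formal gloss on Rescher's three conditions (my reconstruction, for illustration).
% 1. Consistency: personal probabilities obey the probability axioms, e.g.
\[
  P(A) \ge 0, \qquad P(\Omega) = 1, \qquad
  P(A \cup B) = P(A) + P(B) \ \text{for disjoint } A, B.
\]
% 2. Stability: absent new evidence, P(A) does not drift over time.
% 3. Appropriate revision: upon learning evidence e, the agent updates by
%    conditionalization:
\[
  P_{\text{new}}(A) = P(A \mid e) = \frac{P(A \cap e)}{P(e)}.
\]
```

Read this way, the conditions constrain personal probabilities without making them measures of how things stand in the world, which is the point pressed against Rescher below.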
While personal probabilities might seem problematically subjective, we need to keep in mind that they are in fact “governed by various objectifying requirements” and are neither arbitrary nor merely a matter of preference.147 Our actions, behaviour and thoughts within a particular situation involving some kind of risk will therefore be determined by an axiological value-perspective, either our own, a group’s or someone else’s. For Rescher, risk is factually and scientifically described but is assessed through a normative, evaluative process. Subjective judgments, while ineluctable, are salient only in regard to the analysis and appraisal of risks, and only in their minimal role in personal probability estimates. Just as there are two components to risk, there are also two ways to think about risk: the ontological—as an objective category of things, and the epistemological—as a comparison of these objective things.

3.3 Thompson’s Account

While there are a number of analytical frameworks within the social sciences rejecting views like Rescher’s, in which risks are treated as objective matters of fact, these criticisms often lack a detailed justification for assuming that subjective evaluations ought to feature prominently in understanding risk.148 However, Thompson provides a detailed critique of Rescher and argues that risks are intrinsically epistemic in nature. Thompson prefaces his response to Rescher by situating his account within the broader framework of decision theory and the subsequent development of risk analysis, which includes two main traditions.

147 Ibid.
The rationalist tradition of risk analysis is characterized by the work of Friedman and Savage, and operates under the assumption that a person is risk-averse if he or she avoids decisions with a probabilistic outcome, risk-seeking if he or she seeks out probabilistic outcomes, and risk-neutral if the individual demonstrates no preference.149 On this view, a risk refers to a situation with a probabilistic outcome but does not exclusively involve a harm, loss or a negativity as per Rescher’s description. More moderate and commonly held views, however, include some negative feature or disutility as a component of risk along with probability, which more accurately reflects the way people commonly understand risk.150 Thompson suggests that one of the contributions Rescher makes to the development of a philosophical foundation for risk is his distinction between objective (matters of fact) and perceived (matters of some subjective evaluation) risks. Thompson claims that these two categories are the source of much confusion in risk analysis, particularly in its application to public policy, and that Rescher’s clarification on this point is extremely useful. However, Thompson has three major objections to Rescher’s philosophical description of risk, as well as a more generalized argument against any explanation that isolates subjective judgments from discussions or explanations of risk per se. Thompson’s first objection involves Rescher’s ontological characterization of both chance and negativity.

148 See Funtowicz and Ravetz, “Three Types of Risk Assessment and the Emergence of Post-Normal Science”; Krimsky and Golding, Social Theories of Risk; Pidgeon, Hood and Jones, “Risk and Perception”; Slovic, “Perception of Risk”; and Wynne, “Risk and Social Learning.”
149 Thompson, Rights, Restitution and Risk, 275.
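The three attitudes of the rationalist tradition are standardly cashed out in expected-utility terms. The formulation below is the textbook gloss, added here for illustration rather than drawn from Thompson’s text:

```latex
% Risk attitudes in expected-utility terms (standard gloss, added for illustration).
% Let L be a gamble with expected monetary value E[L], and u the agent's utility function.
\[
  \text{risk-averse:}\; u(\mathbb{E}[L]) > \mathbb{E}[u(L)], \qquad
  \text{risk-neutral:}\; u(\mathbb{E}[L]) = \mathbb{E}[u(L)], \qquad
  \text{risk-seeking:}\; u(\mathbb{E}[L]) < \mathbb{E}[u(L)].
\]
% Equivalently, u is concave, linear, or convex over the relevant range of outcomes.
```

Note that nothing in these definitions requires the outcomes to be harms; this is why, on the rationalist view, a risk need not involve a negativity in Rescher’s sense.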
Thompson points out the longstanding nature of such problems for philosophers in general and maintains that Rescher does not adequately demonstrate how either element is ontological. Thompson first takes issue with the claim that chance, rather than probability, is a component of risk. He suggests that this is likely based on Knight’s account that there is a distinction between

chance—the genuinely random potential for change in the universe—and probability, which may simply be a conditional description or prediction that substitutes for complete description when knowledge of full causal conditions is incomplete. Phenomena and behavior governed by probability are not necessarily governed by chance, in this sense. The objects in question may be perfectly determined by natural causes which either aren’t or can’t be known with specificity; but that doesn’t mean they aren’t there.151

Thompson argues that while Rescher is correct that probability can be a measure of chance-like events, or events that are “uncaused”, he relies on an unviable definition of probability since there are many cases where the outcomes of events might not be due to chance. The probability of rolling a six with a pair of dice, for instance, is not a measure of the random behaviour of the dice but is in fact the measure of a behaviour that can be anticipated with probability through the application of Newtonian mechanics to the motion of the dice. Thompson therefore believes that probability need not be tied merely to the measure of chance as Rescher states, but can also reflect epistemic ignorance of unfolding future or unobserved past events as well.

150 Rasmussen, “The Application of Probabilistic Risk Assessment Techniques to Energy Technologies.”
151 Thompson, “The Philosophical Foundations of Risk,” 278.
Further challenges arise from Rescher’s discussion of relative frequencies, classical theory and personal probabilities as the source of those values involved in the probability calculus. Thompson asserts that for “Rescher’s characterization to succeed, risk must be found only in cases where chance, not lack of knowledge, gives rise to probabilistic assessments of a state of affairs. This, however, is patently not the case.”152 More specifically, there is a problem with personal probabilities, which are based on the confidence with which a person judges the truth of a particular thesis. Dismissing relative frequencies for their unavailability, Thompson claims that these subjective (personal) probabilities are most crucial to risk analysis and, given their dependence on measures of confidence rather than chance, “seem poor candidates for ontological categorization.”153 The second objection that Thompson raises challenges the idea that the negativity of a risk can be ontologically conceived, especially given Rescher’s own characterization of negativities as derived from evaluative judgments. Rescher anchors his categorization in the idea that one can be at risk without knowing it, which is meant to support the claim that risks are matters of how things stand in the world. However, for Thompson this seems problematic given the ineliminable role that subjective judgments have in understanding negativities and in risk analysis. Risk cannot be just a matter of how things stand in the world since it is often the case that events are negative because of the way they are understood or evaluated by a person. As discussed earlier, Rescher explains that it is possible both to face a risk and to take a risk. People can face a number of risks every day, which they may or may not be aware of, but their awareness does not change the fact that the risk exists.

152 Ibid., 278-279.
153 Ibid., 279.
For example, if a passenger on a plane returning from a trip abroad has contracted a contagious illness but has not yet become symptomatic, other passengers on board face the risk of contracting the illness although they might not be aware of it. The risk of illness exists whether or not the passengers know about it. Taking a risk, however, requires some knowledge that the chance of harm exists in the first place. If the sick passenger went abroad knowing that there was a chance of contracting the contagious illness, but opted not to get the vaccine available to prevent it, we would say that he took a risk. Although Thompson suggests that ontological accounts of both chance and negativity might still be possible, he dismisses Rescher’s claims (which rely on levels of probabilistic confidence and evaluative judgments) as unconvincing. According to Thompson, any philosophical theory which attempts to provide an ontological foundation of risk must measure up against our common understanding and use of the word. This third objection highlights the fact that risks, unlike other objects in the world like “protons or wildebeest”, are not easy to discover or analyze because they rely on the judgments we make. Thompson explains that “[r]isk is, in that sense, epistemic: not that we can change probabilities through squinting our eyes and wishing hard, but that the very basis for identifying someone or some group as ‘at risk’ presupposed standards for selecting the actions and events we want to quantify.”154 For Thompson, it is only through the combined forces of the social and natural sciences that risk is understood. The natural sciences develop theories to explain and describe objects in the natural world which have their own structure and common reference point.
In the social sciences, the choice of objects to study reflects not only the interests of the scientists but also other background considerations, rendering elusive a common point of reference.155 Thompson argues that the exclusion of those features and objects of risk that are better described by a social scientific methodology jeopardizes Rescher’s ontological claims. Where Rescher’s ideas are successful according to Thompson, however, is in making the necessary distinction between real and perceived risks. In part this is because a successful analysis of risks attempts to identify and predict events, such as the number of deaths or injuries caused by a chemical spill at a manufacturing plant, probabilistically, and is not concerned with what people think the risks of having the plant nearby are.156 But Thompson points out that this is problematic since it imposes methodological parameters on the concept of risk under which potentially harmful events are not influenced by a person’s evaluative judgments (fears, concerns, etc.) or perceptions. Further, it does not accommodate those instances where the risk a person faces, or takes, is in fact determined by their perceptions and judgments, about which they cannot be mistaken. The source of risk in this case is not something measurable or easily quantified, nor does it even have much to do with what is real, but is whatever sources one thinks they ought to have confidence in.157 Thompson argues that there are different metaphysical interpretations of risk, but that a tendency to rely on the methods of natural science and technical definitions in its analysis persists. Rescher’s characterization of ontological risk assumes that risks are ‘real’ in the same way that technical analyses do. As useful as this strategy is in measuring and quantifying instances of possible harm, for Thompson it does not provide the kind of justification that is required.

154 Ibid., 282.
155 Ibid.
156 Ibid., 283.
Thompson argues that

scientific thinking enters our background beliefs about what is to be feared and how it can be controlled. It is this judgmental contribution of science that shapes the contemporary categorization of risk and distinguishes it from animistic fears and taboos. In any case, seeing a situation as risky involves a categorical framing of a situation which is value laden, but that, once framed, can be analyzed and assessed “objectively” in Weber’s sense.158

These intrinsic epistemological features of risk are a problem not just for Rescher’s account but also for any generalized view which fails to recognize them.

3.4 Analysis of Rescher and Thompson

Thompson’s response provides a number of persuasive arguments demonstrating how Rescher’s ontological account of risk is problematic. While in general I agree with his assertion that there are good reasons to think that risk can be either or both epistemological and ontological, I will offer some objections to what are, on my view, his potentially weaker claims. First, Thompson takes issue with Rescher’s categorization of chance, rather than probability, as one of the components of risk. There are two problems with Thompson’s critique in this case. First, it is not clear that Rescher is using chance to indicate anything more than what we might generally mean by the term. I am sympathetic to Rescher on this point since it is difficult to talk about the chance or likelihood of risk while avoiding the various debates concerning the uses of each term. Rescher does not make clear any intent on his part to use the term ‘chance’ as distinct from ‘probability’, so it is difficult to argue that he is making a mistake. Second, Thompson’s dismissal of personal probabilities as mere measures of confidence neglects Rescher’s response in anticipation of such a criticism.

157 Ibid., 283-284.
158 Ibid., 285.
While in the end I agree that an ontologically conceived notion of chance is not supported by Rescher’s analysis, Thompson’s critique is also problematic. Thompson explains that although Rescher may “be correct that probability can be used as an aggregate measure of events which, individually, are random or ‘un-caused’, this will not do as a definition of probability.”159 He points to those cases where the outcome of an event can be anticipated with probability and thus is not a measure of physically random behaviour. It is not clear, however, that Rescher was making this distinction as specifically as Thompson suggests, given the general terms used to describe probability and chance, which he takes simply to be “a matter of likelihood or probability.”160 Rescher does not make the claim that by chance he means those events that are exclusively random or un-caused. He also explains risk in terms of the “likelihood”, “prospect of”, “possibility of” and the “chancing of” a negative outcome, all of which suggests that he is not committed to Knight’s view of chance. Without further elaboration it is difficult to know if he meant to use the term in a general or specific sense; however, in describing theory-based probabilities, he explains that:

[t]heory-based probabilities are not based on observed frequencies but calculated on the basis of the principles afforded by some general theory governing the processes that produce the phenomena at issue…Often, these theoretical considerations will be matters of causal symmetry governing the operation of certain physical processes. Coin-tosses, die tosses, roulette wheels, and quantum-mechanical phenomena all yield examples of this.

159 Ibid., 278.
160 Rescher, Risk, 18.
In such cases, we assume that the causal factors at work operate “randomly” so as to favor neither heads nor tails.161

From this passage it seems that Rescher does not use chance in the way Knight does, and he accounts for those very cases, such as the rolling of dice, that Thompson worries about. In any case, Rescher does not provide a substantive discussion of why he chose to use chance rather than probability, but if Thompson is right then the problem seems to be one of semantics. Thompson also suggests that it is problematic for Rescher to claim that subjective probabilities are measures of confidence rather than of chance, given that these probabilities are so crucial for an analysis of risk. But here I believe Thompson seems to overlook an important qualification. Measures of confidence are clearly not measures of how things stand in the world. However, Rescher makes an effort to argue that these personal probabilities are “governed by various objectifying requirements” which are distinct from merely arbitrary subjective probabilities.162 Therefore, even though they might be measures of confidence, he thinks that they can be grounded in such a way as to be distinct from mere matters of taste. While I agree with Thompson’s conclusion, his objection does not quite address the reason why this is problematic for Rescher. As explained previously, Rescher identifies three objectifying conditions for personal probabilities: consistency, stability and the ability to adapt to new evidence, which he sees as “rigidly controlled by demands of logical consistency and requirements of reasonable conformity with the overt evidence.”163 However, he concedes that such estimates are very weak, should be used merely to supplement statistical probabilities, and should not be relied upon by themselves.

161 Ibid., 34.
162 Ibid., 35.
The obvious problem here is that even with Rescher’s qualification and concession, it is difficult to see how these conditions might be ontological. Even rational or logically consistent beliefs which are stable over time and which are responsive to new evidence can simply be subjective. And it is difficult to imagine another category of things that are given ontological status because they fall within reasonable limits of how things stand in the world and not because they are (or are about) how things stand in the world. Rescher’s attempt to make personal probabilities objective is unconvincing and since, as Thompson asserts, they are often the only source of probabilities available, the attempt to categorize chance as ontological is problematic. Thompson’s second objection is to Rescher’s explanation concerning the fact that negativities are matters of how things stand in the world. The major difficulty is that on the one hand, Rescher is quite clear that values are ascribed to negativities and that evaluative judgment is an ineliminable component in both the assessment of negativities and the assessment of risks. On the other hand, he maintains that while negativities are objective, they are not “automatically comparable and commensurable in their intrinsic nature.”164 It is difficult to see, however, that the subjective nature of negativities is simply a matter that can be limited to comparisons between them, and the distinction he makes between their intrinsic and extrinsic natures is problematic.

163 Ibid.
164 Ibid., 26.

Again, although I agree with Thompson, there seems to be a further difficulty when Rescher tries to establish this difference by making a distinction between the outcome of an event and its result:

[O]ne must take care to maintain the distinction between the outcome of a chance process as such (the events in which it issues) and the result it produces for those involved—i.e.
the value-implications of these outcomes for the beneficiaries or maleficiaries at issue. What the value of an outcome is, is sometimes relatively clear (death and dismemberment) and sometimes not (meeting an old acquaintance once more). In general, an inextricable mixture of objective and subjective criteria are at issue in determining the value of particular outcomes for a certain person or group. But insofar as our interest is in rational decisionmaking, this evaluation cannot be something purely subjective and idiosyncratic. It has to be construed as duly adjusted for the “reasonable man.”165

For Rescher, the outcome of the event is whatever actually occurs, while the result of an event is the perception that someone might have of that outcome. For example, the outcome of an earthquake might be the ground shaking, buildings collapsing and bridges breaking. The result of the earthquake is the value of the outcome to an agent, such as the loss of one’s home, possessions, way of life, sense of security, etc. However, if the outcome of an event is value-neutral, as Rescher asserts, then no risk is either taken or being faced. The choice is simply one with an uncertain outcome. What makes an action or decision risky is that it involves some negative, unwanted or disadvantageous result. The outcome of events in the world may very well be an objective state of affairs, but this has little bearing on whether those events will be understood as risks. The reason they are thought of as risks is due to the values we ascribe to them. In this way Rescher is incorrect to say that risks are ontological since what defines a risk is the way we think about, or evaluate, the outcomes of events.

165 Ibid., 12.

While Rescher also distinguishes between risks we take and risks we face, this does not do much to help his argument about the nature of risk. There are a few cases to consider. First, a person takes a risk when they understand the possible result of an action or event.
It is obviously not enough just to know or to be aware of the outcome (what actually occurs), since it is often impossible to know such information ahead of time and outcomes are value-neutral. Only when it is understood or known that some kind of negativity or mishap might occur can a person knowingly take a risk. This would suggest that knowingly taking a risk requires some evaluative work by the individual. An outcome is value-neutral, as Rescher explains. On his view, however, a risk is comprised of probability and some negative, unwanted or undesirable event. It seems that an event has to be unwanted or negative in some way, and cannot be neutral, if there is risk. On Rescher’s view, then, taking a risk requires that a person understands that some result may or may not occur, where the result is the ‘value-implication’ of the outcome. Second, a person or group of people who have little control or knowledge may face some risks. However, just because we can face risks we have no hand in creating, or that we might not even recognize, does not mean such risks are simply matters about how things stand in the world, although it is understandable that they might seem to be this way. Rescher points out that the passengers of the Titanic faced a risk they did not take. Breaking down the event in Rescher’s terms, we see that the outcome of taking a trip on the Titanic was hitting an iceberg, the ship sinking, and people dying. However, when one starts from a sense as to what is important, one then has reason to think of death as a “risk.” The result aspect is that people lose something of value, or suffer something of dis-value, and in this case the results were loss of life, trauma, and suffering.
The passengers faced a risk because we think that losing one’s life, being traumatized and suffering are all unwanted outcomes, and this is because they threaten those things that we value.166 Boarding a ship to cross the ocean was a risky venture, even when claims about the sea-worthiness of the ship were made. The passengers took a risk because there is a chance that something bad might happen on a ship crossing the ocean even though they might not know specifically what might happen. Rescher might be correct to point out that hitting an iceberg is the value-neutral outcome of the voyage, but this does not make hitting an iceberg a risk unless it is an unwanted event. We ascribe our values to the outcome of the event and then understand the risk these ill-fated passengers faced. But this is not an objective matter of how things stand in the world. It is a matter of how we think about the outcome and how we ascribe values to it. Looking back on the event, it is easy to see that the passengers faced a grave risk getting on board the ship since so few people survived. But, if Rescher’s distinction is right, then we think of getting on the ship as a risk because of the values we ascribe to the outcome, not because of the outcome itself. If we were only concerned with the value-neutral outcome, what risk did the passengers face? Speaking of a risk in terms of outcomes does not make much sense since we mean to say something more than to offer a mere description of an event when we use the term “risk.” The passengers faced a risk of hitting an iceberg by getting on board, which is different from saying that they faced the outcome of hitting an iceberg.

166 In discussing the Titanic as a situation of risks “faced” rather than “taken”, Rescher is imagining the situation of the passengers already in transit: that is, they have no further decision to make at this point that would affect whether they are liable to be on a sinking ship.
So when they were in the process of deciding to take a trip, icebergs were a risk they could take; after getting on board and sailing, icebergs were simply a risk they faced. However, here I am exploring the idea in a different way.

To further clarify, let us say that there are two passengers on a ship going across the Atlantic Ocean. One man, A, is very wealthy and enjoys a lifestyle few will ever experience. The other man, B, is a poor man who struggles to make ends meet. Halfway across the ocean, the ship is hijacked by pirates and taken to a strange, unknown island which functions as a communist utopia. The outcome of this event is clear but the result is less so. For A, it is easy to see (after events have unfolded) that he faced a risk when he got on board the ship because he was about to lose the privileged lifestyle he had enjoyed. However, for B, it is not so easy since we can imagine that he might be relieved to find himself with a better standard of living and without the constant stress of meeting his needs to survive. He might also experience the kind of camaraderie and acceptance he lacked when he was a poor man. In this case, would we say that A and B faced the same risk given that the outcome for both is the same? I think we would probably answer in the negative because it is the evaluation of the outcome (Rescher’s result) that matters in terms of what kind of risk was faced. This illustrates the importance of evaluating outcomes. It is possible then to have one outcome that is a risk for one person and not for another person, which suggests that risks are not really objective matters in the world, but very much depend on agent evaluations. Rescher argues that risk assessment is different from risk description. He explains that risks refer to things or events in the world while the assessment of risk necessarily involves subjective evaluations.
Evaluations are ineliminable in the assessment of negativities specifically because what may count as very harmful to one person may be less so to another. For example, one person might avoid any chance of physical harm, while another is more concerned with the loss of money.167 If the outcome of an event is value-neutral then it is not possible for that outcome to be negative, or judged by someone to be a negativity. Negativities, on Rescher’s view, are certainly not neutral. While it is true that we need some way of measuring one negativity against another, which will clearly involve subjective evaluations, it is not entirely convincing to think that what counts as a negativity is not also evaluative. By his own definition, a risk is a negativity and the chance of its realization. It is possible, however, that what counts as a negativity could itself be a matter of subjective evaluation. The field of risk analysis relies on the methods of natural science, where objects and events for study exist independently and do not depend on our knowledge of them. However useful such an approach might be, it is limited and potentially flawed in mischaracterizing risk as something that it is not, or, at the very least, in failing to recognize the epistemic dimension of risk. Rescher claims that “at bottom risk is an ontological not an epistemological category: it has to do with action affecting the chance of mishap itself, not with the recognition or acknowledgement of this chance.”168 This seems to be where the problem with this view lies. It may be correct to say that one need not be cognizant of the chance of a mishap in order for a risk to exist; however, mishaps are not matters of how things stand in the world, they are the result of our interpretations about events. A mishap is a mishap because that is how we interpret it, and this does not require the individuals directly affected or involved in it to know anything about it.
We apply our evaluation retroactively to events and outcomes in order to characterize them as risks or not. Neutral outcomes become risks after we have evaluated them, but until then, they are simply outcomes. This can be explained in the past, present and future. If there is no negative value placed on events in the past, we do not say that people took a risk (knowingly or not). Similarly, one does not take a risk in the present if the outcomes are neutral. If the outcome is unknown or uncertain, we might then say there is risk involved because of the possibility of a negative outcome, which we will not become aware of, or judge as negative, until after the outcome is known. Future events follow a similar pattern, and if one is worried about the unknown because of possible harm, one will act as if there is a risk, but the risk requires knowledge and judgment. We first need to know what the outcome is, and then we need to evaluate it in terms of its negative features. Without this we cannot coherently understand or characterize something as a risk. There is reason to support the view that risk is epistemological since an outcome is sometimes negative through evaluation and not as a matter of fact. Matters of fact are neutral until some assessment occurs which further categorizes them as negative or positive. This is not to say that risks are merely relative, however. In fact, it is understandable that risks seem to be objective matters of how things stand in the world since what constitutes a harm or negativity is often very obvious, such as breaking a leg or losing all of one’s money. But this would be to mistake the result of an event or action for its outcome.169

167 Rescher, Risk, 27.
168 Ibid., 7.
169 I will provide a more involved argument for this in the following Chapter 4, where I further refine my account of normative risk.
3.5 Conclusion

In this chapter I presented a challenge to the view that risk is ontological by addressing some of Rescher’s core arguments. He attempts to argue, on the one hand, that ‘real’ risks are wholly objective matters of fact and therefore ontological. On the other hand, he claims that perceived risks, which are clearly subjective, are distinct from real risks but also ultimately unavoidable. If what a person perceives is an unavoidable component of risk, then it is difficult to be convinced by Rescher’s assertion that risks are nonetheless ontological without a substantive discussion concerning the ‘special’ nature of perceived risks. Thompson claims that risk is an epistemological category, which seems to be more consistent with the common-sense meaning of the term “risk”, as well as more justifiable from a philosophical perspective. A risk is not merely a matter of how things stand in the world, nor is it merely descriptive in everyday language. For Thompson, risk presupposes epistemological considerations because it requires some kind of value-laden framing of a situation, although he argues that once this framing has occurred we can ‘objectively’ assess and analyze risks. In assessing these two views I offered arguments in support of Thompson and the view that risk is epistemological, but also addressed potentially problematic claims. I argued that even though Thompson challenges the claim that the negativity of a risk can be ontologically conceived, he does not provide sufficient reason for rejecting this claim. In an attempt to do just that, I argued that Rescher’s distinction between risks we take and those we face, as well as his differentiation between the outcome and result of an event, does not clearly support his view.
A risk necessarily involves some kind of unwanted event or negativity, but it is difficult to see how Rescher could justify the claim that negativities are matters of fact, yet this is a central component of his argument. As a result, his claim about the ontological nature of risk is unconvincing. On my view, risk, understood as the chance of some harm, can also be understood as epistemological. Subjective assessments are required to identify what will count as harmful, and harm is an ineliminable component of risk. Rescher’s ontological categorization seems problematic given the difficulty in describing a risk as an objective fact about the world which does not involve the value-laden procedure of recognizing that fact to be ‘negative’ or unwanted. Events and the outcomes of events by themselves are neutral and cannot be harmful or helpful without the ascription of such qualitative elements by someone. Identifying a risk requires prediction, and without a reasonable amount of information this is not always possible. When it is possible, however, what makes an event, outcome or choice a risky one is the recognition that it may involve something bad, undesirable or harmful. Without this recognition, a risk is merely an uncertain event that may produce good, bad or neutral consequences.

4 Providing Reasons for Action

4.1 Introduction

The descriptive sense of risk refers to the chance that some harm may occur, while the prescriptive sense tells us that we ought to avoid or minimize the chance of harm. It can be understood that risk implicitly provides reasons for action simply because people want to avoid those things that might bring about undesirable effects or circumstances. There exist many bad or undesirable things, such as illness, breaking a finger or losing money on an investment, yet we do not claim that everything bad or undesirable is normative.
Further explanation is therefore required to support the view that a risk is both a descriptive and a normative concept. In this chapter, it will be assumed that a risk is always negative, bad or detrimental in some way and that it is composed of both chance and harm. Since avoiding harm can be said to be rational, and since, on one view, to call something rational is to endorse it as a means to our ends, perhaps risks are not in fact normative. All that is required is the descriptive sense of risk, which points out possible harm; if we have as one of our ends the desire to avoid harm, then avoiding risk is a means to our ends. On this understanding, taking a risk is not itself intrinsically a normatively disfavoured activity, but is rather likely just to be an inefficient means to antecedently given ends. However, I will show that there are limits to this explanation, which does not adequately address what it means to call something a risk. I will then discuss the two components of risk, chance and harm, to show that there are a number of dimensions to take into account in determining what is harmful, and that a risk is better characterized as the chance of threats to what is of value to a person. While the normative concept of risk is based on the fact that risks are usually negative or harmful, risks are sometimes considered to be good, which could be a potential problem for the account I propose. Often, however, these seemingly ‘good’ risks are not risks at all, or they occur in limited or constrained circumstances which set them apart from other types of risks which can be normative. Having established that a risk, on the normative account, is usually negative, and that what makes a risk harmful is often a matter of circumstance or subjective evaluation, I will explain how it might also prescribe action. On my account, risks are normative in the way that morality is normative, although they are best understood as weakly normative.
4.2 A Basic Understanding of Risk

Many people have noted that while risks are a familiar part of everyday life, defining risk is not easy.170 Usage of the word in ordinary conversation does not seem to fit clearly into any easy analysis unless it is narrowly defined in some technical and formalized way. There is an impressive, multi-disciplinary array of uses, interpretations and definitions that makes one doubt whether the common, everyday understanding of risk is accurate at all. Some writers go so far as to claim that there is no correct definition and that the choice one makes in using a particular definition is in fact “political.”171 All of these uses create so much ambiguity that a specific sub-discipline of sorts exists in which scholars attempt to clarify the term “risk.”172

In fields such as risk analysis, engineering, health care and economics, risks are measured, quantified and compared. Some have argued that the qualitative or evaluative part of risk is in fact an entirely different aspect or type of risk from its probabilistic measure. This has led to a distinction between subjective and objective risk.173 So-called objective risks are understood as the product of some scientific method, research and analysis.

170 Smith, “Mad Cows and Mad Money”; Hansson, “Philosophical Perspectives on Risk”; Fischhoff, Watson and Hope, “Defining Risk”; Hacking, “Risk and Dirt”; Garland, “The Rise of Risk”; Cranor, “Toward a Non-Consequentialist Approach to Acceptable Risk,” 36; Gillette and Krier, “Risk, Courts and Agencies,” 36.
171 Fischhoff, Watson and Hope, “Defining Risk,” 30. By “political,” Fischhoff, Watson and Hope explain that the decision expresses “someone’s views regarding the importance of different adverse effects in a particular situation.”
172 Schrader-Frechette, “Risk and Rationality”; Douglas, Risk and Blame; Ruck, “Risk is a Construct.”
In contrast, subjective risks (also known as perceived risks) are based primarily on subjective perceptions, and frequently these impressions are those of laypeople rather than experts. Objective risks are most often found in public health statistics, epidemiological surveys, experimental studies and probabilistic risk analyses.174 Separating the measurable component of risk from its evaluative component makes it possible to apply scientific methods to studies and analyses of risks.175 It is of course much easier to measure one risk against another if risks are fully measurable, since ‘bigger’ risks are worse than ‘smaller’ ones. Non-measurable evaluations of risk are not always completely discounted, since they can be dealt with by qualitative analyses (but are, on this view, separate and distinct from actual risks). Sometimes these qualitative evaluations are included in calculations (i.e. expected utility) and at other times they are considered after the fact as additional features of, or factors affecting, a decision involving risk. On this view, a risk is the probability of an unwanted event, and subjective risk is the evaluation an agent makes either about the risk or about the unwanted event.

173 Garland, “The Rise of Risk,” 56; Fischhoff, Watson and Hope, “Defining Risk,” 31; Rescher, Risk, 7.
174 Fischhoff, Watson and Hope, “Defining Risk,” 31.
175 The distinction between the two types of risk also allows for the distinction between ‘experts’ and ‘nonexperts’. Thus disagreements between experts and the public are sometimes attributed to the irrationality of nonexperts and their “perceptions of risk” which, on this view, are not the risks in question. Fischhoff et al. note that this is a controversial problem and more complicated than it seems at first. Ibid., 31.
Another part of this reasoning about the objectivity of risks is that what counts as harmful or undesirable is assumed and, depending on the discipline, becomes standardized.176 There is little debate over what counts as a harm, even though there may be some debate about how to calculate “actual” or “real” risk, and how to compare one risk against another. Other accounts, such as psychological or social and cultural views of risk, do not make this assumption but rather attempt to address the different ways in which a harm might be understood. They provide more subjective views of risk, but in fact they are simply making the subjective side of risk a central component of their analyses. The distinction between subjective and objective risks is therefore not about what sorts of risk different fields are interested in, but rather about the different aspects of risk that they make central or choose to focus on. There is also a clear distinction between the various technical uses of the word “risk” and use of the word in ordinary language. But just like the technical definitions that vary from discipline to discipline and even from situation to situation, ordinary usage of the word is itself unclear. The term “risk” is used loosely in everyday language, and Lupton argues that

[i]ssues of calculable probability are not necessarily important to the colloquial use of risk. Risk and uncertainty tend to be treated as conceptually the same thing: for example, the term ‘risk’ is often used to denote a phenomenon that has the potential to deliver substantial harm, whether or not the probability of this harm eventuating is estimable.177

In everyday conversations, “risk” can be used as a noun, an adjective, and a verb. We speak of the risk of terrorism (noun), getting involved in a risky financial scheme (adjective), and risking it all at the roulette wheel (verb). Sometimes a person is called a risk-taker when they seek out situations where there is a chance of harm or loss. If successful, like Warren Buffett for example, being called a risk-taker is meant as a good thing; if unsuccessful, however, like an investor who loses their house and goes bankrupt, it is meant as something bad. While there are exceptions, in many of these different instances a common idea is being communicated even if the specific details can be quite different; that is, that something unwanted may or may not occur. This unwanted or undesirable consequence or outcome may occur through some action an agent takes, or it might be imposed on a person, society or thing through no action of their own; it might occur now, later today, tomorrow or twenty years from now; and it might be caused by some unknown agent with or without a motive. Any analysis of the nature of risk and the proper way to define it is bound to be very complex. In Chapter 1 I provided a distinction between the descriptive and prescriptive senses of risk. When risk is prescriptive, it serves as a warning or recommendation that some action ought to be taken. This warning arises from the fact that a risk involves the chance of something harmful or undesirable. What is needed for the present discussion, which explores how to account for the prescriptive force of risk, however, is a simple understanding. I am not attempting to present a thorough analysis of risk. This account also does not have much to say about how to choose among risky options, how to decide what counts as more or less risky when options are expressed in terms of their probabilities, or what the best strategy might be in the face of a risky choice. What I will mean by risk in this discussion is the combination of both chance and harm, where the intuition is that, in general, risks are bad.

176 For example, in Chapter 2 I explained that in risk analysis, risks refer to the annual number of fatalities.
177 Lupton, Risk, 9.
This is not a claim about the philosophical concept of risk but is based on the generically accepted conception that is used in the literature, including technical and moral discussions.178 It is also what people usually mean when they call something a risk or risky. A normative account of risk begins from a general definition of risk as the potential for the realization of unwanted, negative consequences of an event. On this view, then, a risk necessarily involves both an element of chance, meaning what is possible but not certain, and an element of harm.179 If the outcome of an action or event is known with certainty, this is not risk. Similarly, if the outcome of an action or event is not unwanted, harmful, dangerous or simply ‘bad’ in some way, this is not risk either. Suppose that we agree that a coin coming up heads is an unwanted event. If I flip a coin that has “heads” on both sides, heads will certainly come up. In this case, even though we have agreed that heads is ‘bad’, there is no risk because the unwanted event will certainly occur. If the coin is a normal one and we have not agreed that heads is ‘bad’, flipping the coin does not involve risk either. If I am indifferent about the outcome of an event, even though it is unsure or uncertain, then there is no risk.

4.3 Prescribing Action: What is Rational is Normative

To this point I have proposed that a risk is always something negative. This is not a claim about the nature of risk, nor have I offered a rigorous argument in support of it.

178 See Ericson and Doyle, Risk and Morality; Hacking, “Risk and Dirt”; Rescher, Risk; Bernstein, Against the Gods; Pidgeon et al., “Risk Perception”; Zinn, Social Theories of Risk and Uncertainty.
179 I have made no attempt here to use “chance” in its technical form. I use it only as a synonym for probability or uncertainty, although I recognize there is some debate about this.
Since different instances of risk can involve probability, uncertainty or chance, I use the terms interchangeably to indicate that risk does not involve certainty and to emphasize the nontechnical use I am making of the terms.

Since what makes risk prescriptive is the implicit message that some action ought to be taken to avoid something harmful, this generic conception provides the basis for further developing the normative account. Of course, a risk is not simply something that is harmful, but rather something that is possibly harmful, where there is no way to determine exactly what will happen although estimates of its likelihood can be made. While it makes sense that people will want to avoid what is undesirable, further explanation of how risks can prescribe action is needed. Suppose you are about to embark on a cruise in the Indian Ocean when your friend calls to tell you that sailing on this particular ship and on this particular route is a risk, since the area has seen a rise in piracy. The intent of your friend is twofold. He or she is communicating factual, descriptive information to you, but is also prescribing some action. Norms must provide agents with reasons for action, and if risks seem to provide such reasons, it is possible to think of risk as normative. Telling you that your adventure is risky is telling you that there is reason to avoid that action, or at least to be cognizant of the possible harm. If you choose to ignore this implicit message, then you should have a reason for doing so. Dismissing the information out of hand would be odd, and might suggest some kind of lapse in judgment or cause concern for your friend. When we call something a risk, we are not simply offering a neutral description of some fact or event in the world. Usually, to say that something is a risk is to offer a warning of sorts to listeners.
In contrast, to explicitly identify an act or situation as not risky or without risk is to offer a weak endorsement of it rather than to merely describe it.180 If the travel agent tells me that a cruise in the Indian Ocean is not risky, I will understand that his or her message is not merely factual but an endorsement of my choice to go on this cruise. I will probably also believe that I should not be concerned with coming to some harm due to my decision. By indicating a lack of risk, the travel agent means to allay my fears. It might be argued that it is simply reasonable to assume that people should want to avoid something harmful and that, in fact, doing so is rational. If this is the case, then perhaps risks are not in fact normative but rather point out what it is rational to avoid, and it is this rationality which provides reasons for action. Schmidtz, for example, explains that rationality is a source of normative force. He argues that it is possible to “derive a normative conclusion from facts about a person’s ends and facts about what would achieve those ends, which means that facts about ends have a certain potency, a certain normative force: they give us reasons for action.”181 For Schmidtz, calling a choice or decision rational is not mere description, but neither is it mere endorsement. On his view, we say that X is rational because we think we have good reasons for it, or because X warrants endorsement.

180 Note that it is not necessary to explicitly use the term ‘risk’ to communicate that something is risky. It is possible for your friend to say that there is piracy in the area you are headed to, and this would, or should, still be understood as
a risk, since it involves the chance of some harm occurring. A risk can be understood without being named as such, but I am being very explicit in this explanation.

Thus, “to call a choice rational is, first, to endorse it, second to have a reason for endorsement, and third, to have as one’s reason for endorsement that the choice will serve the chooser’s ends…rational choice, as understood here, involves seeking to choose effective means to one’s ends.”182 This approach begins with the more general idea that rationality consists of a set of principles, norms, or reasons that apply to all agents capable of understanding them and which serve as guides for what to believe and how to act. An agent is rational insofar as her deliberations, beliefs, actions, and so on, conform to, and are guided by, those principles. The principles of rationality constitute genuine reasons to act, intend, or believe; that is, those reasons must be internal, or capable of motivating any agent to which they apply. Therefore, if it is possible to say that one of our ends is to avoid the chance of harm, then we can say that it is rational to avoid risks. Schmidtz proposes a means-end conception of rationality, so perhaps avoiding possible harm is a very simple, or basic, case of means-end rationality. The descriptive sense of risk is thus all that this view requires, because it points out that an action that may appear to be a means to one’s end is in fact not such a means, or not the best means available. While the argument that it is rational to avoid harm is convincing, I think it does not adequately capture what is meant when we call something a risk. My point is that risk is also prescriptive, meaning that it can provide reasons for action. There is a more detailed account for explaining the normativity of risk, made by comparing it with morality.

181 Schmidtz, Rational Choice and Moral Agency, 7.
182 Ibid., 12.
It also addresses the following two important features of prescriptive risk. First, it can often be said that taking a risk (or choosing not to avoid a risk) is to do something for which one may be held responsible. Even if a person chooses to take a risk for whatever reason, and they have sufficient knowledge of what kind of harm may occur, it is usually the case that they are held to account, at least in part, for the outcome of their actions. Similarly, we hold people accountable if they violate moral norms, such as those against stealing or killing, by imposing some form of punishment. If a person takes a risk that is thought unnecessary, avoidable or involving a high degree or chance of harm, we hold them to account, although to a lesser degree, and usually without punishment.183 Risk can thus be understood as weakly normative in that it gives us reason for action and, generally, it prescribes avoidance of whatever harm may occur. If we choose not to heed this prescriptive message, we must have some reason for doing so, since we will likely be held to account, although often this will amount simply to condemnation. Second, a number of empirical studies have found that people often make seemingly irrational choices and/or very poor judgments when faced with a risky situation.184 It seems that what is rational (avoiding harm) is not always what people tend to do. The problem, however, is that the seemingly irrational and mistaken responses gathered in many of these studies are based on the assumption that there is some standard and universally accepted conception of the harm a risk involves. In the following discussion, I will provide an explanation of the two components of risk—chance and harm. Since what makes something harmful or undesirable is a matter of subjective evaluation rather than an objective fact, there will be a broad range of possible harm.
Based on this, a risk is more fully understood as the chance that something of value to a person will be negatively affected.

4.4 Chance or Uncertainty in Risk

As I have explained, on this account risks have two components: chance and harm. The first component I will discuss is chance, or uncertainty. I use these two terms nontechnically and interchangeably, merely to capture the idea that some unwanted event may or may not occur.185 What is important is that a risk does not tell us with certainty whether an unwanted event will occur. Risk necessarily involves uncertainty but the reverse is not true: there can be uncertainty without risk, for instance, but where there is certainty, there can be no risk. A risk is uncertain because it is not known whether the actual undesirable event will occur or not. It may be quite likely, unlikely or very unlikely, but there is no certainty about it. The probability of the unwanted event occurring might be known because it has been calculated by some means, but whether the unwanted event actually occurs cannot be known until after the fact. It is sometimes possible to calculate the likelihood or chance that some event will occur. Such information simply makes us better able to understand a situation. In a game of tag, for instance, there are many different ways to determine who is “it” at the start of the game. If there are four people playing, four stones are placed into a bag and one of them is marked with an “X.” There is therefore a one in four chance of pulling out a marked stone. Knowing one’s chances of becoming “it” might provide helpful information concerning the actions one will take or the decisions one will make.

183 Often we say that a person does not need further punishment because facing the outcome of their decisions is punishment enough.
184 These are discussed in more detail in Chapter 5.
If one is very averse to being “it” in tag, then it would probably make sense either to avoid the game or to play with a large number of people, because it is then less likely that one would pull the marked stone. But even with many players, there is no guarantee about what will happen.

185 Uncertainty as I use it here also includes doubt, indeterminacy or indefiniteness; the negation of certainty. While there is a substantive discussion concerning the meanings of most, if not all, of these terms, it is not relevant to my argument. My interest is in establishing that a risk requires that the outcome of a decision or event is not certain within reasonable limits. Some argue that nothing is certain, but this is unhelpful. Throwing a pen into the air has the ‘certain’ outcome of falling back down, and although it is true that events may transpire to counter the forces of gravity, it is generally accepted that the pen falling is a ‘certainty’. Risks point out those things that are not like pens falling when thrown in the air. If I throw my pen out the window of a tall building, there is no risk that it will fall, because it will fall if there are no other forces to stop it; but there is a risk that it might hit someone.
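The arithmetic behind the tag example can be made explicit. The following is a simple illustrative sketch, not part of the thesis text: with one marked stone among n, each player’s chance of drawing it is

```latex
% Chance of drawing the single marked stone from a bag of n stones:
P(\text{``it''}) = \frac{1}{n}
% With four players, P = 1/4 = 25\%; with twenty players, P = 1/20 = 5\%.
% The chance shrinks as n grows but never reaches zero, which is why even
% a very large game offers no guarantee of avoiding the marked stone.
```

This is simply the uniform-probability calculation implicit in the example; it shows why playing with more people lowers, but never eliminates, the chance of pulling the marked stone.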
Claims about risk are, literally, uncertain knowledge claims—impressionistic guesses, informed estimates, and probabilistic predictions about a future that cannot fully be known.186 Uncertainty can refer to those cases where no measurement or estimates of an event’s likelihood can be made, or if they can, they may be merely speculative. For example, there is uncertainty about the effects that genetically modified organisms will have once released into the environment, but so far there is no way to provide an estimate of any possible harm they might produce. Some of the detrimental effects such as genetic drift are known to be possible, while other effects are simply unknown. Many risks are also like this since it is often impossible to know all the relevant factors that might contribute to events and their consequences. Sometimes the difference between risk and uncertainty is conflated, but what I think needs to be made clear is that on the account of risk that I am interested in, risk necessarily involves a lack of certainty which may or may not be expressed explicitly in terms of some estimated or measured probability. Calculations of the actual likelihood of a risk are not always necessary to understand something as a risk, although often such information may be available to us.  186  Ericson and Doyle, Risk and Morality, 52.  92  In the descriptive sense of risk, this component is usually the most important, at least if some quantification is required or needed to help make decisions. In the normative or prescriptive sense of risk, the element of chance is what distinguishes a risk from mere harm or danger, but is less involved in providing reasons for action. Since there is no certainty, the prescriptive force is somewhat weakened because the cause and effect relationship between taking a risk and experiencing something harmful is not guaranteed. 
Therefore, a risk might provide reasons for action, but choosing whether to act in the way prescribed will be a matter of judgment, even when the probability of occurrence is known.

4.5 Harm and Undesirability in Risks

Indifferent events, whether or not their outcomes are known, are not risks, since they lack the critical feature of harm, loss, danger or some other type of unwantedness. At the same time, a risk cannot be merely qualitative, because bad events that happen and were always certain to happen are not risks—they are merely bad events. The point is that when there is risk, as I have defined it, there is some chance of a harm or negative event occurring, regardless of the form it takes. This seems to match up with how we think about risks in general. It would be odd, for instance, to hear the weatherperson on television speak of the “risk of sunshine and temperate weather tomorrow.” Sunshine is usually not an unwanted event, but rather a desirable state of affairs. Whether someone’s actions are actually risky or not is a matter of fact and not a matter of opinion only once what counts as harmful has been settled. A risk must be bad or unwanted to someone, and this can include objects or other people. We think that using pesticides on our lawns is a risk because of the possibility of harm to animals, insects, other plants, and the water supply (the last two over the long term). In this way, there can be risks to things like plants, water and animals if someone understands the possible consequences of some event as harmful. This subjectivity applies even to the way we speak about risks to animals, objects or the environment.
For example, there is a site off the coast of South Africa known as the “Ring of Death”, populated by thousands of fur seals who use one of the rocky islands to raise their young.187 The area is known to attract the highest number of great white sharks in the world’s oceans, because the geographical features of the island make it a perfect hunting ground. Marine biologists consulted in a program about this phenomenon stated that the fur seals were “at great risk” when they ventured into the sea to find food. The adolescent seals were considered the most likely to fall victim to the sharks because they “took risks” by travelling alone and swimming close to the surface. It seems that the adolescents had not learned to travel in groups like the adults, or to leave the island at intervals so that they could see whether the group before them had been attacked (thus minimizing the risk). The seals in the “Ring of Death” are at risk, on our view, because we ascribe our values to the situation. For us, being consumed by a great white shark is an obvious harm, and so the chance these seals have of falling victim to these fish is a risk—from our perspective. Additionally, we might think that the seals are at risk because we value the seals’ lives even if they do not; we want the seal population to continue to thrive, or we are upset merely knowing that seals are eaten whole by sharks because we can imagine it is a painful, frightening event. Furthermore, when we say something along the lines of “the planet Jupiter is at risk of being destroyed by a passing comet”, we do not mean that Jupiter is capable of being harmed in the way that a person can be. A planet is not the type of thing that can be harmed, or which can experience loss or injury as we understand these things. Thus, the planet is at risk because we make some type of evaluation concerning its existence and not for any other reason.

187 This example is from the Discovery Channel’s “Shark Week”, airing on Aug. 3, 2009.
A risk is generally a negative concept since it carries with it the possibility of some harm or of something undesirable.188 The second component, harm, while usually taken for granted in technical risk analyses, is more difficult to define than the first. A harm could be anything that either is, or is perceived to be, negative, detrimental or simply ‘bad’ to the person or people who experience it. Physical injury, death, damage to or loss of one’s property, economic loss, and psychological suffering causing stress or anguish, the loss of something cherished or important, or the loss of money, time or health are all examples of things that could cause us harm. A disease is a harm because it decreases the state of one’s health. Someone who is already sick still experiences disease or injury as a harm since it involves a further decrease in health. Similarly, the loss of a job is a harm because of the impact it will have on one’s ability to provide for themselves, although it can also be harmful in other ways. Losing a job can damage one’s ego or social status and so if such things are very important to a person, there is more than just one dimension of harm involved. Additionally, the undesirable events associated with risk can occur in the present or the future. It is therefore possible to face imminent risks (such as when we cross the 188  I use “harm” interchangeably with “undesirable effect”, “bad” or “negative” and other expressions to avoid excessive redundancy. I do not mean to make a claim about “harm” but it is used throughout the risk literature as a generic term and so I have adopted this standard practice.  
street in front of a speeding car) but also future risks, such as when we are exposed to a toxic chemical that takes years to adversely affect our health.189 To some, especially the young, it is difficult to take the possibility of future harm seriously, so unless the effect is relatively immediate, a risk of something undesirable in the distant future will perhaps not seem like a risk at all. The most important aspect of harm is that it adversely affects our interests. This suggests that harm is a matter of subjective evaluation. As Garland explains:

Dangers are dangers for someone—for specific individuals or groups or species under certain conditions—nothing is dangerous as such, not even floods and lightning. On the other hand…anything and everything has the potential to become a danger to something or someone. All that is required is that there are interests or values that the thing may adversely affect.190

It seems problematic to say that a risk involves something that is objectively harmful or that is harmful to everyone in the same way. The reality is that there are some types of harm which, we may all agree, are particularly bad. Death, for instance, is usually considered to be bad or harmful, although even this is not always true. When experts report data from studies involving risks as they define them on the technical view, they provide us with vital information about the likelihood of some very specific sort of harm, such as death or injury, that is usually expressed as an average for an entire population. When we hear that the risk of developing lung cancer from smoking is 25%, for example, we are provided with data telling us about the chance or likelihood of this unwanted event, but it is unclear what such data means exactly. More information is needed to make it clear which segment of the population the statistic applies to, and who might be excluded.
189 For further discussion about future risks see Parfit, “Future Generations: Further Problems”; Garland, The Rise of Risk, 50-51.
190 Garland, The Rise of Risk, 51.

Even if such information is known, the subjective evaluations a person might make about the harm involved in developing lung cancer, as well as other considerations such as their age and state of health, can make a difference in how a person understands a risk expressed in this way. The probability is objective, yet the risk of developing lung cancer can mean different things to different people. A young person who is just starting out in life might avoid smoking because of the risk, while a person dying of terminal cancer might not think of this as a risk at all, at least for himself or herself; in fact, the statistic might not even include them to begin with. A risk is not best described simply as chance and some generic harm. In Chapter 3 I noted that Rescher discusses the difficulty in weighing risks against one another when they involve qualitatively different types of harm, which can include injury, illness, financial loss or boredom. He suggests that a problem arises from differences in how people appraise negativities (e.g. their severity), differences that are based on what people value.191 For example, some people might prize reputation and honour above physical pain or discomfort, while others would avoid discomfort at all costs. What is very bad for one person might not be so bad for another. Rescher claims, therefore, that values are an issue only when we want to make comparisons between risks, or assessments of their size or magnitude. It is difficult, however, to think that values do not matter in determining what is a risk as well. Some risks threaten what we might refer to as primary goods, and so it is reasonable to say that they are not always entirely subjective. Rawls explains that primary goods are things “that every rational man is presumed to want.
These goods normally have a use whatever a person’s rational plan of life.”192 However, Rescher does not make this distinction and seems to think that everyone understands all types of harm in the same way, although they might rank them differently.

191 Rescher, Risk, 26.
192 Rawls, A Theory of Justice, 54.

On my view, risks involve the chance of harm, where a harm is whatever negatively affects something of value to people. Even though risks may involve subjective evaluations of harm, some can also be considered objective in the way that primary goods are seen as objective (i.e. death). It is of course possible for a person not to care about or value something and still recognize risk when it is present, because they recognize a threat to what someone else might value. Many different sorts of things can be harmful or undesirable, and it therefore matters that a person’s recognition or judgment of some situation as risky may depend on his or her perspective. Returning to the cruise ship example, if you tell a different friend that you did not go on the cruise ship in the Indian Ocean because it was too risky, they will likely understand at least part of your reasoning even if they do not have more information or, in this case, know much about piracy. Your reasoning is implicit to the extent that something harmful or undesirable ought to be avoided, or caution exercised, and that there are good reasons to avoid it. However, further explanation or justification may be required. If someone tells me a situation is risky, I may need more information to understand why this might be if it is not immediately obvious to me; upon further evaluation, I may come to agree or disagree. What I value might of course differ from what someone else values, hence what is risky for me might not be risky for you. Similarly, I might judge your actions as risky when I think about them from your perspective and whether they threaten something of value to you.
At the same time, it is not necessary for us to agree on what sorts of things are risky in order to understand each other.

Good Risks and Inverted Risks

The normative concept of risk is based on the assumption that risks are negative and therefore prescribe some action to avoid or minimize whatever harmful consequences might occur. It might be argued, however, that instances where risks are good, or even desirable, could prove to be a complication. If a risk can in fact be good, then the action it prescribes is to seek out, rather than avoid, whatever effect the risk might produce. But if it is not clear whether a risk carries the message to avoid or to seek out the consequences, then the normative account is less convincing. It is therefore worthwhile to consider what makes a good risk ‘good’ and whether the account is challenged by this possibility. Good risks can be understood in a few different ways. It is easiest to classify them into two groups: risks that are good before the outcome has been determined, and risks that are good in retrospect. I will discuss each group in turn. First, a good risk can refer to an unwanted or harmful event that either will not occur or is very unlikely to occur. For example, a children’s game I like to play at the carnival involves a hundred rubber ducks floating around in a trough of water. Each duck is marked on the bottom with the size/class of prize you “win” if you select it. There is one duck, wearing a black eye patch and a pirate hat, which is marked with an “X”, meaning that you do not win any prize at all. The game involves paying a minimal price (one dollar) and having a player scoop a duck of their choice out of the water with their hands. The possible prizes (usually small stuffed toys) are all worth at least one dollar if you were to buy them in a store, or more depending on the prize the duck indicates you have won.
In general, playing this game seems like a good risk since you will win something as long as you avoid the very obvious pirate-duck. Similarly, a good risk might also refer to a decision between two good options, such as choosing which birthday gift to open or which movie to see if both are appealing. Another way a risk might be good is when it involves either very minimal levels of harm (say, losing a five-cent bet) or a very low likelihood of occurring (making a bet with a 98% chance of winning). A risk might also be good if it involves both minimal levels of harm and some potential benefit, whether likely or not. In such cases, the higher the perceived value of the benefit, the better the risk seems to be. Playing the lottery, for example, might be a good risk of this type since the low price of a ticket is minimally harmful but the potential payoff if one wins can be quite high. Alternatively, if you placed a one-dollar bet on a 90% chance of winning one million dollars, most people would call that a good risk as well since, again, the harm is minimal but the chance of a good outcome is high. If the potential benefits are thought or perceived to outweigh the potential harm, the risk might also be good.193 Vaccines might involve some chance of undesirable side effects or, in some cases, a small chance of developing the illness the vaccine is meant to protect against, but they are recommended to us since the overall benefit for both an individual and often for the community as a whole is thought to outweigh the possible harm to both the individual and the community. The second group of good risks includes those that are good in retrospect either because no harm was realized, or because no harm was realized and something beneficial

193 In this case I do not mean the absolute magnitude of the possible gains, or the expected value (probability times magnitude), since the cases I am describing are general in nature.
Part of my criticism is that it is very difficult to determine the absolute magnitude of harm given its subjectivity and incommensurability with other harms. In this example, it is enough that a person perceives the overall outcome as more beneficial than harmful; they do not need to be correct about whether this is actually true.

resulted instead. It does not mean that in such cases there was no chance of harm, but that no harm was experienced even though it could have been. In general, if taking a risk turns out well, it is considered to have been a good one. And if a person manages to take a lot of good risks, then being a risk taker is also a good thing, particularly the more successful or wealthy they become as a result of it. Finally, some people are not always as risk-averse as might be predicted: sometimes they actually appear to find risk desirable.194 They actively seek out risk in various activities such as riding on roller coasters, backcountry skiing, rock-climbing or hang gliding. Although there is a possibility of harm that can sometimes be serious or life threatening, people enjoy the thrill they feel or the satisfaction they derive from pursuing these activities. While there are a number of different ways in which a risk might be said to be good, there are some important qualifications to make. As I have stipulated, a risk has two components, both of which are necessary. Thus, for decisions or situations involving no uncertainty, or amounts so small as to be negligible, there is no risk. This is also the case when there is uncertainty but no harm. In those cases in which there is a risk of a negative outcome, but in which the outcome itself is only slightly undesirable, or where its likelihood is very low or even trivial, it is easier to see how they might be good.
Note, however, that this would exclude those cases with very severe but unlikely possible outcomes, such as a small but not insignificant chance of death: the likelihood of such a severe harm, even though very small, is difficult to think of as good on most views. Playing the lottery for a few dollars is a risk only if losing the few dollars is undesirable. If a person is indifferent to losing the money because they have a comfortable standard of living and some disposable income, for instance, this might not be a risk at all, but just a chance (usually a fairly low chance) of winning a lot of money.

194 Machlis and Rosa, “Desired Risk.”

These seem to be special cases where it is still recognized that there is some chance of an undesirable event, but this fact is strongly constrained. First, there is a risk of some sort, but second, it might be a good risk, meaning that one does not need to be as concerned as one might be in less constrained (more usual) circumstances. Generally, to call something a risk does not involve confusion over whether this means that there is a chance of something good or the chance of something bad occurring. The other examples frequently occur within other limits or constraints, or with background assumptions. The first constraint or limitation is where the ‘goodness’ of a risk refers to the fact that all turned out well in the end. If a person makes risky investments but is in general very successful (like Warren Buffett), the risks are considered good; if unsuccessful, however (like an investor who loses their house and goes bankrupt), the risks are not so good. In these cases, the risk is only good when the outcome is good in retrospect, after it is known what actually happened. The fact that the bad outcome was not realized does not alter the fact that it was always possible. To say it was a “good risk” to take because the bad outcome did not come to pass is to ignore the fact that the good outcome was not inevitable.
Evaluating a risk after it has been taken and the outcome determined does not provide guidance on whether or not there was a risk to begin with, or whether it was “good” to take (unless the results show that one either knew or should have known that only good outcomes were possible). Another way good risks are constrained is when the conditions under which they occur are controlled in a fairly strict way. Riding a rollercoaster is risky, but it is a popular and enjoyable activity for many. Part of the reason it can be enjoyable is that considerable effort goes into the design, regulation and enforcement of safety standards. Operators of amusement parks cannot afford to have their customers hurt on a regular basis or they would go out of business. This does not mean that there is no risk in riding a rollercoaster, but it occurs in a controlled setting and under strict guidelines, so in some ways the risk is more illusory than real. The conditions of other good risks are also confined because usually the possible harmful consequences will affect only the person taking the risk. Playing the lottery or gambling where the chance of winning is high, where the cost of playing is very low, or where it is a source of entertainment may be good risks even if a person loses. However, if a person does not have much money to gamble with, so that losing means they are not able to care for their children, we are less likely to say that the risks they take are good. Even if it turns out that they win some money, we are still likely to think that the risk was a bad one because it was possible that the children could have suffered. There are also constraints on those risks that are desirable or enjoyable and which people actively seek out, like those associated with activities like rock climbing.
Such cases are atypical because even though the risk of harm is desired and does in fact seem to be intrinsically good (especially when a person considers a safe activity to be “boring” or unrewarding), they occur under specialized conditions where some degree of control can be exerted in order to reduce the chance of harm. In such cases, the risk of harm is desired because the process of avoiding a possible harmful result is something that a person values, and it can be rewarding due to the skill, concentration and preparation it requires. Machlis and Rosa explain further that what makes taking risks enjoyable is the personal control over the potentially harmful circumstances or environment that one can exert.195 They claim that desirable risks are always voluntary and often require specific sets of skills to perform the activity. These skills are necessary both to minimize the chance of harm and to provide some control over one’s environment. There is a difference between someone who seeks out risky activities like mountain climbing by learning the necessary skills and using the right equipment, and someone who goes mountain climbing without any knowledge or training. The first person has attempted to limit the degree of risk they face and enjoys the challenge of (hopefully) achieving a positive result. The second person has made no effort to limit the chance of harm, and even if on their view the risk is desirable, most people would think that it is reckless or ill-advised behaviour. It may still be argued that some risks are in fact good, and so a more thorough analysis might be required. However, where it might still be said that the risk is good, there is one last consideration to address. To call something a risk is not to mean that either a good or a bad outcome might occur. The qualification “good” needs to be added in order to indicate that the situation is different from the usual instances of risk.
If risks are normative because they give us reasons to avoid some harm, then a good or desirable risk means that the situation is special in some way, and that the usual action one ought to take or the caution one ought to exercise is not necessarily warranted. Most of the time, the prescriptive sense of risk will likely refer to situations without the contrivances or background assumptions of the so-called good risks I have described.

195 Machlis and Rosa, “Desired Risk.”

Finally, there is one other case to consider. Sometimes not taking a risk can result in a missed opportunity or in losing out on something that is positive or beneficial. This is an example of what Rescher calls an inverted risk. He explains that

…it is useful to distinguish between an ordinary risk, namely the chance of misfortune, of having something bad happen, and the inverted risk of foregoing the occurrence of something good, of losing out on something positive. With such inverted risks no actual harm need enter the picture at all, the “loss” at issue can be a matter of lost opportunities alone—the quasi-negativity of a failure to realize “what might have been.” Such cases of potential regret (as it is sometimes called) turn on the prospect of missing a possible benefit: the finder of a lottery ticket only risks not winning. Whenever we ‘buy-in’ on a gamble we run the (direct) risk of losing our stake; but if we ‘opt out’ of the gamble we run the inverted risk of losing out on the prize.196

Foregoing the chance of something good might produce regret and lost opportunities, but on his view neither regret nor a missed opportunity is harmful or counts as a negative outcome.
This leads to the conclusion that “virtually any uncertain-outcome situation—even those in which no actual loss or injury is in prospect—can be considered as involving the element of risk.”197 While such cases do seem to extend the range of situations that might include risk, it is not clear that this range is as broad as Rescher suggests, encompassing almost any choice or situation with an uncertain outcome. There are two issues to consider. First, an inverted risk does not seem to occur when outcomes are neutral. For instance, if a person opts out of a gamble where the prize is irrelevant or meaningless to them (e.g. opting out of a children’s bet to win one thousand marbles), it is unlikely that they will experience regret over losing out on a prize that is of no value to them or consider this to be a missed opportunity. Foregoing the chance of something good (valued) is an inverted risk because it involves a lost opportunity or potential regret, but this is different from foregoing the chance of something neutral. Uncertain-outcome situations that involve only the chance of some neutral event occurring can therefore be excluded from those considered to involve risk. Second, if harm is narrowly construed, the conclusion Rescher draws seems to make sense, but there is reason to think that a missed opportunity or potential regret could indeed be harmful or undesirable to someone, especially in real life circumstances. If a person has aspirations or goals they hope to achieve, then foregoing the chance to realize them, or the chance to work towards them, negatively affects something they value. The inverted risk example seems to include those situations where what counts as a harm is a matter of losing something of value to a person. The range of situations involving risk can therefore still be understood as those uncertain events with the chance of some harm.

196 Rescher, Risk, 10.
197 Rescher, Risk, 10.
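The account just defended can be summarized schematically. This is only a rough gloss of the definition argued for above; the predicate names are informal placeholders introduced here for clarity, not notation used elsewhere in this thesis:

```latex
% Schematic statement of the working definition of risk:
% an event E is a risk for a subject s just in case E is genuinely
% uncertain and E would adversely affect something that s, or
% someone, values.
\[
  \mathit{Risk}(E, s) \iff 0 < P(E) < 1
  \;\wedge\; \exists v \,\bigl[\mathit{Valued}(v, s) \wedge \mathit{AdverselyAffects}(E, v)\bigr]
\]
% An inverted risk is then the special case in which E is the
% foregoing of a valued opportunity, so that AdverselyAffects(E, v)
% holds of a lost opportunity v rather than of a direct injury.
```

On this schema, ordinary and inverted risks differ only in what the uncertain event E is; purely neutral uncertain outcomes fall outside the definition because the existential clause fails.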
Good risks (when they actually occur) and inverted risks highlight a relevant issue. Risks are negative since they involve the chance of harm, but there are different types of harm that might be of concern to people. Risks can be desirable when they provide a sense of satisfaction, control or accomplishment for a person: risky activities can therefore be good if they produce these feelings in someone. Rescher’s inverted risk identifies a situation where sometimes there may be reason to take a risk even though doing so involves the chance of some harm which perhaps ought to be avoided. Not taking the risk seems to involve a harm of a different sort, particularly if we have aspirations that are important to us. Both good and inverted risks suggest that a person may have more than one interest which might be threatened. Where the decision to act and the decision not to act both involve some type of harm or loss, there are two risks to choose between, which are the result of competing values.

A Comparison with Morality

An alternative explanation of the way in which risk prescribes action can be made through a comparison with morality. The claim I want to make is that risk is weakly normative. In contrast, many moral rules are strongly normative in that they provide us with the motivation to act: these may be overridden, but only for special reasons or under extenuating circumstances. Violating a moral rule is difficult if one values morality. When a person fails to act morally, they often face censure, blame or perhaps even punishment from others or themselves, which further reinforces the normative force of morality. Risks, in contrast, provide us with the motivation for action in order to avoid some harm, but they are more easily overridden by other concerns.
For instance, it is possible that a risk must sometimes be taken in order to satisfy the demands of morality, as in the case of putting one’s life at risk to save a stranger from a burning building. Additionally, when a person fails to avoid a risk, they might be accused of having bad judgment if enough information about the risk was known beforehand.198 Even if one takes a risk that they probably should not have taken and is harmed because of it, this may not be grounds for punishment or censure, but the person is still held accountable and might be thought of as foolish or as exercising poor judgment.199 We might even make assumptions about the extent to which we might be able to rely on or trust a person who takes unnecessary risks. A better analogy might therefore be that of the normative force of promise keeping or etiquette, both of which are more easily overridden by other considerations but which nonetheless serve to prescribe behaviour and action. By weakly normative I mean to suggest that the actions prescribed by a risk are nonbinding, or act as suggestions for action. Unlike moral norms, which require good reasons for overriding, risks do not always compel one to act.

198 But if there was no way of knowing about the risk, they are not blamed.
199 Recently, people taking unnecessary risks such as backcountry exploration or skiing out of bounds have in some cases been fined or charged with the cost of their rescue. Such fines or charges amount to a form of punishment but are meant both to be a deterrent to others and to compensate for the high costs of such dangerous rescues. This is not the same kind of punishment we give someone who commits theft or murder, where imprisonment is meant specifically to punish the transgressor. Compensation to the victim or to society is a secondary consideration.
In fact, most of the time it is more likely that a risk provides reasons for avoiding harm that are taken into consideration alongside other concerns when a person is deciding how to act. Since risks occur in many situations, we are constantly facing risky decisions, yet it would be a mistake to think that we treat risks the way we treat moral norms. Thus, the way to best understand risks is as providing weak reasons for actions. For example, when faced with the choice of getting on a plane, one ought to consider the risks, but since the average person understands that the risks are minimal, it is unlikely that the action prescribed (avoid the risk or at least take it into consideration) will have much force. On the other hand, we would probably find it understandable if a person opted not to get on a plane because they felt it was too risky. We might disagree but at least understand such reasoning. A risk is therefore weakly normative in the same way etiquette is weakly normative. It provides reasons for action, but such reasons are not necessarily compelling. In fact, taking a risk can be rewarding if one has considered all (or many) possible outcomes and weighed various considerations against each other, in a way that usually cannot be said of violating a moral norm. However, it remains that to call something a risk is to suggest there are reasons for taking whatever action is required to avoid a possible harm, but such reasons will depend on the severity, extent and character of the risk involved.

Ethical standards are normative because they prescribe a certain action or behaviour. These standards provide guidance for our decisions, our understanding and our interpretation of the world. As Korsgaard explains, ethical standards

…do not merely describe a way in which we in fact regulate our conduct.
They make claims on us; they command, oblige, recommend, or guide…When I say that an action is right I am saying that you ought to do it; when I say that something is good I am recommending it as worthy of your choice. The same is true of the other concepts for which we seek philosophical foundations. Concepts like knowledge, beauty, and meaning, as well as virtue and justice, all have a normative dimension, for they tell us what to think, what to like, what to say, what to do, and what to be.200

When we make assertions about what is right or wrong we do not just describe but provide reasons for our assertions. In fact, we must be able to back up our claims about why one action is right and another is wrong in order for morality to motivate our actions; otherwise it does not have any special status. Morality is special, however, and it is distinct from other types of claims we might make. On the one hand, expressing a preference for A over B does not endorse A over B unless some other factor obtains, such as wanting to conform to or please the person expressing the preference. On the other hand, claiming that A is right is an endorsement of A, and the reasons given for conferring ‘rightness’ give us reasons for thinking it is right as well; these reasons do not (or should not) depend on wanting to please or conform. It is important to note here that the reasons provided by standards or principles attach directly to actions and cannot be represented merely as preferences over outcomes. For instance, if I say I think stealing is wrong, I mean to say that there are convincing reasons to abstain from stealing, not that there are convincing reasons to prefer not stealing to stealing. The relationship between reasons and actions is direct for normative claims. When we make claims about what is risky and what is not, we are sometimes doing more than merely describing.

200 Korsgaard, The Sources of Normativity, 9.
In the choice of two options where option A has the chance of a bad outcome while option B does not, there is a difference between saying that option A is bad and saying that option A is risky. By saying that A is bad we might mean to express our preference about it, and so further clarification is required to determine whether our assertion of badness is descriptive or prescriptive. However, if we say that option A is risky, we mean to prescribe some type of action that allows us to avoid some possible harm, even if only weakly. Saying that something is risky provides a warning about acting in a particular way. If nothing else, we mean to give pause to whoever is faced with the decision, and this is not accomplished on a descriptive account of risk.

The Matter of Perspective

Descriptive theories can only weakly account for instances where people seem to make a mistake in their calculations of utility or of costs and benefits, because they assume that any emotional or evaluative judgment is adequately incorporated into the algorithm. The descriptive account usually examines risk from a third-person perspective, which is often inadequate in explaining how people might actually understand a risk and how they might actually behave. Risk in the prescriptive sense is not limited in this way and can accommodate both third- and first-person perspectives. This closely parallels the type of reasoning involved in morality. Korsgaard explains that it is

easy to confuse the criteria of explanatory and normative adequacy. Both, after all, concern questions about how people are motivated to do the right thing and why people care about moral issues so deeply…Nevertheless the issue is not the same. The difference is one of perspective.
A theory that could explain why someone does the right thing—in a way that is adequate from a third-person perspective—could nevertheless fail to justify the action from the agent’s own, first-person perspective, and so fail to support its normative claims.201

Korsgaard maintains that it is possible for a theory to provide an adequate explanation of why or how a person would act morally if they believe that theory. But this is insufficient for understanding what justifies the action. Asking what justifies the claims that morality makes on us is what she identifies as the normative question, which must be considered from the first-person perspective because “you must place yourself in the position of an agent on whom morality is making a difficult claim.”202 Risk also requires this perspective, for two reasons. First, the consequences of taking or incurring a risk are borne directly by a person and, knowing this, can have a psychological effect on what choice they will make. If risks were objective matters of fact, then the undesirability of the harm they might entail would also be objective, although there are many cases where this might not be so. Physical harm for one person might be intolerable, while for another person it might not be as difficult to bear, even though both would agree that it is harmful to some degree. Second, risk does seem to make claims on us in a way that is similar to, though weaker than, morality, and this is reflected in the way we speak about risk. For instance, returning to the example of the weather, when the weatherperson on the news says there is a risk of rain, or that there is a 30% risk of rain, this assumes that rain is negative in some way for most people; furthermore, once you know there is a risk of rain, you ought to act in such a way as to avoid getting wet (the assumption being that this is something that most people worry about).

201 Ibid., 14.
202 Ibid., 16.
It would be odd to talk about a risk of sunshine or cloudless skies since these are not usually bad things, but also because no further claim on our behaviour is warranted when the sun is out. If you go outside without a jacket or umbrella knowing there is a risk of rain, we might think you are silly or careless and you might have to bear the consequences. This is similar to, but much weaker than, the censure you might face if you were caught lying. One important difference in the case of risk is that often the context will determine what sort of action to take. Recently, there has been increased concern over the amount of sun people are exposed to, especially during the summer months, because of its correlation with skin cancer. Therefore, depending on the context and circumstance, if the sun is out and the sky is cloudless, some further action might also be warranted in order to protect oneself. If you go outside without sunscreen or take no precaution to avoid exposure, we again might think you are careless or silly. Risk can therefore change decisions in ways that are similar to how morality changes decisions, yet not quite as strongly, since risks are different from notions of right and wrong, or legal and illegal, and demand less of us. There is no punishment if you fail to heed the warning of a risk except that you might incur the harm that you were warned about. At the same time, risk is not as weak as the rules of etiquette in prescribing action since the consequences of failing to act usually involve serious harm. It seems that risk falls somewhere in between these examples. Even if breaking the law in some way may produce great benefit to someone, the act remains a violation of the rule of law no matter how beneficial it might be.
Breaking the rules of etiquette is merely impolite and, even though one might suffer unpleasant consequences, it does not cross the same kind of boundary or line of acceptability as breaking the law or doing something immoral. Making a risky choice can be seen as crossing some kind of line of acceptability. Singer describes the qualifications subjectivist views of ethics have made so that they do not reduce ethics simply to matters of feeling or desire. He explains that while they continue to take the view that ethical judgments are based on our desires, they will not allow just any desire to form this basis. Rather, they acknowledge that to count as ethical, desires must be passed through a screen that filters out those that do not meet certain conditions of impartiality and reasonableness.203 While I do not intend to comment on the content of his discussion concerning objectivist and subjectivist accounts of ethics, what I do find promising is his idea of a filter. Risk can also act like a filter in this way but, again, not to the same degree. If we decide to follow rules of etiquette, for whatever reason, we will do so unless some other, more compelling rule or issue overrides it. Often, these overriding factors are moral ones, although sometimes they might include our own desires or preferences. In the case of wanting to act morally, however, etiquette will almost never override our actions (or reasons for actions) when we have reason to believe they are right or good. Morality outweighs or trumps etiquette in this way, but it is balanced by other instances of morality or perhaps sometimes other principles. We will not follow rules of etiquette if doing so means we violate the rule of law, for instance. If the rules of etiquette involve the chance of incurring some type of harm which is sufficient for us to want to avoid it, then risk trumps etiquette as well. Sometimes we are faced with situations where there are two possible options but both are risky.
If both options involve the same kind of harm (i.e. physical injury), then it might be possible to make a choice by comparing their severity and types of injury. It is possible, however, that a person can be faced with two options where one is risky because of the chance of injury, but the other is risky because of the chance of compromising one’s sense of morality. At other times, the choice between two options might involve having to take a risk in order to do the right or the good thing, such as risking one’s physical well-being to save someone else. The choice you make will depend on your assessment of what counts as more important to you rather than simply on what will produce the least amount of harm. Just as Singer claims that desires must be filtered through the conditions of impartiality and reasonableness for the subjectivist account of ethics to succeed, a similar analogy holds for risk. When understood as a chance of losing something of value, or as the possibility that what is of value to a person will be negatively affected, the prescriptive sense of risk will necessarily make accommodation for the diversity of values a person can have, which motivate them in different ways. I anticipate that these values are also things that can motivate our actions, and thus are not merely whims or desires but, like Singer’s conditions for subjectivists, are reasonable and action-guiding. This does include preferences and ambitions, however. Unlike descriptive accounts, where a person’s values and preferences are introduced during the decision-making process involving risk, values and preferences are internal to the process in the normative account since they are internal to risk.

203 Singer, “Introduction,” 10.

4.9 Illustration of a Risky Choice

It is possible to compare the difference between the descriptive and normative concepts of risk by considering at what point one’s values influence a decision in the following way.
The first diagram depicts a risky choice on the descriptive view and the second one shows the normative view.

Figure 1. Diagram of a Risky Choice on the Descriptive View

In this illustration, a person’s values influence the stage where risks are assessed in terms of harm and benefit. In this case, risks are not particularly special since they operate simply as yet another source of influence on our decisions, understanding and perceptions of possible outcomes given a particular choice. The action to take is determined by the product of weighing all the different harms and benefits that might be incurred, and what counts as a harm (or benefit) is assumed. Clearly, the diagram does not adequately encompass decisions that are of a moral nature or which have to do with following professional standards or legality. In those cases, the choice of action is more complicated and, even though it might end up being the result of weighing risks and benefits, it is not necessarily so. Moral, legal and professional concerns act like a filter on decision-making and introduce an added dimension of complexity. Sometimes the choices that are available to a person are in fact determined by the importance they place on following the rules of morality or the law. In this way, one’s values can determine which choices are available rather than merely being a factor in deciding how to choose between two options that are determined in some other way. If a person is faced with a choice between two options where one of the options involves the moral duty to keep a promise, the Descriptive Model of decision-making will also not be of much help. Some other process must be employed in order for a decision to be made, and it may end up in a dilemma. In general, if we want to keep our promises, we cannot merely weigh the costs and benefits of possible outcomes for each option.
Our actions must first pass through the filter of promise keeping (or ethics) and then an analysis of the possible consequences can proceed. If Option A means we must break our promise, then we ought not to choose that option unless there is some other consideration that outweighs it. But what outweighs a promise is not something like money or enjoyment. In fact, there are only a few sorts of things that allow us to break a promise (assuming we want to keep them in the first place) and often these are other ethical considerations. We might break our promises if doing so saves lives, for instance, or reduces suffering, and these are reasonable or understandable trade-offs. Breaking a promise in order to gain personally is not an acceptable reason. This process can be understood in the following matrix:

Figure 2. A Model of Choice Involving Promise-breaking

                                           Option B
                                Filter: Promise Keeping     No Filter
  Option A
  Filter: Promise Keeping       Special process             Choose Option B
  No Filter                     Choose Option A             Employ Descriptive Model

When risks are normative, they have effects similar to those of moral standards (and principles like etiquette) on the decisions a person makes. If we say something is risky or is a risk, our intent is not to say that someone will have to do a cost-benefit analysis and make a rational decision in deciding which choice will produce the least amount of harm. Most cases of risk in the real world, outside hypothetical games or gambles, do not necessarily involve one type of harm. What we mean to say is that there is the chance of some harm, which ought to be avoided, and which ought to be considered before, or in some cases instead of, some other decision-making procedure. As such, the normative account of risk can be represented as the following matrix:

Figure 3. The Normative Model of Decision Making Under Risk

                                           Option B
                                Filter: Risk                No Filter
  Option A
  Filter: Risk                  Special process             Choose Option B
  No Filter                     Choose Option A             Employ Descriptive Model

In this case, the methods of making a decision when risk is descriptive, such as cost-benefit analysis, are employed only for those cases where neither option involves a risk, on the assumption that risks are normative. Additionally, it shows that where one choice involves a risk and the other does not, the decision is to choose the no-risk option since the decision begins with a reason for not choosing the risky option. The difference between this procedure and the one involving moral decisions is important since risks are weakly normative, as I have explained. While there might be a reason not to choose the option involving a risk when there is a non-risky option, it might also be more easily overridden by other considerations than the demands of morality will be. However, since risks can be understood not simply as costs and benefits but rather as a matter of what is of value to someone, some process might occur which changes the possible range of options to choose from.

4.10 Conclusion

I have provided two possible explanations for the prescriptive force of risk. First, I explained that since a risk is negative because it involves harm, it is reasonable to assume that avoiding harm is an end that a person might have. If we can say that what is rational is normative because it provides reasons for meeting one’s ends, then we could likewise say that a risk is indirectly normative, or perhaps not normative at all. In this explanation, when we call something a risk, we are assuming that one has as their end the reasonable desire to avoid harm. A risk is something one ought to avoid.
However, as I noted, there are some limitations to this explanation, especially since having as an end the desire to avoid something unwanted does not provide much of an explanation for the responsibility we place on people who have taken a risk when they probably should not have. It might sometimes be irrational to take a risk, but this is not always the case. Additionally, since it is apparent that the context and circumstances of a risk, as well as what is of value to a person, will matter in determining that risk’s character (i.e. what sort of harm it might involve), different people will make different decisions when faced with the same risk. Similarly, there might be differences in how one person decides to act in different contexts or circumstances in their lives. This is certainly inconsistent, but it is not necessarily irrational. The demands of rationality do not require that one neglect context and circumstance. The second explanation I proposed describes how risks might be (weakly) normative in a way that is similar to morality. The responsibility that we assign to people who do not heed the warning of a risk is better explained on this view, as is the claim that a risk might act as a filter on one’s decisions, which can change what options or choices one will have in making a decision. From this discussion, however, the perspective of the person making the decision figures prominently and suggests that risks, or at least the harm they may result in, are not always merely objective facts in the world. The perspective of the individual matters when attempting to understand how this account works. On the one hand, a risk is prescriptive because a risk is negative and involves some harm, while also carrying the implicit message that one ought to act in such a way as to avoid this possible harm.
On the other hand, it is not always clear what the harm is when we say that something is risky, and this will matter in understanding what kind of action we ought to take in order to avoid it. If risks are not merely objective facts about the world but are matters of a person’s evaluations, then the second explanation is the better one for the normative account. It is important not to overstate the normativity of risk. Saying that something is risky suggests we ought to think twice or get more information before making a choice, but it is not always clear that something specific is recommended other than the avoidance of harm. Sometimes a risk entails taking an action and sometimes it entails avoiding an action, although in both instances the primary goal is to avoid some harm or negative outcome. In this way risk functions in a weakly normative way, similar in manner to keeping a promise. By not heeding the prescribed action, you do not incur the same censure, condemnation or guilt that you might in the case of ethical standards or principles. However, you will face the chance of some kind of harm or negative outcome that you should want to avoid. In general, we ought to avoid those things that we think of as negative. Note here the one-dimensional nature of risks versus the complexity of moral claims; it is obvious that a distinction must be made between risks and morality. Risks are not normative in the sense that moral claims are because they are much weaker in their justifications and they (usually) recommend one particular action. They are normative because they provide a weak justification for action, but they are not as weak as the rules of etiquette. Thus, risks lie somewhere in between etiquette and moral claims. Risks are also much weaker in their normativity because they are uncertain. There is no guarantee that the harm will occur, and the recommendation to act is not strong.
If you take a risk and suffer because of it, you might be held responsible, but this will often depend on the probability involved. Taking a very big risk with a high chance of things turning out badly is unwise, but taking a small risk is less so. In contrast, making a decision that involves doing something wrong, like lying or stealing, is not uncertain. Unlike other phenomena, which are more easily categorized by a fact/value distinction, risks straddle this divide in an interesting way.204 On the one hand, risks require us to make evaluative judgments about the world in order to recognize the negative features of a risk. Clearly, this is different from evaluating the rightness or wrongness of an action or trying to determine what is good and what is bad in a moral sense. On the other hand, unlike purely evaluative phenomena, risks are unique in that they include a component that has little to do with evaluations. Assigning a chance or probability to the occurrence of some negative event usually does not require us to ascribe any additional features to it. Often, risks can be expressed explicitly in

204 This discussion does not take on the philosophical challenge of justifying the fact/value distinction but it is in part motivated by a view similar to Railton’s account where values are nonmoral and facts are hard because “they are part of a world that is causally responsible for our experience, a world most of whose features do not depend upon our conception of it or our aspirations in it. Reason, then, does not make facts hard; it finds them hard.” (Railton, Facts, Values and Norms, 45). I will use a common-use distinction between facts and values in the claim that risk straddles the divide although I recognize that there is something to be said for needing to justify the factualness of chance or probabilities. This is particularly true in those instances where no fixed numerical quantity is assigned to a risk.
However, this is a discussion I have left out for now as I think it requires more attention than it can receive in this context.

numerical frequencies or percentages, but sometimes they are just expressed in terms of “high” and “low” or “likely” and “unlikely.” In the explicit case, the lack of evaluation is clear; however, in the second case it might be argued that these are evaluative categorizations to some extent. The descriptive view of risk incorporates values through rational choice theories such as EUT and SEU. Much of the research employing these theories involves studying acts of gambling where there are only two possible outcomes (investments, lotteries, etc.).205 However, real-world situations rarely have just two outcomes and, since what people count as harmful is more fully understood as a matter of what is of value to them, many different sorts of values will compete in their decision-making. This is not to say that risk comprises threats to every value we might have. Some individuals will experience conflict between different values they might hold. Different people can be motivated in different ways by similar values. Again, it is necessary to make the distinction and perhaps to draw a parallel with morality. Just as different moral claims have varying abilities to compel or motivate our actions, and different people are motivated by different moral claims, the values in risk under a normative theory operate in a similar way. Finally, it might be argued that there is a further issue arising out of Rescher’s notion of inverted risks for the normative account. The prescriptive force of risk suggests that one should avoid the chance of harm. However, I have pointed out that missed opportunities and regret should be considered harmful or unwanted if they are possible outcomes of an uncertain event.
Therefore, some situations exist where on one alternative choice, there might be harm in the sense of pain, illness or death, while on the other alternative choice, there might be harm in the sense of regret, lost opportunities or lost hope. The objection might be that there needs to be an explanation of why a prescription goes with one of these chances of harm but not the other, or why only one side counts as taking a risk but not the other. This objection can be answered in the following way. First, risk will prescribe avoiding the chance of harm where, as I have explained, harm is understood as whatever negatively affects something of value to a person. This means that in the case described, both options involve risk, which ought to be avoided, not one choice or the other. Therefore, both options also count as taking a risk. But the prescriptive sense of risk does not say anything about which thing of value ought to matter most to a person. On my view, risk is weakly normative, meaning that it can be overridden by other concerns, and this can include other risks. The acceptability or unacceptability of a risk in such cases is always a comparative matter and relative to the options available; unacceptability is not absolute. Choosing to take the risk involving physical harm in order to avoid the risk of a missed opportunity is a decision based on what a person finds more acceptable or more tolerable. In such cases, it might be helpful to consider the measurable likelihood and severity of harm involved, but such information is not necessarily available or definitive. For some, if the chance of pain or suffering is worse than the chance of a missed opportunity, they will likely choose the option where pain or suffering is avoided. This does not mean that a missed opportunity is not also harmful to them. Furthermore, normative risk does not exclude other strategies for making a risky choice.

205 Lopes, “Between Hope and Fear,” 690.
It might still make sense to employ expected utility calculations to decide among the options when it is possible to assign values to both chance and harm. Often, as I have explained, different types of harm are incommensurable, so such strategies might not apply to many situations. However, to say that risk is prescriptive is not to say that there cannot be circumstances where more than one risk exists. In fact, this is part of the reason risk is complex and produces disagreement and debate.

5 Three Categories of Values

5.1 Introduction

The prescriptive or normative sense of risk suggests that when we say that something is “risky,” we mean that there is good reason to think there is the chance of some unwanted outcome and that some action ought to be taken to avoid it. In the previous chapter (Chapter 4), I provided some justification for thinking that a broader understanding of a risk, involving what is of value to us, was required to accommodate the different sorts of harm that it might involve. I then explained that action could be prescribed by suggesting it operated in a way that was similar to, but weaker than, morality. On this account the reasons for action are not simply the avoidance of harm, but more specifically the avoidance of a possible threat to what is of value to a person. Therefore, values can inform what counts as a risk rather than merely influence the evaluation of one risk against another. People may value many different sorts of things, however, so there is a good chance that a given situation might involve threats to more than one sort of value, which will result in a conflict. Empirical studies in risk perception have found that people often make mistakes in assessing what is more or less risky when compared with their calculated likelihoods.206 However, these studies use estimates of harm or death for what counts as a risk.
If people are concerned about other types of harm, such as compromising their honesty or failing to meet the goals they set for themselves, their perceptions of risky situations might be more complex than some of these studies assume. Fortunately, there has been some research which aims to investigate whether these other considerations actually do have an influence on people. The conclusions they draw indicate that there is more than one dimension of harm that is taken into account when a decision or choice involves risk, including concerns about betrayal and trust. I will briefly discuss some of the relevant empirical findings as additional support for the idea that risks are about what matters to people—about what they value. Avoiding a possible threat to something of value will not always require the same sort of action, however. While a risk may prescribe whatever action allows one to avoid the chance of harm, the type of action in question will vary with the type of value being threatened. This is especially true in cases where a risk involves threats to more than one type of value. In order to clarify what I mean by this, it is helpful to place different types of values into three broad categories. These categories are not meant to be definitive, but serve to roughly estimate those values that will require different or potentially conflicting types of actions in a risk situation. First, standard values (under risk) include physical well-being, money, etc. I use this category to refer to the usual types of risk that most of the technical, and some of the empirical, literature assume. Second, aspirational values are those things that we aspire to in life such as our career or personal goals, or our hopes and dreams. These aspirational values, partly informed by the writings of Davies and Lopes, are much like Rescher’s inverted risks I mentioned in Chapter 4.

206 Lichtenstein et al., “Judged Frequencies of Lethal Events.”
Finally, moral values include notions of goodness, justice, honesty, etc. Much of the groundwork for this chapter arose from empirical investigations, and I conducted one of my own as a preliminary assessment of the viability of the three categories of risk-values I proposed (see appendices). As a case study I chose the current controversy surrounding the use of cognitive enhancers: this is an ideal topic since it involves the possibility of harmful physical side-effects, the inability to perform as well as others academically (and thereby failing to achieve certain goals), as well as the chance that their use amounts to cheating. I will provide a description of some of the results I have obtained, which suggest that the research is worth further development.

5.2 Empirical Findings About Risk

There have been a number of empirical studies of various aspects of risk in decision-making and of the perception of, and reaction to, risks. The findings have led to an increased appreciation for the way risk influences decisions and the different ways in which people interpret risky situations. Such studies have also led to the conclusion that people are often poor judges of risk and sometimes seem to make irrational choices due to judgmental biases, a reliance on various heuristics, overconfidence, or a desire for certainty.207 This has led some people such as Sunstein to argue that decisions concerning risks to society in general should be left up to the experts (e.g. risk analysts, scientists and regulators) rather than to the public.208

207 Ibid.
208 Sunstein, “Moral Heuristics and Risk.”

5.3 Heuristics, Biases and Mood

In one of the most widely known investigations, Kahneman and Tversky proposed the Prospect Theory of decisions involving gains and losses.209 They found that when choosing between a small certain gain and a larger possible gain we tend to be risk averse, whereas if the choice involves a smaller certain loss and a larger possible loss we are usually risk-seeking. In other words, we are less likely to take risks when we can gain something with certainty (even if it is something less valuable than what we would gain if we took a risk) but we are more likely to risk a large loss in hopes of avoiding any loss at all, rather than accept a smaller loss with certainty. According to Kahneman and Tversky, the prospect of a loss seems to provide a different motivation for our actions than the prospect of a gain. Kahneman and Tversky have also found that people are not necessarily risk-averse, but that they have an aversion to loss which is largely dependent on one’s point of reference; this means that our preferences can change when our reference points change.210 Tversky suggests that:

Probably the most significant and pervasive characteristic of the human pleasure machine is that people are much more sensitive to negative than to positive stimuli…[T]hink about how well you feel today, and then try to imagine how much better you could feel…[T]here are a few things that would make you feel better, but the number of things that would make you feel worse is unbounded.211

209 Kahneman and Tversky, “Prospect Theory.”
210 Ibid.; Kahneman and Tversky, “Choices, Values and Frames”; Tversky, “The Psychology of Risk”; and Tversky and Kahneman, “Advances in Prospect Theory.”
211 Tversky, “The Psychology of Risk,” 75.

Similarly, Thaler conducted a study demonstrating that people who start out with additional money in their pocket will choose to gamble it (or take risks) whereas those
who start out with no additional money will be risk averse.212 Kahneman and Tversky also investigated the effect of framing, which produces inconsistencies in choices that they describe as a “failure of invariance”.213 Normally, rational people will prefer A to C if A is preferred to B and B is preferred to C. However, this changed when a particular situation was explained (i.e. framed) differently. For example, when patients were given a choice between radiation therapy and surgery for the treatment of lung cancer, more than 40% chose radiation when the question was put to them in terms of the risk of death while only 20% favoured radiation when the exact same underlying scenario was described in terms of a positive chance of survival. Kahneman and Tversky report that respondents

…confronted with their conflicting answers are typically puzzled. Even after rereading the problems, they still wish to be risk averse in the “lives saved” versions; they will be risk seeking in the “lives lost” version…The moral of these results is disturbing. Invariance is normatively essential, intuitively compelling and psychologically unfeasible.214

Others also report the framing bias produced by uncertainty and risk.215 Risk affects how we understand a situation or problem but also how we behave and later come to think about it. The framing of a question or situation seems to focus the attention of respondents on entirely different interpretations even though the basic facts remain the same. In the above example, the risk of death and life expectancy provided two different frameworks for the same information, yet they elicited two different responses from people.

212 Thaler, Quasi-Rational Economics. See also Miller, Do the Ignorant Accumulate the Money?
213 Kahneman and Tversky, “Choices, Values and Frames.”
214 Ibid., 343.
215 Baron, Morality and Rational Choice; Baron, Judgment Misguided; Redelmeier and Shafir, “Medical Decision making”; Bromley and Curley, “Individual Differences in Risk Taking.”

Slovic and his colleagues explain that the heuristics people rely on are even more problematic since they lead to overconfidence about judgments based on them.216 Overconfidence is also found in the estimations of risk by experts and scientists, which suggests that there might not be good reason to exclude the public from decision making about societal risks. Studies have also demonstrated that in a high-risk situation a positive affect or mood can produce risk-averse behaviour (meaning that people are less likely to take a risk when they are in a good mood than they are when in a neutral or bad mood). Interestingly, the same study suggests that when the situation is considered low-risk, the opposite is true in that people will exhibit risk-seeking behaviour when they are experiencing a positive affect.217 The evidence therefore indicates that people do make mistakes or exhibit bias when making decisions involving risk, leading to seemingly irrational assessments or judgments. However, many of these studies assume a very narrow conception of risk and pay little attention to the fact that both the context and the subjective evaluations people make will influence their understanding and assessment of the risk involved. While the average person might rank the risk from nuclear power far higher than the actual likelihood of death (i.e. statistical probability), as Slovic and his colleagues demonstrated, the argument that perceived risks are different from real risks is not entirely convincing.

216 Slovic, Fischhoff and Lichtenstein, “Rating the Risks,” 66.
217 Nygren, “Reacting to Perceived High and Low-Risk Win-Lose Opportunities in a Risky Decision-Making Task,” 73.
The ‘real’ risk of nuclear power is a measure of the frequency of death, while the ‘perceived’ risks are laypeople’s concepts of risk which incorporate other considerations such as possible betrayal, severity of consequences, voluntariness and dread.218

216 Slovic, Fischhoff and Lichtenstein, “Rating the Risks,” 66.
217 Nygren, “Reacting to Perceived High and Low-Risk Win-Lose Opportunities in a Risky Decision-Making Task,” 73.

These subjective evaluations made by non-experts influence what counts as a risk for someone and also determine the options that are available to them. Further studies have provided evidence that, in fact, frequency of death, physical harm or even economic loss comprise just one set of concerns influencing decisions involving risk.

5.4 Trust, Betrayal and Obligation

Although people might often be biased in their assessment of risk due to inferential rules and heuristics, other research has proposed that what might seem like ‘mistakes’ in applying the laws of probability are in fact the result of values. People are not just concerned with physical harm, nor do they simply weigh costs and benefits; they also worry about betrayal, social order and obligations to future generations, each of which appears to inform what counts as a risk to begin with.219 Baron argues that the effects of risk in decision-making are often misunderstood as acting on one’s intuition.220 His research indicates that people think that causing a harm (by taking a risk) is worse than not preventing it (not taking a risk). The same is true even in those instances where the outcome of not taking a risk, for instance by refusing to take an experimental drug for a serious disease, is more likely to result in harm than if a person took a risk by trying the experimental drug.
People think that a death by ‘natural’ causes such as a disease is more acceptable than a death by ‘unnatural’ causes such as an experimental drug, and even when the severity of harm is lessened from death to blindness, we find the same result.221 The act-omission bias is an interesting phenomenon whereby harms caused by omissions are considered preferable to equal or lesser harms caused by actions.

218 Slovic, Fischhoff and Lichtenstein, “Rating the Risks,” 70. The literature on risk perception is vast and the field is well established, so making an argument challenging this distinction would require a thorough analysis and does not contribute to the normative account I want to provide. It might be the case, then, that the normative or prescriptive sense of risk refers only to perceived risk, but this does not change the account or my description of it in any meaningful way.
219 Evans, “Normative and Descriptive Consequentialism”; Slovic, “Perception of Risk”; Tetlock, “The Consequences of Taking Consequentialism Seriously,” 1994.
220 Baron, “Tradeoffs Among Reasons for Action.”
In another study, Ritov and Baron also report that people are more likely to consider the risk of harm from choosing not to vaccinate against a disease (an omission) less serious than the risk of any harm caused by getting vaccinated.222 There have been numerous accounts demonstrating that this bias is not confined to hypothetical scenarios and affects real-life vaccination decisions.223 In explaining these results, Ritov and Baron conclude that “[w]e suspect that some people regard their bias toward harms of omission as a personal moral rule, not to be imposed on others who might not accept this rule, while other people regard it as a moral rule that must be adhered to even more strictly when making decisions that affect others.”224 Cushman et al. also found that the omission bias is most likely the result of conscious reasoning, suggesting that the perceived potential harm involved in making decisions under risk is not merely a non-cognitive intuitive response but the result of an evaluation.225 Other research has also shown that decisions regarding risks can be affected by what is of value to people. Koehler and Gershoff found that while people will try to avoid risks in general, they will actually incur greater risks of physical harm to avoid the chance

221 Baron, “Tradeoffs Among Reasons for Action.” See also Malm, “Killing, Letting Die and Simple Conflicts”; Feldman, Miyamoto and Loftus, “Are Actions Regretted More Than Inactions?”; Ritov and Baron, “Protected Values and Omission Bias.”
222 Baron and Ritov, “Omission Bias, Individual Differences, and Normativity,” 75.
223 See Asch et al., “Do Physicians Have a Bias Towards Action?”; Baron, “The Effect of Normative Beliefs on Anticipated Emotions”; Haidt and Baron, “Social Roles and the Moral Judgment of Acts and Omissions”; Cohen and Pauker, “How Do Physicians Weigh Iatrogenic Complications?”
224 Baron and Ritov, “Omission Bias, Individual Differences, and Normativity,” 84.
225 Cushman, Young and Hauser, “The Role of Conscious Reasoning and Intuition in Moral Judgment.”

of betrayal. They report that in a choice between two products where one includes a 2% chance of harm (i.e. injury) and the other has a 1% chance of harm but also a very small (0.0001%) chance of betrayal, people tended to choose the product without the chance of betrayal even though the risk of harm was higher. Such aversion to betrayal is even more pronounced in studies showing that people prefer a higher chance of dying in a car accident to a lower chance where the direct cause of death is an airbag malfunction; people clearly feel betrayed when a product designed to increase safety fails.226 Koehler and Gershoff explain that “the betrayal harms, unlike non-betrayal harms, involve violation of a protective trust” and that people will double their risk of death from accidents, injuries or diseases to avoid even a small chance of betrayal.227 Finally, other studies suggest that people consider it worse to take a risk knowing the chances of harm to others than to take a chance without such knowledge. When asked to decide how to punish a company that has sold a product that results in harm or death, it mattered to people whether the company made an attempt to assess the possible risks beforehand. In separate studies, both Tetlock and Viscusi found that a company was likely to receive more severe punishment if it had conducted a cost-benefit analysis before selling its product, even if a very high value was placed on human life.228 When no analysis was done, people felt that a less severe punishment was deserved. People therefore considered it worse to take a risk while knowing that people might get hurt than to take a risk without such knowledge. This suggests that a risk is considered

226 Koehler and Gershoff, “Betrayal Aversion,” 163-164.
227 Koehler and Gershoff, “Betrayal Aversion.” See also Rousseau et al., “Not So Different After All,” 251; Sunstein, “Moral Heuristics and Risk,” 165.
228 Tetlock, “Coping with Tradeoffs”; Viscusi, “Corporate Risk Analysis: A Reckless Act?”

bad because it might cause harm to others, but is considered to be even worse if it also violates a moral norm.

5.5 Aspirations as Possible Harms

Empirical investigations have shown that such violations of moral norms inform what counts as a risk, along with physical and economic harm or loss. But more theoretical psychometric approaches to risk argue that there is yet another type of evaluative consideration that ought to be accounted for: aspirations. If a person has aspirations or goals, then they will act in such a way as to achieve them (or to avoid failing to realize them). The importance of aspirations in risk arises from theoretical criticisms (rather than empirical investigations) of traditional decision theoretic measures such as expected utility theory (EUT) and subjective expected utility (SEU).229 Proponents of this view argue that the intuitive notions people have (i.e. that a risk is the chance of something bad occurring) do not seem to be adequately reflected in traditional decision theoretic measures even though those measures incorporate values. This problem is highlighted by the difficulty of assigning an expected utility to the value of belonging to a family, loving a partner or travelling to a new place, for example. Traditional measures of risk are challenged by the effect of our psychological intuitions, as demonstrated in some of the studies already discussed. Davies, for instance, suggests an alternative risk theory in an attempt to address this fact more adequately. Our attitudes towards risk are generally negative and so people tend to be risk averse, yet he argues that it is necessary to view risks in terms of their possible threat to our aspirations.
229 As discussed in Chapter 2.2.4, risks can be measured and managed by employing some kind of rational strategy such as calculations of expected utility (EUT) or subjective expected utility (SEU).

In this way, then, to be risk averse is to be averse to the possibility of not achieving our goals, whatever those might be. Furthermore, this concept of aspiration is extended “to a continuum of aspiration levels… [where] rational choice requires minimizing the probability of not achieving the aspiration level for all possible choices of aspiration level simultaneously.”230 Employing aspirations in this way resolves some of the problems with attempting to understand how people order their preferences, since aspirations exist on a continuum.231 The way our aspirations affect our perceptions of risk is important to understanding how we ought to think about risk in general, and this finds further grounding in Lopes. In her descriptive theory of decision making under risk, an aspiration level can include survival, peer-related benchmarks and regrets about options not taken, but is typically “a situational variable that reflects the opportunities at hand (“What can I get?”) as well as the constraints imposed by the environment (“What do I need?”).”232 Her aim is to reconcile the features of both motivational and psychometric theories of risk, which she argues will eliminate some of the limitations each theory has on its own. Psychological studies also indicate that one’s aspirations, and the probability of achieving them, are part of the calculus involved when a person is faced with choosing among risky options.233 These accounts support the view that risks involve the chance of losing something of value, since aspirations encompass more than just harm, injury or loss. Lopes defines aspirations in a fairly broad and generalized manner, but suggests that these entail the possibility of

230 Davies, “Rethinking Risk Attitude,” 186.
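The contrast between maximizing expected utility and minimizing the probability of missing an aspiration level can be made concrete with a toy calculation. This is only an illustrative sketch: the gambles, payoffs, aspiration level and helper names (`expected_value`, `p_reach_aspiration`) are my own assumptions, not drawn from Davies or Lopes.

```python
# Two gambles, each a list of (probability, payoff) pairs.
safe = [(1.0, 50)]                # a certain $50
risky = [(0.5, 110), (0.5, 0)]    # a coin flip: $110 or nothing

def expected_value(gamble):
    """Standard expected-value criterion."""
    return sum(p * x for p, x in gamble)

def p_reach_aspiration(gamble, target):
    """Probability that the outcome meets or exceeds the aspiration level."""
    return sum(p for p, x in gamble if x >= target)

# Expected value favours the risky gamble: 55 vs 50.
assert expected_value(risky) > expected_value(safe)

# But an agent whose aspiration is "end up with at least $40" should
# prefer the safe gamble, which guarantees the goal (1.0 vs 0.5).
assert p_reach_aspiration(safe, 40) > p_reach_aspiration(risky, 40)
```

The point of the sketch is that the two criteria can rank the same pair of gambles in opposite orders, so apparent risk aversion may simply be rational choice relative to a different target than expected monetary value.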
231 The success of Davies’ theory is beyond the scope of my project, but I think the foundation of his claims demonstrates that there is something unsatisfying in assuming there is only one way to understand how people are motivated to avoid different types of harm.
232 Lopes, “Between Hope and Fear,” 701.
233 Ibid.; Lopes and Oden, “The Role of Aspiration Level in Risky Choice”; Payne, Laughhunn and Crumm, “Further Tests of Aspiration Level Effects in Risky Choice Behaviour”; Davies, “Rethinking Risk Attitude.”

losing something one either needs or wants (as opposed to merely a loss or harm tout court). Davies goes further and suggests that aspirations are in fact the goals we have, where goals are those things that we value. I have not provided a detailed account of either of these arguments, but the suggestion that one’s aspirations can inform one’s understanding of risk is convincing. For Lopes and Davies, all risks are aspirational since aspirations encompass a multitude of goals, ranging from survival to regret. Although including the criterion of aspiration is congruous with our intuitions about risk, I find that categorizing something like survival, or the desire to avoid injury, as an aspiration is problematic, or at the very least a bit odd. If a person has to make a decision involving a risk of death, something is missing from the account claiming that the avoidance of such a risk is the result of a desire to minimize the chance of not achieving the goal of survival. The account is not necessarily wrong, but it would be unusual to characterize the desire to survive as an aspiration. At the same time, if a person has aspirations of great success, then a decision involving the chance of not achieving this goal is reasonably considered a risk. The two cases are not quite the same, and this makes it clear that reducing everything to aspirations is problematic.
Introducing aspirations in this manner supports the view that risks are not simply variables in expected utility calculations, even when values and preferences are accounted for. Risks seem simple enough given that they are composed of some chance or likelihood and some kind of negative feature in the form of harm or loss. Since they are usually negative, risks can be said to be normative because they include the implicit message that one ought to act in order to avoid possible harm. What counts as a harm is best described as whatever threatens what is of value to people. There is empirical evidence demonstrating that risks can also threaten what is of moral value, such as one’s sense of trust, justice or obligation, and these concerns are not always overridden by the threat of physical injury. Aspirations and hopes also comprise a set of values that is different in yet another way, distinct from both the threat of physical injury and the threat of betrayal. A complication arises, however, since the action one ought to take will depend on the value that might be negatively affected. In those instances where more than one sort of harm is possible, there will be a conflict both in determining which value is more important and in deciding what action one ought to take. In order to avoid the chance of betrayal, for instance, one might have to act in a way that involves a higher chance of economic loss. What might therefore look like risk-seeking behaviour on a narrow conception, where it is assumed that losing money is the harm to be avoided, may in fact be risk aversion if some other source of harm is of more concern. Three very broad categories of what I will call risk values can therefore be used to distinguish the types of actions that a risk could prescribe. These categories are not meant to be definitive, and there will likely be overlap from one category to another. Standard risk values include one’s health, well-being, freedom from pain, and money.
Aspirational risk values are those things that we aspire to in life, such as our career or personal goals, our hopes and our dreams. Finally, there are moral risk values, which include notions of goodness, justice, honesty, and so on.

5.6 Standard Risk Values

Generally speaking, events or decisions that lead to the possibility of death, physical suffering (injury or disease), or loss of money or time are thought to be risky. This includes anything that involves a decrease of some sort in a person’s present state of health or mind, or the loss of something they already own or have. What makes the possibility of contracting a disease a risk, for example, is the fact that it decreases the state of one’s health, even if that person is already ill. Similarly, the unwanted side effects of taking a new drug, complications from surgery or the death of a loved one are also risks to be included in this category. Some actions involve risks in this category because they make possible the loss of money or of the ability to earn a living through employment. Losing money that you already have, and presumably value for whatever reason, will be more or less harmful depending on how much you have to begin with and how much will be lost. Losing a job means the loss of the ability to sustain oneself: without a job, it is not possible to meet basic needs or the necessities of life in a given society. Sustaining oneself is obviously something of value to most people since the consequences of failing may be severe and become worse over time. Note, however, that this category does not include concerns such as the loss of status, self-esteem or goal fulfillment. Being very successful is not necessary to meeting our basic needs: part of the reason I have labeled this category ‘standard’ is to reflect the sorts of risks that technical analyses are confined to.
While it may be recognized that losing a job can be detrimental to one’s self-esteem, such concerns are rarely considered distinct from economic self-sustainability.234

234 Inhaber, “Risk with Energy from Conventional and Nonconventional Sources.”

Thus, risks to these sorts of values involve threats to our basic sense of well-being and nothing more. They involve the possibility of losing reasonable degrees of health, functionality and contentment, or in other words, whatever compromises or diminishes a ‘normal’ state of being. No further claim is being made about what a normal state of being might be, particularly since it will vary widely from person to person and culture to culture, which is why the stipulation of reasonableness is made. This description of harm is not limited to immediate concerns, but can also apply to future ones. For instance, someone carrying the gene for Huntington’s disease is said to experience a harm since he or she will suffer the effects of this disease later in life despite living for up to thirty years with no symptoms. In this case, the individual does not experience a decrease in his or her health at the present time but can reasonably expect to in the future. These concerns comprise the standard category of risk values since they are most commonly associated with those situations or events we identify as risky. Our physical well-being and our freedom from pain or loss are obviously things that we value. Technical views of risk, such as those in risk analysis, begin with the assumption that physical harm, illness, injury, death, and economic losses are the sorts of harm that ought to be measured, compared and mitigated. A standard risk value is therefore best understood as anything whose loss we typically associate with risks in general. This is obviously a very broad category, but it is meant primarily to serve as a contrast with the other two categories of values or concerns that I describe next.
5.7 Aspirational Risk Values

We are often faced with risks that are not adequately captured by the standard category just described. A second type of value is therefore necessary, and the work of Davies and Lopes goes a long way in defining this second category. While standard risks are the normal sorts of risks we are most familiar with, they involve the loss of something of value that is not directly tied to one’s goals. The risks of death, injury, illness, or bankruptcy exemplify standard risks, although it could be argued that people have aspirations for survival, good health, friendship and financial stability. However, aspirations are those things we strive for, not things we have already attained. In Chapter 4 I briefly discussed Rescher’s suggestion that when we opt out of a gamble, we run the inverted risk of losing out on something good or desirable. On his view, “[w]ith such inverted risks no actual harm need enter the picture at all, the “loss” at issue can be a matter of lost opportunities alone—the quasi-negativity of a failure to realize ‘what might have been.’”235 I suggested that a missed opportunity or regret could indeed count as a harm for someone. While consistent with Rescher’s overall claim that risks are ontological facts about the world that do not involve subjective ideas about the character of harm, this notion of an inverted risk does not mean, as he claims, that almost any uncertain outcome situation can be a risk. The intuition that the chance of missing out on something beneficial can also be a harm is perhaps better understood in terms of aspirations. A person’s hopes and goals can range from the mundane (getting to class on time) to the life-defining (becoming an astronaut). The more important or valued these aspirations are, the more likely it is that failing to realize them will be seen as harmful or undesirable.

235 Rescher, Risk, 10.
A person who has aspirations to gain social status, become rich, walk on the moon or win an Olympic medal will experience the chance of not achieving these goals as risks. Let us return to Rescher’s example of an inverted risk, which involves a decision either to opt in to a gamble and face the risk of losing, or to opt out of the gamble and miss out on some potential benefit. If the person faced with this choice has the aspiration to become rich and to win this particular bet, then both options entail a risk. This is not a matter of a single inverted risk, but of the possibility of harm to two different things of value: the money one already has and the aspiration to attain some goal.236 It will often be the case that these two categories of risk values, aspirational and standard, are involved in a single situation or decision, such as the use of artificial enhancements or any type of behaviour that might compromise our health, safety or well-being. For instance, many professional athletes have aspirations to excel in their fields and to attain a certain level of competitiveness and achievement, such as competing or winning at internationally recognized events. It is well known that the desire to win has resulted in an increased use of steroids and other means of performance enhancement, a number of which involve the possibility of unwanted or harmful side effects and have been made illegal by most sporting associations and competitions. As a result, some athletes are faced with a very difficult decision. On one hand, if they choose to take some form of performance enhancer, they might experience physical harm or expulsion from the sport if caught. On the other hand, if they choose not to take an enhancer, they might find themselves unable to compete and therefore fail to achieve what they have

236 Admittedly, this is a weak example, but it helps clarify the difference from Rescher’s view.
already devoted much of their lives to (this is particularly true if other athletes are taking performance enhancers). The tension in this situation might be mischaracterized or misunderstood merely as part of an individual’s cost-benefit analysis. What looks like a desire to increase the net benefit of a decision, such as using some means of enhancement, can also be understood as the desire to avoid a risk; in this case, the risk of unmet (perceived) potential and the failure to fulfill one’s aspirations of whatever sort. It is important to note that part of the harm is due to the loss of what one hopes the future will be, since there is no guarantee of goal fulfillment. Taking steroids to avoid the chance of losing a gold medal does not mean that a win is inevitable or even likely. If a win were guaranteed, then there would be no risk involving the hope of winning, just a risk involving physical side effects. Threats to one’s aspirations involve the loss of something one values, where what is valued is directly tied to one’s goals. In order to avoid the chance of unfulfilled hopes and aspirations, the action one ought to take will sometimes require taking the chance of some other harm, such as physical injury, breaking the law or cheating. If it is important to an athlete that they win ‘fair and square’ by operating within the rules, then a third category of values should be considered.

5.8 Moral Risk Values

A third type of value is relevant to this account of what types of actions a risk prescribes.237 I have argued that a person might value their health, well-being and ability

237 The assumption here is that we value morality. I recognize that in order for this category to avoid hopeless circularity I would need to justify a particular moral perspective.
However, as a categorical label I only mean to suggest that when people are faced with the threat of compromising their sense of what is right, wrong, etc., they may understand that to be a risk as I have described it.

to meet their basic needs, and at the same time also value various goals which drive some of their behaviour. However, the desire to do the right thing, to act in a just and fair manner, to be honest and in general simply to be a good person often motivates our actions, and these qualities are obviously of value as well. We value them in others and in ourselves. A person’s moral (risk) values have a significant influence on how they act and the decisions they make, and these are distinct from both standard and aspirational values. Arguably, we could say that a person has aspirations to be a moral person or to fulfill the requirements morality might make of us, and so there is no distinction to be made between moral and aspirational risks. On this view, moral values could be understood as rarefied aspirational values. For some people it is likely that a threat to their sense of morality will be considered worse than a threat to their career goals, for instance, but both are nonetheless merely part of a person’s aspirations in life. The problem with this interpretation, however, is that there is often a conflict between things we value (such as success) and acting according to a set of moral principles or rules, suggesting that they are not the same types of values at all. Even though we might have different aspirations that conflict, such as wanting to be financially wealthy and wanting a very large family, these are both desires and goals which are best described as aspirations. Neither of them is necessary to our survival, but both serve to enhance our lives and sense of well-being.
One of these two aspirations might be sacrificed in order to meet the other, but there is always the possibility that the sacrificed goal could be achieved at a later time or under different circumstances. In contrast, conflicts between aspirations and morals require sacrifice as well, but this sacrifice is usually an exclusive either/or decision. We can aspire to have financial wealth and also value social justice, but the tension between the two is difficult to resolve regardless of the circumstances. This suggests that risks to our moral values are substantively different from those to our aspirational values. It is necessary to make an important clarification about this category. The claim I have made is that a risk is about what matters to people since it negatively affects what is of value to them. The three categories I have outlined are meant to explain the relationship between risk and action. For those who place value on being a good person, telling the truth and being honest, it will matter when some situation or set of circumstances arises which involves the chance that these values will be threatened. However, I do not mean to suggest that people can in fact take moral risks, nor am I making any claims about moral luck.238 Whether a person is in fact acting morally in these cases is not as important as whether or not they will: a) understand that a risk exists when something of moral importance (i.e. honesty) is threatened and b) act in whatever way allows them to avoid this possible harm. There are relatively few cases involving moral risks of this type, since most of the time a decision involving the possibility of breaking a promise or cheating is not really a risk. If I have to be dishonest in order to win a gold medal, there is no uncertainty about the fact that I will be doing something I think is wrong. The choice, then, is between taking a risk and doing something wrong.
However, there are exceptions, such as the studies cited earlier involving the chance of betrayal. In these scenarios, the betrayal was not certain, but the chance of betrayal was still considered undesirable enough that people chose to take a bigger risk of physical or economic harm. There is clearly a difference in the action recommended by these two threats to different values. Other examples of the moral risk values I mean to describe can occur with relatively new and emerging technologies or medical breakthroughs. It has not yet been determined, for instance, whether the non-therapeutic use of cognitive enhancers will amount to cheating or breaking the rules. This is the topic of the case study I conducted, which I will describe in the next section. In part, this survey-based investigation allowed me to frame the risks of cognitive enhancer use in these three categories in order to determine if they had different influences on the responses elicited.

238 See Nagel, “Moral Luck.”

5.9 Case Study: Cognitive Enhancers

One of the advantages of technical perspectives is that they usually characterize risks narrowly in terms of the chance of injury, death, economic losses or the sorts of harm that can be measured. It is much more complicated to investigate risks when they are understood in terms of what is valued by people. As I have explained, some of the empirical research in psychology and risk perception suggests that people exhibit behaviour that is seemingly irrational because they often fail to make the probabilistically ‘right’ decision, while other studies reveal that this is irrational only when little attention is paid to values. For the account of normative risk I have proposed, a risk is always something to be avoided since it is always something negative (although there are different types and degrees).
There might be a difference, however, in the action prescribed when what is threatened is our aspiration to achieve some goal as opposed to our desire to avoid physical harm. Determining whether risks can prescribe different actions might be possible through an empirical study. I pursued a preliminary investigation using a real-world case study (cognitive enhancers) which seemed to involve all three of the value categories I have described, hoping to find some indication that these categories were viable for future research.

5.10 Contributions of Empirical Research

Conclusions from empirical research have informed the work of a number of philosophers. I will provide a short description of some examples in ethics where this has been insightful. John Doris, for instance, appeals to evidence arising from social psychological experiments to argue that the virtue theory tradition in ethics, which places character at the centre of moral thinking and action, does not seem viable given the results of such empirical evidence. According to him, the notion of character posited by virtue theorists is “confounded by the extraordinary situational sensitivity observed in human behavior.”239 There is, he contends, much empirical evidence to suggest that as much as we might aspire to act with good character, we are often unable to do so given the influence situational factors have on how we behave.240 Doris asserts that there is good reason for looking at the pragmatic reality of how people think and behave in order to contribute substantively to ethics. He argues that social science and psychology demonstrate how influential situational pressures can be on one’s moral behaviour:

The experimental record suggests that situational factors are often better predictors of behaviour than personal factors, and this impression is reinforced by careful examination of behaviour outside the confines of the laboratory.
In very many situations it looks as though personality is less than robustly determinative of behaviour. To put things crudely, people typically lack character.241

239 Doris, Lack of Character, 15.
240 Ibid., 18.

Both moral psychology and social science broach a number of issues that the typical philosopher addresses in a theoretical and often abstract way through the use of thought experiments or analytic reflection. It is of course important to know what we ought to do, but a problem exists for such prescriptive theories if we discover that the action recommended is very difficult or impossible for the average person. While I do not intend to address all of Doris’s claims here, what is most applicable in his account is the cross-disciplinary understanding of ethics he seems to advocate. Additionally, he argues that a situationist can show that people are often unaware of the various influences on their behaviour and actions. For example, Milgram famously demonstrated the widespread human tendency to obey authorities, even when doing so contravenes our own ethics.242 The Stanford prison experiments of Zimbardo showed that ordinary, decent people can do very bad things given the right set of circumstances.243 Other studies have shown that a person’s affective mood can be a significant factor in his or her decision to help someone in need or distress.244 Social intuitionists like Haidt go so far as to suggest that our moral judgments are the result of our immediate intuitive responses rather than acts of reasoning.245 Haidt bases his thesis on a number of experiments in which he found that people only reasoned about decisions after having made them: “When faced with a social demand for a verbal justification, one becomes a lawyer trying to build a case rather than a judge searching for

241 Ibid., 2.
242 Milgram, Obedience to Authority.
243 Zimbardo et al., “The Mind Is a Formidable Jailer.”
244 Isen and Levin, “Effect of Feeling Good on Helping”; Baron, “Biases in the Quantifiable Measurement of Values for Public Decisions.”
245 Haidt, “The Emotional Dog and Its Rational Tail.”

the truth.”246 This, he argues, suggests that our sense of what is right and wrong in the world may seem to be rationally derived when in fact we simply rationalize our immediate intuitions after the fact. These kinds of argument are relevant for two reasons. First, they provide some support to the idea that there is value in employing social scientific methods to better understand ethics. Second, some of these studies clearly demonstrate the complex web of influences that explain our behaviour.

5.11 The Risks of Cognitive Enhancers

New developments in neuroscience, neurotechnology and related disciplines are allowing for an increased ability to alter cognitive function in a number of ways. The prospect of enhanced cognitive capacities has led to intense discussions by both scientists and ethicists. There is already a growing body of work addressing the more salient issues of safety and moral acceptability. Unlike the debates about genetic enhancements, which have produced controversies and revealed a general sense of unease on the public’s part, the reaction to cognitive enhancing drugs and procedures is still relatively unknown. Cognitive enhancement is different from genetic enhancement in terms of access, availability, and the promise of possible benefits: the result is a completely different set of challenges. Obviously, the benefits cognitive enhancements could confer are significant and extremely appealing in societies such as ours where the demand to perform—and to outperform the competition—is all pervasive. What is not yet clear in

246 Ibid., 814.
this broader ethical discussion is the degree to which people will accept the use of such enhancements and how they will understand the risks inherent to such enhancements. Therapeutic breakthroughs in neuroscience have led to a number of beneficial treatments for those suffering from diseases or disorders of the mind, but these therapies also have effects on people without such conditions. Most significantly, there is evidence that some therapies are effective both in treating illnesses and in enhancing mental capacities in the general population. Improvements in memory, concentration, and general mental performance are prized by people whose careers require such attributes, as well as by those who are experiencing declines caused by stress, fatigue, and the normal effects of aging. There are also added pressures, particularly in Western society broadly defined, to outperform one’s colleagues, classmates or associates in order to remain competitive or to attain one’s aspirations. Competition is a driving force in many situations: modern society values mental capacity and aptitudes, and this makes the ability to gain an edge very alluring. Such widespread interest has led to greater discussion by both scientists and ethicists since the “prospect of neurocognitive enhancement raises many issues about what is safe, fair and otherwise morally acceptable.”247 We have recently seen other examples of advances in science leading to the prospect of enhancement, such as those promised by genetic and genomic research. But such genetic enhancements, which involve genetic screening, the selection of desirable traits or interventions producing direct genetic change, are still largely unrealized. In the case of cognitive enhancements, however, some forms are already available and being used by those who know about and have access to them.

247 Farah et al., “Neurocognitive Enhancement,” 421.
Interestingly enough, some of the newer and pressing issues arising from the field of neuroethics are actually the result of research and therapies that are not particularly recent. For instance, pharmacological developments over the last four or five decades have provided us with drugs such as methylphenidate (Ritalin) to treat attention-deficit hyperactivity disorder (ADHD), which can also be used for its stimulant effects in those without ADHD. While Ritalin and knowledge of its stimulant-like effects are not new, its increased use for non-therapeutic effects is becoming more and more common.248 Farah et al. suggest that such drugs can also be used to enhance memory, performance, mood, appetite, libido and sleep.249 The benefits are obvious enough. As with other types of drugs or treatments, however, there are also a number of risks involved. Typically, the types of risks attributed to drugs or medical interventions are expressed in terms of potential physical harm or side effects that might occur from their use. What makes cognitive enhancers such an ideal topic for my purposes is the different sorts of values that may also be threatened by them. It is possible to use the three categories of risk-values I have described to capture many of the issues and concerns that are raised by such cognitive enhancement. First, there are the familiar and most obvious risks involving physical harm, such as unwanted physical side effects, including those which do not necessarily appear right away and which might entail premature decline in cognitive function. Additionally, there are the usual risks associated with taking any pharmacological treatment, which are

248 The reasons for the increase in use of Ritalin and similar drugs for off-label use are numerous, but I will not go into detail regarding the associated literature. For the purposes of this discussion, the important thing is the increased use and acceptance of the drug over the past ten years.
See Glannon, “Neuroethics”; Illes and Racine, “Neuroethics—From Neurotechnology to Healthcare”; Racine, Van Der Loos and Illes, “Internet Marketing of Neuroproducts”; Nagel and Neubauer, “A Framework to Systematize Positions in Neuroethics”; Sahakian and Morein-Zamir, “Professor’s Little Helper.”
249 Farah et al., “Neurocognitive Enhancement,” 421.

mitigated to some degree by regulatory mechanisms but which, as we know, are not foolproof means to guarantee safety. In fact, the safety of these drugs used for cognitive enhancement is one of the issues of concern often cited in the neuroethics literature.250 While such risks are among the more obvious ones, since we value our health and well-being and tend to avoid those situations where our health is compromised, other values are also threatened by the use of cognitive enhancements.251 Unlike interventions that produce obvious physical effects on the body, such as the use of steroids to enhance physical performance or cosmetic surgery to enhance physical appearance, cognitive enhancers do not produce any obvious physical signs of use. Enhanced memory or attention span is not easily recognized and can reasonably be explained by the exertion of more effort or a change in attitude, making their use effectively invisible to others. There is therefore less chance for social sanction or stigmatization, and this makes the temptation to use these drugs even greater if it is possible to get an edge over one’s competition without being found out (even though there are still physical risks). It is possible, therefore, that some harm might result from someone’s decision not to use cognitive enhancers. If a person has aspirations to excel in their chosen career, for example, and this requires outstanding cognitive performance, then it can be a risk not to take advantage of whatever means are available to achieve such aspirations.
University students who aspire to achieve high grades, or who are competing for scholarships or placement in professional school, might consider it harmful to miss out on getting an advantage (actual or not). When more people use enhancers to improve their performance, it becomes even more risky to opt out since it might be difficult to remain competitive. Finally, there are risks which potentially threaten one’s moral values. In most professional (and some amateur) sports, steroid use is explicitly banned and anyone caught using them faces some sort of punishment for cheating. To date, there is no such formalized set of rules for the use of drugs or other methods to enhance one’s cognitive capacity, even though their use is clearly analogous to steroids. Achieving a high mark on a test because one puts hours of study and effort into mastering the material is generally thought to be a much more satisfying and perhaps morally praiseworthy accomplishment than getting the high mark by cheating. If a person values such moral victories, then they are also putting themselves at risk by choosing to use some means of enhancing their memory or concentration in order to perform better. Since it is not clear whether using drugs like Ritalin actually is cheating, opting to use them involves the chance of doing something dishonest. Caffeine, which is a substance widely consumed by a significant portion of the population, has similar, although less potent, stimulant effects on one’s performance, so it is often used as a comparison to claim that cognitive enhancers are not cheating.

250 Chatterjee, “The Promise and Predicament of Cosmetic Neurology.” Other issues, such as coercion, distributive justice, personhood and ethical problems, are treated separately from safety concerns.
251 Note that while the example of cognitive enhancers seems to involve all three types of risk-values, not every case of risk is normative, nor does every case involve all three types.
However, caffeine is rarely consumed in a high enough concentration to have significant effects, nor is it consumed in secret in order to confer some advantage not obvious to others. Ritalin, on the other hand, is different in terms of its availability, its acceptance and the degree to which it produces beneficial effects. Thus, a person who values fair competition and ‘honest’ work in achieving her goals takes a risk of acting dishonestly if she decides to use cognitive enhancers.

5.12 Survey on Risks

I recently conducted a survey in which respondents were asked about some of the most prevalent issues to date in the literature (such as the use of enhancers for therapy vs. enhancement, natural vs. unnatural stimulants, identity, and social access).252 Respondents were divided into four groups, and each one received the same questions, which were framed in four different ways: standard risks to health, benefits, threats to one’s sense of morality, and threats to one’s aspirations. The preliminary results suggest that people are not discouraged from using cognitive enhancers even when the question is framed exclusively in terms of possible harmful side effects. Although only some of the results were statistically significant, there were some interesting differences suggesting possible avenues for future research. The first two questions asked respondents about the acceptability of taking two different sorts of stimulants to improve alertness and performance: natural (coffee) and unnatural (Ritalin). Not surprisingly, when the possible physical harm was emphasized, people were less likely to find this acceptable whether the stimulant was coffee or Ritalin. Coffee is a daily staple for many, so it was not surprising to see that people were more negative about its acceptability when they were made aware of its harmful physical effects.
Most people responded that it was acceptable to take coffee to improve performance when the question focused on aspirational and moral harms, and the same result held true when only benefits were described. When a less familiar stimulant was substituted for coffee (Ritalin), the differences between responses flattened out across all four frameworks. In general, there were roughly equal numbers of positive and negative responses to the acceptability of Ritalin before an interview or exam, regardless of the type of harm or benefit it might involve. The most positive responses, however, were from people who read about possible harm to their aspirations. This lack of significant difference in the question about cognitive enhancers is telling because the traditional understanding of risk predicts that people would have responded more favourably to the benefits frame and less so when possible physical harm was pointed out. Also, if the subject of the question was unfamiliar, one would have expected the responses to be generally more negative, but this is not what occurred. If cognitive enhancers were not unfamiliar to the respondents, then we would have expected the results to mirror those in the question about coffee: but this is not what we see either. One possible explanation for the lack of response to the framing for Ritalin is that the change in topic brings out the competing values that people apply to new situations. There was also an interesting difference in a question concerning the therapeutic use of cognitive enhancers. More people found it acceptable to use these drugs when someone was suffering from an illness or disorder leading to cognitive deficits when the benefits of doing so were described. It was also more acceptable when the chance of not meeting one’s aspirations was explained.

252 A description of the survey design and the questions I asked can be found in Appendix A. A summary of some preliminary results can be found in Appendix B.
However, it was less acceptable when the question discussed the possibility that even therapeutic use of these drugs might produce a lack of fairness. This suggests that there could be a difference between acting to avoid an aspirational risk and acting to avoid a moral risk, although further refinement of the question is necessary. Finally, people were more likely to say that social access to cognitive enhancing drugs should be limited if they were told of the possible physical harm, when compared to the other three frames. There was a significant difference, however, when moral harm was emphasized. In this case people were more likely to say that the drugs should be widely available to everyone. Overall, this survey has suggested that the most similar results are produced when possible harm to one’s aspirations is emphasized and when benefits are emphasized. Responses to questions framed in terms of moral harms were also similar to those framed in terms of benefits, except when it came to therapeutic uses of the drug and social access. Further research would be invaluable in getting a better understanding of these differences.

5.13 Conclusion

The assumption that people make poor or irrational judgments about risk is challenged by some of the empirical studies I have outlined. While people might rely on various heuristics in reasoning that can produce bias and overconfidence, it seems that in many cases they are simply being influenced by other concerns and values. Apart from the physical or economic harm which makes a risk unwanted, the threat of betrayal or of not meeting one’s goals or desires is also an undesirable consequence or outcome. I proposed that the types of values that might provide reasons for action could be loosely arranged in three categories: standard, aspirational and moral. A risk is prescriptive because it carries the message to reduce undesirable effects.
In some cases, however, different sorts of actions can be required to reduce different sorts of harm. To tell someone that taking a drug like Ritalin is a risk might mean that they ought not to take it because it involves physical harm, but it might also mean that they ought to take it if it involves the chance of not meeting a goal. The neuroethics survey also suggests that we should not assume people are only concerned with physical harm and benefits. Such findings could be of great relevance when policy-makers seek out public opinion concerning new technologies or procedures involving risks. For instance, if a person is asked about the use of Ritalin as a performance enhancer given its physical risks, they might find it acceptable if they are attending less to physical side effects than to achieving their goals through their performance. The result looks like an acceptance of physical risk, but in fact the explanation is that the respondent considers her or his aspirations more important than physical safety. On the view that risks are threats to what is of value, and that there are different types of values, it might sometimes be the case that people are taking one risk in order to avoid another. The prescriptive sense of risk can therefore be further enhanced by the recognition that avoiding harm might be the same reason for action, but the action that one ought to take might vary depending on the type of harm at issue. Studies which show that people sometimes exhibit what is thought to be risk-seeking or potentially irrational behaviour (by choosing a higher chance of physical injury or even death to avoid a minimal chance of betrayal, for instance) suggest that the prescriptive sense of risk is inconsistent and vague. However, this assumes a very narrow understanding of harm and what might count as a risk for a person.
If a risk is normative because it recommends whatever action is required to avoid a harm, and the notion of harm is whatever negatively affects what is of value to a person, then some of this inconsistency disappears. There is no reason to think that what I need to do in order to avoid the possibility of failing to meet my career aspirations will be the same thing I need to do in order to avoid physical injury. In fact, there will often be more than one risk involved in a decision, and the resolution might very well be a matter of determining what matters more, or what has more value to me. The empirical study on cognitive enhancers, while not meant to be definitive, suggests that there is in fact some difference in how people respond to risks involving the chance of physical harm, not meeting or achieving their goals, and cheating or doing something wrong. The results suggest that it is worth pursuing further research on this topic and developing the risk frames.

6 Conclusion

The scholarly literature on risk is extensive and includes various theoretical conceptions of and strategies for determining risk, probabilistic theories to measure and quantify it, empirical investigations into how people perceive and are influenced by various sorts of risk, and practical applications of all this information in decisions and policies affecting society. It is therefore a difficult topic to explore without inadvertently challenging one or more entrenched ideas. Often, in addressing a certain set of concerns, another set arises to further complicate any claims one hopes to make. However, such problems are a natural result of risk’s inherent complexity and the need for a diversity of perspectives to truly understand it. The primary assumption of many such perspectives is that risk is only a descriptive concept because it refers to the possibility that some undesirable or harmful state of reality will result from an activity or event.
I used some illustrations at the beginning of this thesis to suggest that to say that something is a risk is to imply that one ought to avoid or minimize the chance of harm. A risk can be both a descriptive and a normative concept. To some, the normative implication of risk might seem to be obvious since the chance of physical harm, injury, or death is undesirable. What is not obvious, however, is that the harm a risk involves can be understood as simply as this. An event, outcome or choice is risky because of the recognition that it may involve something bad, undesirable or harmful. Without this recognition, a risk is merely an uncertain event, one that may produce good, bad or neutral consequences. Some types of harm, however, might be objective in the way that primary goods are objective, because they are what every rational person might want to avoid. Even when there seems to be consensus over what counts as a harm, it is still a harm to someone. To say that a baseball flying in the air in your direction is a risk is to say that it might cause you harm. We mean that it is a risk to you because you will likely get injured or hurt if it hits you. To say that an asteroid flying through space in Jupiter’s direction is a risk is to say that it might cause damage or some harm if it hits. In this case, we mean that it is a risk to Jupiter on our view, and not that we think planets are things that can be harmed. In addition, there is no reason to think that what is bad or undesirable for one person will always be bad or undesirable for another. Some people are very concerned with their reputation while others are less concerned and may not even have a reputation to speak of. Similarly, physical pain for one person might be intolerable while another person might be able to endure quite a bit of pain.
The early historical emphasis on the measurable and quantitative aspects of risk led to a tendency to omit or underestimate the role of subjective evaluations or circumstances. Bernoulli was one of the first to suggest that risk was not always objectively harmful, since people would evaluate the consequences of a decision or action in different ways. A poor man, for instance, would find losing a small amount of money more harmful than a wealthy man would. He recognized that even if the facts are the same for everyone affected, the utility (desirability, usefulness, satisfaction, etc.) of the outcomes or the situation is not. It was also discovered that people do not consistently follow the laws of probability, since they often make gambles and pay for insurance, both of which are losing propositions. Arrow, for example, concluded that some risks are considered to be worth taking because they afford a degree of entertainment (such as going to a casino) while others are not because of the severity of the consequences (losing one’s house in a fire without having paid for insurance) if they occur. In present-day risk analysis, subjective evaluations are thought to have little bearing on determining how risky a new technology, process or product is. Typically, a risk is the measure of annual probabilities of fatality, but it might also measure injuries, illness, economic loss and environmental damage. These are ‘real’ risks, while the risks that are matters of context and subjective evaluations are merely perceptions. Obviously, it is useful to know the likelihood that a technological application, health measure or product will result in serious injury or harm. Most risk analyses are therefore attempts to measure specific sorts of harm. But there is a problem with defining risk simply as the chance of a harm of this sort, especially given the assumption that such harms are somehow objective and not also matters of evaluation.
This objective perspective of risk is very influential despite countervailing historical tendencies, as well as more modern work in socially and culturally based approaches that challenge it. Rescher’s philosophical account of risk assessment begins with his claim that risk is an ontological category; it is a fact about the world that has nothing to do with a person’s recognition or judgment. He is quick to point out, however, that subjective evaluations matter, but only insofar as they are required to weigh one risk against another or in determining how tolerable or acceptable risks might be. Beginning with the simple notion that a risk has two ineliminable components, chance and harm, Rescher adopts the risk analysts’ view that what is harmful or negative can be assumed. If risks are objective matters of fact, where both chance and harm are independent of what people think about them, a risk could still be said to provide people with reasons to act. A risk involves the chance of harm or something undesirable, and so one ought to avoid or minimize whatever might lead to this effect. It could even be argued that in fact risk is not normative but that rationality is, and that a risk simply describes or points out those things in the world that it is rational to avoid. The descriptive sense of risk is therefore all that this view requires because it points out that a possible harm exists but does not, or does not need to be understood to, carry an implicit message about what one ought to do. However, as I argued, subjective evaluations are not limited to comparisons between risks but can in fact determine what is a risk for someone. Values should be incorporated more directly into how a risk is understood and, in doing so, can be incorporated into explaining how it can prescribe action. Of course, this does not change the fact that there might still be reason to avoid something potentially harmful. But it does suggest that a richer explanation can be provided.
An alternative, and perhaps more comprehensive, account of the normative concept of risk involves drawing a parallel with morality. This draws attention to the fact that a person might be held accountable, at least in part, for knowingly taking a risk in much the same way that a person can be held accountable when she or he violates moral norms. A risk is normative since it provides us with reasons to act in order to avoid something harmful or undesirable. What counts as harmful for a given individual is whatever negatively affects something that he or she values, such as physical well-being or aspirations. Risk can therefore be said to change decisions in ways that are similar to how morality changes decisions, yet not quite as strongly, since risks are different from notions of right and wrong (or legal and illegal) and demand less of us. There is no punishment if you fail to heed the warning of a risk except that you might incur the harm that you were warned about. At the same time, risk is not as weak as the rules of etiquette in prescribing action, since the consequences usually do involve serious harm. It seems that risk falls somewhere in between these examples. Rather than ascribing our values to the various options of a risky choice, the account I provided suggests that risk acts like a filter in the way that morality does, since values are internal to risk. On this view, risks are weakly normative because they can be overridden by other concerns. This explanation, as a comparison to morality, more adequately captures what we mean when we call something a risk but mean to do more than just describe. The action one ought to take to avoid harm will depend on what type of harm is at issue. Often, a risky situation will involve more than one type of harm, or more specifically, it will involve a threat to more than one type of value.
A number of empirical investigations have shown that people employ various heuristics leading to biases and misconceptions in situations involving risk. However, other studies indicate that people do not have a homogeneous conception of harm, but in fact attend to the threat of betrayal, injustice and obligation. Both Lopes and Davies make the further claim that our hopes and aspirations inform our understanding of risk since they can be threatened. However, on their view, all risks are aspirational; I argued that this is problematic and seems to water down what it means to aspire to something. There is a distinction between aspiring to excel academically or to pursue a particular profession and aspiring to avoid injury or illness. Finally, many of the empirically based studies of risk find that people are poor judges of risk, perceive risks differently than experts, or sometimes make irrational decisions. As I have pointed out, however, many of these studies begin with the assumption that a risk is an objective and usually measurable fact. When a non-expert ranks nuclear power as the number one risk, while experts rank it 20th, the mistake is thought to be due to misinformation. There is reason to think that in a given situation, more than one type of value will be threatened. Furthermore, the sort of action a risk prescribes in order to avoid a harm is not necessarily going to be the same in every instance, but will often be a matter of what sort of value is being threatened, or what sort of harm it involves. For example, a desire to avoid the risk of betrayal might actually lead someone to take a risk involving injury. It may not always be possible to detect such a situation, particularly given the prevalence of the narrow or descriptive view of a risk simply as the chance of some harm, usually meaning injury, death or the loss of money.
When a risk is understood as a matter of what is of value to people, and as both a normative and a descriptive concept, choosing an option with a higher chance of physical injury or financial loss might not be just irrationality or poor judgment; it might be that avoiding a risk involving one type of harm is more important than avoiding a risk involving another type of harm. Based on these ideas, I proposed three very broad categories of values that, when threatened, might prescribe different actions in a given situation. Risks can involve threats to the standard sorts of values, such as the desire to avoid physical harm; aspirational values, such as career goals or athletic achievement; and moral values, such as one’s sense of justice or honesty. In exploring the issues surrounding the use of cognitive enhancers, it became apparent that should the usual sort of public opinion survey be conducted, only the possible physical risks of these drugs and treatments would be emphasized. However, this could be problematic because the use of such enhancements might also pose risks to one’s aspirations and sense of honesty or fairness. It would therefore be difficult to draw conclusions about people’s actual attitudes or responses to the risks involved without taking into account these other sources of potential harm. I was able to incorporate a preliminary test of these risk value categories. The results, while not definitive, do suggest that there are differences in how people understood risks involving physical harm, the achievement of goals, and honesty and justice. Furthermore, there are some refinements that I could make in order to more clearly frame the questions in terms of risk. From many of the textual comments participants left, it seems that people were motivated to avoid risks involving each of the three values in different ways.
At times, they even seemed to be aware that in avoiding the possibility of dishonesty, they were in fact taking a risk which threatened their aspirations or goals. It might be possible to design an experiment to bring out these differences more clearly. In exploring the normative side of risk, I hope to have contributed to a richer understanding of risk in general. In addition, I wanted to provide further detail to those instances where to call something a risk is to do more than merely describe events in the world. I have not been convinced that non-expert judgments of risk are simply due to misinformation or irrationality, or that a distinction can be made between ‘real’ and ‘perceived’ risks. These two issues seemed connected in an interesting way, which might be because they can be explained in part by the normative concept of risk. Risks are not easily defined, nor are they always just a matter of weighing harms and benefits. In order to make assessments about what it means when a person either takes or avoids a risk of some sort, it is important to determine what sort of harm they might be worried about. Often situations will involve possible harm to more than one thing of value, and therefore, in order to avoid the chance of harm of one type, it might be necessary to take the chance of some other type of harm. The failure to respond to the calculated probabilities of technological harm is therefore not necessarily irrational, but might indicate that some other consideration outweighs it. The aim of this dissertation has been to provide structure to the notion that risks can prescribe action. This is not a claim I set out to prove from first principles about risk; instead, it was an attempt to further explain what already seems to occur in the way we speak about risks. To call something a risk is to both describe events in the world and to prescribe action.
I have developed this account by addressing four different aspects of risk. First, I showed through its historical evolution how risk has been adopted by scientists and technicians as a narrowly construed concept, one challenged by more recent research suggesting that risk is more than a simple probabilistic assessment of hazards. Second, I addressed Rescher’s philosophically motivated account of risk, which he feels underpins the technological or standard view of risk and grounds it in the notion that risks are objective matters of fact in the world. I argued against the idea that risk can be understood ontologically by summarizing Thompson’s response to Rescher and providing reasons to think that an epistemological understanding is more convincing. Third, I provided structure to the explanation that a risk could prescribe action, and showed how it might do so by drawing a parallel with moral norms and etiquette rather than relying merely on rationality. Finally, I incorporated some of the research from psychometric theories of risk that challenges the standard view, and elaborated on these claims by demonstrating that there are at least three categories of values that risks might threaten. Given these different sorts of values, there is reason to think that the actions prescribed to avoid them will not be the same in all cases. This is meant to do some work in addressing the idea that when people make decisions about risk which do not match up with those of the experts, they are not merely irrational or misinformed.

Bibliography

Ahmad, Rana, Jennifer Bailey, and Peter Danielson. “A Comparative Analysis of an Innovative Ethical Tool.” Public Understanding of Science (2008).

Ahmad, Rana, Zosia Bornik, Peter Danielson, H. Dowlatabadi, Ed Levy, Holly Longstaff, and Jennifer Wilkin. “A Web-Based Instrument to Model Social Norms: Nerd Design and Results.” Journal of Integrated Assessment 6, no. 2 (2006).

Arrow, Kenneth.
Essays in the Theory of Risk-Bearing. Chicago: Markham Publishing Company, 1971.

Asch, D., J. Baron, J. Hershey, H. Kunreuther, J. Meszaros, I. Ritov, and M. Spranca. “Do Physicians Have a Bias Toward Action?” Medical Decision Making 14 (1994): 183.

Baram, M. “Cost-Benefit Analysis: An Inadequate Basis for Health, Safety, and Environmental Regulatory Decisionmaking.” Ecology Law Quarterly 8 (1980): 473-531.

Baron, Jonathan. “Tradeoffs Among Reasons for Action.” Journal for the Theory of Social Behaviour 16, no. 2 (1986): 173-195.

———. “The Effects of Normative Beliefs on Anticipated Emotions.” Journal of Personality and Social Psychology 64 (1992): 347-355.

———. Morality and Rational Choice. Boston: Kluwer Academic Publishers, 1993.

———. “Biases in the Quantifiable Measurement of Values for Public Decisions.” Psychological Bulletin 122 (1997): 72-88.

———. Judgment Misguided: Intuition and Error in Public Decision Making. New York: Oxford University Press, 1998.

Baron, Jonathan, and I. Ritov. “Omission Bias, Individual Differences, and Normality.” Organizational Behavior and Human Decision Processes 94 (2004): 74-85.

Beck, Ulrich. Risk Society: Towards a New Modernity. London: Sage, 1992.

Bernoulli, Daniel. “Exposition of a New Theory on the Measurement of Risk.” Econometrica 22 (1954): 123-136.

Bernstein, Peter L. Against the Gods: The Remarkable Story of Risk. New York: John Wiley and Sons, 1998.

Bromley, P., and S.P. Curley. “Individual Differences in Risk Taking.” In Risk-Taking Behaviour, edited by F. Yates, 87-132. New York: Wiley, 1992.

Chatterjee, Anjan. “The Promise and Predicament of Cosmetic Neurology.” Journal of Medical Ethics 32 (2006): 110-113.

Cohen, G. J., and S. G. Pauker. “How Do Physicians Weigh Iatrogenic Complications?” Journal of General Internal Medicine 9 (1994): 20-23.

Conover, W.J. Practical Nonparametric Statistics. Second Edition. New York: John Wiley & Sons, 1980.

Cranor, Carl.
“Towards a Non-Consequentialist Approach to Acceptable Risks.” In Risk: Philosophical Perspectives, edited by Tim Lewens. New York: Routledge, 2007.

Cushman, F., L. Young, and M. Hauser. “The Role of Conscious Reasoning and Intuition in Moral Judgment.” Psychological Science 17, no. 12 (2006): 1082-1089.

Danielson, Peter, Rana Ahmad, Zosia Bornik, H. Dowlatabadi, and Ed Levy. “Deep, Cheap and Improvable: Dynamic Democratic Norms and the Ethics of Biotechnology.” Journal of Philosophical Research, Special Supplement: Ethics and the Life Sciences (2007): 315-326.

David, F. N. Games, Gods and Gambling. Hafner Publishing Company, 1962.

Davies, Greg B. “Rethinking Risk Attitude: Aspiration as Pure Risk.” Theory and Decision 61 (2006): 159-190.

Denney, David. Risk and Society. Thousand Oaks, California: Sage, 2005.

Doris, John. Lack of Character. New York: Cambridge University Press, 2002.

Douglas, Mary. Risk and Blame. London: Routledge, 1992.

Douglas, Mary, and Aaron Wildavsky. Risk and Culture. Berkeley: University of California Press, 1982.

Ericson, R. V., and Aaron Doyle. Risk and Morality. Toronto: University of Toronto Press, 2003.

Evans, J. “Normative and Descriptive Consequentialism.” Behavioral and Brain Sciences 17 (1994): 15-16.

Ewald, F. “Two Infinities of Risk.” In The Politics of Everyday Fear, edited by B. Massumi, 221-228. Minneapolis, Minnesota: University of Minnesota Press, 1993.

Farah, Martha, et al. “Neurocognitive Enhancement: What Can We Do and What Should We Do?” Nature Reviews Neuroscience 5 (2004): 421-425.

Feldman, Fred. “Actual Utility: The Objection from Impracticality and the Move to Expected Utility.” Philosophical Studies 129: 49-79.

Feldman, J., J. Miyamoto, and E. Loftus. “Are Actions Regretted More Than Inactions?” Organizational Behavior and Human Decision Processes 78 (1999): 232-255.

Fischhoff, Baruch, et al. Acceptable Risk. Cambridge, UK: Cambridge University Press, 1981.

Fischhoff, Baruch, Stephen R.
Watson, and Chris Hope. “Defining Risk.” In Readings in Risk, edited by Theodore S. Glickman and Michael Gough. Washington, D.C.: Resources for the Future, 1990.

Franklin, J. “Introduction.” In The Politics of the Risk Society, edited by J. Franklin. Cambridge: Polity, 1998.

Funtowicz, S. O., and J. R. Ravetz. “Three Types of Risk Assessment and the Emergence of Post-Normal Science.” In Social Theories of Risk, edited by S. Krimsky and D. Golding, 251-274. Westport, CT: Praeger-Greenwood, 1992.

Furedi, Frank. Culture of Fear. Washington, D.C.: Cassell Publishers, 1997.

Garland, David. “The Rise of Risk.” In Risk and Morality, edited by R. V. Ericson and Aaron Doyle, 48-86. Toronto: University of Toronto Press, 2003.

Giddens, A. The Consequences of Modernity. Cambridge: Polity Press, 1990.

Gigerenzer, Gerd, et al. The Empire of Chance: How Probability Changed Science and Everyday Life. New York: Cambridge University Press, 1989.

Gillette, C., and J. Krier. “Risk, Courts and Agencies.” University of Pennsylvania Law Review 38 (1990): 1077-1199.

Glannon, Walter. “Neuroethics.” Bioethics 20, no. 1 (2006): 37-52.

Glickman, Theodore, and Michael Gough, eds. Readings in Risk. Washington, D.C.: Resources for the Future, 1990.

Gross, J.L., and S. Rayner. Measuring Culture. New York: Columbia University Press, 1985.

Hacking, Ian. The Emergence of Probability. New York: Cambridge University Press, 1975.

———. “Risk and Dirt.” In Risk and Morality, edited by R. V. Ericson and Aaron Doyle, 22-47. Toronto: University of Toronto Press, 2003.

Haidt, Jonathan. “The Emotional Dog and Its Rational Tail: A Social Intuitionist Approach to Moral Judgment.” Psychological Review 108, no. 4 (2001): 814-834.

Haidt, J., and J. Baron. “Social Roles and the Moral Judgment of Acts and Omissions.” European Journal of Social Psychology 26 (1996): 201-218.

Hansson, Sven Ove. “Philosophical Perspectives on Risk.” Techne 8, no. 1 (2004): 1-25.

Hyman, E.L., and B.
Stiftel. Combining Facts and Values in Environmental Impact Assessment. Boulder, CO: Westview Press, 1988.

Illes, Judy, and Eric Racine. “Neuroethics - From Neurotechnology to Healthcare.” Cambridge Quarterly of Healthcare Ethics 16 (2007): 125-127.

Inhaber, H. “Risk with Energy from Conventional and Nonconventional Sources.” Science 203 (1979): 718-723.

Isen, A., and P. Levin. “Effect of Feeling Good on Helping: Cookies and Kindness.” Journal of Personality and Social Psychology 21 (1972): 384-388.

Jasanoff, Sheila. “The Songlines of Risk.” Environmental Values 8 (1999): 135-152.

Kahneman, Daniel, and Amos Tversky. “On the Psychology of Prediction.” Psychological Review 80 (1973): 237-251.

———. “Prospect Theory: An Analysis of Decision Under Risk.” Econometrica 47, no. 2 (1979): 263-291.

———. “Choices, Values and Frames.” American Psychologist 39, no. 4 (1984): 342-347.

Kasperson, R., et al. “The Social Amplification of Risk: A Conceptual Framework.” Risk Analysis 8, no. 2: 177-187.

Knight, Frank. Risk, Uncertainty and Profit. New York: Century Press, 1964.

Koehler, J., and A. Gershoff. “Betrayal Aversion: When Agents of Protection Become Agents of Harm.” Organizational Behavior and Human Decision Processes 90 (2003): 244-261.

Korsgaard, Christine M. The Sources of Normativity. New York: Cambridge University Press, 1996.

Krimsky, S., and D. Golding. Social Theories of Risk. Westport, CT: Praeger-Greenwood, 1992.

Lichtenstein, S., P. Slovic, B. Fischhoff, M. Layman, and B. Combs. “Judged Frequency of Lethal Events.” Journal of Experimental Psychology: Human Learning and Memory 4 (1978): 551-578.

Longino, Helen. Science as Social Knowledge: Values and Objectivity in Scientific Inquiry. Princeton: Princeton University Press, 1990.

Lopes, Lola. “Some Thoughts on the Psychological Conception of Risk.” Journal of Experimental Psychology 9 (1983): 137-144.

———.
“Between Hope and Fear: The Psychology of Risk.” In Research on Judgment and Decision Making, edited by William Goldstein and Robin Hogarth, 681-720. New York: Cambridge University Press, 1997.

Lopes, Lola, and Gregg C. Oden. “The Role of Aspiration Level in Risky Choice: A Comparison of Cumulative Prospect Theory and SP/A Theory.” Journal of Mathematical Psychology 43 (1999): 286-313.

Luhmann, Niklas. Risk: A Sociological Theory. New York: Aldine de Gruyter, 1993.

Lupton, Deborah. Risk. New York: Routledge, 1999.

Machlis, G.E., and E.A. Rosa. “Desired Risk: Broadening the Social Amplification of Risk Framework.” Risk Analysis 10 (1990): 161-168.

Malm, H. “Killing, Letting Die and Simple Conflicts.” Philosophy and Public Affairs 18 (1990): 238-258.

Milgram, S. Obedience to Authority. New York: Harper and Row, 1974.

Miller, Edward. “Do the Ignorant Accumulate the Money?” Working Paper, University of New Orleans, 1995.

Morgan, M.G. “Choosing and Managing Technology-Induced Risks.” In Readings in Risk, edited by T.S. Glickman and M. Gough, 5-15. Washington, D.C.: Resources for the Future, 1990.

Munier, Bertrand, and Mark Machina. “Introduction.” In Models and Experiments in Risk and Rationality, edited by Bertrand Munier and Mark Machina. Boston: Kluwer Academic, 1994.

Nagel, Saskia, and Nicolas Neubauer. “A Framework to Systematize Positions in Neuroethics.” Essays in Philosophy 6, no. 1 (2005): 1-15.

Nagel, Thomas. “Moral Luck.” In Mortal Questions. Cambridge: Cambridge University Press, 1979.

Neumann, John von, and Oskar Morgenstern. Theory of Games and Economic Behavior. Princeton, New Jersey: Princeton University Press, 1944.

Nygren, Thomas. “Reacting to Perceived High- and Low-Risk Win-Lose Opportunities in a Risky Decision-Making Task: Is It Framing or Affect or Both?” Motivation and Emotion 22, no. 1 (1998): 73-98.

Parfit, Derek. “Future Generations: Further Problems.” Philosophy and Public Affairs 11, no. 2 (1982): 113-172.

Payne, J.W., D.
Laughhunn, and R. Crum. “Further Tests of Aspiration Level Effects in Risky Choice Behavior.” Management Science 27, no. 8 (1981): 953-958.

Pidgeon, N., C. Hood, and D. Jones. “Risk Perception.” In Risk: Analysis, Perception and Management, edited by The Royal Society Study Group, 89-134. London: The Royal Society, 1992.

Pratt, J.W. “Risk Aversion in the Small and in the Large.” Econometrica 32 (1964): 122-136.

Racine, Eric, H.Z. Adriaan Van Der Loos, and Judy Illes. “Internet Marketing of Neuroproducts: New Practices and Healthcare Policy Challenges.” Cambridge Quarterly of Healthcare Ethics 16 (2007): 181-194.

Railton, Peter. Facts, Values and Norms: Essays Toward a Morality of Consequence. Cambridge: Cambridge University Press, 2003.

Rasmussen, Norman. “The Application of Probabilistic Risk Assessment Techniques to Energy Technologies.” Annual Review of Energy 6 (1981): 123-138.

Rawls, John. A Theory of Justice. Cambridge, Massachusetts: Belknap Press, 1999.

Redelmeier, D., and E. Shafir. “Medical Decision Making in Situations That Offer Multiple Alternatives.” Journal of the American Medical Association 273, no. 4 (1995): 302-305.

Renn, Ortwin. “Risk Perception and Risk Management: A Review.” Risk Abstracts 7, no. 1 (1990): 1-9.

———. “Three Decades of Risk Research: Accomplishments and New Challenges.” Journal of Risk Research 1, no. 1 (1998): 49-71.

Rescher, Nicholas. Risk: A Philosophical Introduction to the Theory of Risk Evaluation and Management. Washington: University Press of America, 1983.

Ritov, I., and J. Baron. “Protected Values and Omission Bias.” Organizational Behavior and Human Decision Processes 79 (1999): 79-94.

Rodricks, Joseph, and Michael R. Taylor. “Applications of Risk Assessment to Food Safety Decision Making.” In Readings in Risk, edited by Theodore S. Glickman and Michael Gough. Washington, D.C.: Resources for the Future, 1990.

Rousseau, D., et al.
“Not So Different After All: A Cross-Discipline View of Trust.” Academy of Management Review 23 (1998): 393-404.

Ruck, Bayerische, ed. Risk is a Constant. Munich: Knesebeek, 1993.

Sahakian, Barbara, and Sharon Morein-Zamir. “Professor’s Little Helper.” Nature 450, no. 7173 (2007): 1157.

Savage, L.J. The Foundations of Statistics. New York: Wiley, 1954.

Schmidtz, David. Rational Choice and Moral Agency. Princeton, New Jersey: Princeton University Press, 1995.

Shrader-Frechette, Kristin. Risk Analysis and Scientific Method. Hingham, MA: D. Reidel Publishing Company, 1985.

———. Risk and Rationality. Berkeley: University of California Press, 1991.

Singer, Peter. “Introduction.” In Ethics, edited by Peter Singer. Oxford: Oxford University Press, 1994.

Slovic, Paul. “Perception of Risk.” Science 236 (1987): 280-285.

———. The Perception of Risk. Sterling, VA: Earthscan Publications, Ltd., 2000.

Slovic, Paul, B. Fischhoff, and S. Lichtenstein. “Rating the Risks.” In Readings in Risk, edited by Theodore Glickman and Michael Gough. Washington, D.C.: Resources for the Future, 1990.

Smith, Martin. “Mad Cows and Mad Money: Problems of Risk in the Making and Understanding of Policy.” The British Journal of Politics and International Relations 6 (2004): 312-332.

Starr, Chauncey. “Social Benefit versus Technological Risk.” Science 165 (1969): 1232-1238.

Starr, Chauncey, and Chris Whipple. “Risks of Risk Decisions.” Science 208 (1980): 1114-1119.

Stigler, S. M. The History of Statistics: The Measurement of Uncertainty Before 1900. Cambridge, Massachusetts: Belknap Press, 1986.

Sumner, William Graham. Folkways: A Study of the Sociological Importance of Usages, Manners, Customs, Mores, and Morals. New York: New American Library, 1940.

Sunstein, Cass. “Moral Heuristics and Risk.” In Risk: Philosophical Perspectives, edited by Tim Lewens, 156-170. New York: Routledge, 2007.

Taylor-Gooby, Peter, and Jens Zinn. Risk in Social Science. Toronto: Oxford University Press, 2006.
Tetlock, P. “The Consequences of Taking Consequentialism Seriously.” Behavioral and Brain Sciences 17 (1994): 31-32.

———. “Coping with Tradeoffs.” In Elements of Reason: Cognition, Choice and the Bounds of Rationality, edited by A. Lupia, S. Popkin, and M. McCubbins. Cambridge: Cambridge University Press, 2000.

Thaler, Richard H. Quasi-Rational Economics. New York: Russell Sage Foundation, 1991.

Thomson, Judith Jarvis. Rights, Restitution and Risk. Cambridge, Massachusetts: Harvard University Press, 1986.

Thompson, Paul. “Risking or Being Willing: Hamlet and the DC-10.” The Journal of Value Inquiry 19, no. 4 (1985): 301-310.

———. “The Philosophical Foundations of Risk.” The Southern Journal of Philosophy 24, no. 2 (1986): 278.

Thompson, Paul, and W. R. Dean. “Competing Conceptions of Risk.” Risk: Health, Safety and Environment 7 (1996): 361-384.

Todhunter, Isaac. A History of the Mathematical Theory of Probability from the Time of Pascal to that of Laplace. New York: G.E. Stechert & Co., 1931.

Tversky, Amos. “The Psychology of Risk.” In Quantifying the Market Risk Premium Phenomenon for Investment Decision Making, edited by William F. Sharpe, 73-77. Charlottesville, Virginia: The Institute of Chartered Financial Analysts, 1990.

Tversky, Amos, and Daniel Kahneman. “Advances in Prospect Theory: Cumulative Representation of Uncertainty.” Journal of Risk and Uncertainty 5, no. 4 (1992): 297-323.

Van Loon, Joost. Risk and Technological Culture. New York: Routledge, 2002.

Viscusi, W.K. “Corporate Risk Analysis: A Reckless Act?” Stanford Law Review 52 (2000): 547-597.

Wilson, Richard. “Analyzing the Daily Risks of Life.” In Readings in Risk, edited by Theodore S. Glickman and Michael Gough. Washington, D.C.: Resources for the Future, 1990.

Wynne, B. “Public Perceptions of Risk.” In The Urban Transportation of Irradiated Fuel, edited by J. Aurrey, 246-259. London: Macmillan, 1984.

———.
“Risk and Social Learning: Reification to Engagement.” In Social Theories of Risk, edited by S. Krimsky and D. Golding, 275-300. Westport, CT: Praeger-Greenwood, 1992.

Zimbardo, P., W. Banks, C. Haney, and D. Jaffee. “The Mind is a Formidable Jailer: A Pirandellian Prison.” New York Times Magazine (April 8, 1973).

Zinn, Jens. Social Theories of Risk and Uncertainty: An Introduction. Malden, MA: Blackwell, 2008.

Appendix A: Survey Methodology [253]

Introduction and Hypotheses

As part of a larger project, this empirical study was conducted through the use of a survey to investigate public attitudes concerning the use of cognitive enhancers and some of the related ethical challenges that they create. To supplement my account of normative risk, I designed part of the survey to investigate whether the different categories of values I have proposed are detectable when used to frame risk. The emerging field of neuroethics seemed an ideal one for my purposes, since the topic of cognitive enhancement is controversial, ethically challenging and fairly new, making it more attractive to potential participants. More importantly, the use of drugs to improve memory and performance involves risks which are still largely unknown. Aside from possible physical harm, the use of these drugs is so new that it is unclear whether it amounts to cheating, and they are not yet widely accessible to everyone. Cognitive enhancement involves employing pharmaceuticals and new technologies like deep brain stimulation to enhance cognitive functions like memory, recall, retention of information and concentration. It proves to be an ideal test case for my theory since it can involve both benefits in terms of cognitive performance, as well as risks to one’s health, career status, and sense of morality. Additionally, the survey was designed on the NERD (Norms Evolving in Response to Dilemmas) platform, which was developed by the NERD research team at the W.
Maurice Young Centre for Applied Ethics at the University of British Columbia. One of the benefits of an online methodology is that no in-person recruitment was necessary, hence minimizing potential bias from participants performing for, or trying to meet the perceived expectations of, recruiters. Furthermore, the anonymity of participants was ensured, the data analysis was more efficient and accurate, and the number of participants was far greater than in the typical face-to-face methods used to test similar hypotheses. For the purposes of this work, I chose to analyze the data of 200 participants, although I continued to collect responses beyond this benchmark to allow for further analysis. The hypothesis was, first, that questions emphasizing physical risks would produce more negative responses than those emphasizing possible benefits; and second, that when questions were framed in terms of possible threats to one’s achievements and aspirations, as well as one’s sense of honesty, the responses would differ.

253. The Neuroethics survey can be found online at: www.yourviews.ubc.ca

The Survey Platform [254]

As explained, the design of this empirical test is based on a novel survey platform developed by a research team at the W. Maurice Young Centre for Applied Ethics at the University of British Columbia. Originally its purpose was to test the hypothesis that as people were presented with more difficult ethical dilemmas, their adherence to a particular set of norms of behaviour and thinking would lessen and in fact shift as the dilemmas became more difficult to resolve. [255] To address some of the concerns surrounding traditional methods of public consultation on biotechnology, the research team used focus groups and surveys to gain a better understanding, not only of public attitudes on defined topics such as salmon

254. Portions of this section describing the survey platform are excerpts from a recent article by Ahmad et al., Public Understanding of Science, 2008.
255. Ahmad et al., “A Web-Based Instrument to Model Social Norms”; Danielson et al., “Deep, Cheap and Improvable.”

genomics, but also of what elements are important in creating effective public consultation tools. NERD (Norms Evolving in Response to Dilemmas) was devised in part to address the problems with traditional surveys, focus groups and other methods of public consultation, by bridging the gap between perceived public opinion and actual responses, and also as an experimental tool to test the hypothesis that people’s norms are not static. NERD is an online, easily accessible survey platform that is readily adapted to accommodate a variety of issues in ethics and applied ethics. It provides a framework within which a researcher can explore any number of empirical questions, as well as address issues of a specific nature and within a particular context, as I have done here. For instance, one of the first iterations of the NERD-type surveys attempts to understand the normative influences on people’s reactions to the introduction of genetic technology in human health applications. It offers respondents the opportunity to answer a carefully constructed set of decision problems, with the option to seek information about the issues or technical details relevant to each question from five “advisors”. The first NERD survey was on human health and genomics and the second was on salmon genomics, both of which focused on different, though equally controversial, sets of issues. Subsequent topics have included the ethical issues surrounding the use of animals in research, as well as an investigation into the effect of group conformity on people’s ethics over a broad range of issues. One of the benefits of using this particular format is the ease with which one can adapt one’s research interests to the platform while obtaining data quite efficiently and at minimal cost.
More important, though, is the way this particular platform allows for a survey design which attempts to elicit the types of responses that a normative theory of risk anticipates. While much of this analysis will focus on the quantitative results in order to confirm or disconfirm the hypotheses, there will also be some substantive qualitative results to draw from. Each of the questions asks participants to comment on the question or issue at hand, which provides them with the opportunity to contribute more substantively than in a traditional survey. Finally, as explained, the survey is online, which enhances my ability to distribute it to a broad audience from a wide range of geographical areas. While this isn’t a necessary part at this stage of the research, it is another feature worth exploring fully in future work, since normative risk should not be unduly influenced by cultural or social factors.

Survey Structure

The neuroethics survey is designed to elicit people’s ethical evaluations about the use of cognitive enhancers. It is structured around three salient issues in the current neuroethics debate that arise from the literature: natural vs. unnatural interventions, therapy vs. enhancement, and social access. There are other issues that make this a field of great interest for ethicists, but they move the discussion away from risk and into meta-ethical worries about what constitutes good or right action. These three categories provide the basis from which the three risk frameworks are constructed and are meant to present participants with the major issues in question.

A brief description of each category is provided below:

1. Natural vs. Unnatural

People make a distinction between the natural and the unnatural. Natural interventions are generally perceived to be much less troubling, and less risky, than unnatural interventions.
For example, few people raise questions about an athlete who trains at high altitudes in order to increase the oxygen-carrying capacity of their blood, but many concerns are raised if an athlete injects themselves with erythropoietin (EPO) to achieve a similar increase in oxygen-carrying capacity. The ‘natural’ method of training is much more acceptable than the ‘unnatural’ method. [256]

256. This also tends to have something to do with the amount of work we perceive ought to be required to perform well. This is a significant part of the neuroethics literature.

2. Therapy vs. Enhancement

There is an important distinction to be made between the use of drugs or medical interventions that are meant to be therapeutic and those that are meant only to enhance. In general, therapeutic interventions are those that restore, rectify or address a particular problem caused by illness, injury or disease. When a person suffers from a decrease (or increase) in normal functioning, a loss of function or some undesirable change, interventions are meant to get them back to normal, or as close to normal as possible. [257]

257. Note that here ‘normal’ is not meant to designate some kind of objective standard, but merely whatever level of functioning a person feels is tolerable or good enough. It is not meant to suggest that there is in fact a ‘normal’ level of functioning, but to contrast with whatever state elicits a therapeutic intervention.

An enhancement, however, does not try to correct a problem with functioning or a deficit caused by illness, injury or disease. Instead it ‘improves’ upon whatever may already be acceptable but which
As with the distinction between the natural and unnatural, most people are less likely to have trouble condoning therapeutic interventions while those meant to enhance are considered more problematic.  3. Social Access Another issue is the social accessibility of resources that both treat and enhance. Some people argue that in order to ensure fairness and justice in a society, resources should be accessible to everyone. In the case of drugs which are meant to treat illnesses, many people argue that these should be widely available. The worry with cognitive enhancers, however, is that they might only be available to those with the resources to pay for access to them which creates an imbalance in society.  In order to keep the structure of the survey simple, the three risk frameworks do not include issues of identity, a potential fourth issue, although it is included in the survey without an associated type of risk. The issue of risks to one’s identity is a topic for further study.  181  Risk Value Frames The survey consists of fifteen questions about cognitive enhancers and related issues as described. In order to test the hypothesis about the different types of risk, the survey is stratified into four streams or groups. Each participant is assigned one of four distinct pathways and this allows a comparison across the groups. Each pathway involves a different combination of risk frameworks involving the first five questions. All participants receive identical versions of the last ten questions, which is meant to see if any of the effects from the risk frames carry over into the more general, non-risk questions. Therefore, as a participant enters the survey, they are assigned one of the four pathways. Each pathway asks the five questions in alternating risk frameworks. In additional to standard, aspirational and moral risk, benefits were an additional category added to provide contrast and to show that its effects are quite separate from that of risk. 
Since many people assume that risky decisions are made merely by weighing the costs and benefits of possible outcomes, this category was included to further test this claim. The first two questions ask about natural vs. unnatural stimulants, the next two questions ask about the differences between memory therapy and memory enhancement, and the fifth question involves the social accessibility of cognitive enhancers. The risk frameworks, and benefits, are applied to each of these categories as shown in Table A.1.

Table A.1 Patterns of risk frames for Questions 1-5

Pattern   Questions   Topic                               Framework
1         1,2         Natural vs. Unnatural Stimulants    Standard
          3,4         Memory Therapy vs. Enhancement      Aspirational
          5           Social Access                       Benefits
2         1,2         Natural vs. Unnatural Stimulants    Aspirational
          3,4         Memory Therapy vs. Enhancement      Benefits
          5           Social Access                       Moral
3         1,2         Natural vs. Unnatural Stimulants    Benefits
          3,4         Memory Therapy vs. Enhancement      Moral
          5           Social Access                       Standard
4         1,2         Natural vs. Unnatural Stimulants    Moral
          3,4         Memory Therapy vs. Enhancement      Standard
          5           Social Access                       Aspirational

This alternating pattern of risk frames was chosen to provide a more robust test of the risk framework than simply using one particular framework for each pathway (i.e., all questions in pattern 1 framed as standard risks, all questions in pattern 2 as aspirational risks, and so on). If each of the patterns had used only a single frame, it would have been more difficult to draw conclusions about the actual effect the different sorts of risk have on how people evaluate the questions: a different pattern of responses would have shown a much weaker association than I was hoping to show, and any conclusions drawn from it would be less convincing.
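Read one way, the stratification in Table A.1 reduces to a simple lookup from pathway and question number to risk frame. The sketch below is a hypothetical illustration, not part of the NERD platform itself; the function names (`assign_pathway`, `frame_for`) and the particular frame rotation are assumptions based on one plausible reading of the table:

```python
# Hypothetical sketch of the stratified assignment described in Table A.1.
# The frame rotation below is an assumption; the pathway and frame names
# follow the text. Questions 6-15 are identical across pathways and so
# need no frame lookup.

# Each pathway assigns one frame to questions 1-2, one to 3-4, one to 5.
PATHWAYS = {
    1: {"q1_2": "standard", "q3_4": "aspirational", "q5": "benefits"},
    2: {"q1_2": "aspirational", "q3_4": "benefits", "q5": "moral"},
    3: {"q1_2": "benefits", "q3_4": "moral", "q5": "standard"},
    4: {"q1_2": "moral", "q3_4": "standard", "q5": "aspirational"},
}


def assign_pathway(participant_id: int) -> int:
    """Rotate participants evenly across the four pathways."""
    return (participant_id % 4) + 1


def frame_for(pathway: int, question: int) -> str:
    """Look up which risk frame a given question uses in a pathway."""
    slot = "q1_2" if question <= 2 else ("q3_4" if question <= 4 else "q5")
    return PATHWAYS[pathway][slot]
```

Keeping the assignment as a table rather than hard-coding it per question is what makes the alternating design easy to rearrange when refining the frames in future iterations.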
The idea, then, is to put the framework to the test within each pattern of answers, to see whether an individual responds differently to different types of risk. If a person responds more negatively to a question involving aspirational risk than to a question involving moral risk, then it is more likely that they are attending to the type of risk rather than to some immeasurable factor or influence. Since my hypothesis is that risk is normative and informed by what a person values, there should be a difference in the way people respond when different values are put at risk.

Survey Questions

As described, the survey consists of a total of fifteen questions. The first five questions in each version of the survey involve the risk frameworks. The following ten questions are more general in content and are not explicitly framed in terms of any particular risk. They are designed to be relevant to the topic and ask the participants about other enhancement-type strategies for learning, such as the use of tutors and private schools, as well as their views on access to health care in general, distributive justice and identity. These general questions follow those using the risk frameworks in order to avoid priming the participants or diluting the effect of the risk frames in the first five questions. Additionally, they are less vital to the primary hypothesis but serve to provide potentially interesting information about general attitudes toward some of the major challenges to the use of cognitive enhancers. They also provide an opportunity for participants to consider the issues posed at a broader social level and the sort of implications they might have in the future. Part of the analysis will consider whether there are any marked differences in the general attitudes of participants on these last ten questions; however, nothing significant is anticipated.
Finally, these general questions were also an attempt to explore whether the neuroethics literature concerning the use of cognitive enhancement captures the issues that are salient to the ethical worries of the public. If they do, then more work ought to be done to explore each of them. If they do not, however, then they ought to be reconsidered or reevaluated. The following tables contain the first five questions as they appeared in the survey, grouped according to the type of value emphasized and thus each of the frames used.

Table A.2 Question 1 framed in each of the four categories.

Standard: Drinking coffee with caffeine is associated with side effects such as increases in several cardiovascular disease risk factors, as well as anxiety upon withdrawal, sleep disturbances, dehydration causing headaches, and toxicity at high doses. Do you think it is acceptable to drink coffee before an exam or interview in order to improve your alertness and to perform at your best?

Aspirational: Performing well on an important exam or interview requires preparation, study, and effort. Even if a person puts in all the required effort and spends days in preparation, they may not be able to do as well as someone who is more alert and focused. There are some stimulants that can improve both alertness and concentration. Do you think it is acceptable to drink coffee before an exam or interview in order to increase your alertness and to perform at your best?

Moral: In a fair competition, it is important that no one has an advantage or disadvantage that others do not. If someone has an unfair advantage we say that they are cheating, or their behaviour is wrong, and we might question their character. For example, taking a substance with stimulant-like effects before an exam or interview may give that person an unfair advantage over others.
Do you think it is acceptable to drink coffee before an exam or interview in order to increase your alertness and to perform at your best?

Benefits: Drinking coffee with caffeine is associated with decreases in diseases like diabetes, Parkinson's and some cancers, and may also reduce headaches and cavities and lower the chance of heart disease. The stimulant effect of caffeine in coffee is also associated with improvements in both athletic ability and in alertness and concentration. Do you think it is acceptable to drink coffee before an exam or interview in order to increase your alertness and to perform at your best?

Table A.3 Question 2 framed in each of the four categories.

Standard: The use of a stimulant drug like Ritalin is associated with side effects such as heart-related problems leading to sudden death, strokes and heart attacks in those with pre-existing conditions, psychiatric problems and anxiety, as well as addiction and withdrawal effects. Do you think it is acceptable to take a drug like Ritalin before an exam or interview in order to increase your alertness and to perform at your best?

Aspirational: Performing well on an important exam or interview requires preparation, study, and effort. Even if a person puts in all the required effort and spends days in preparation, they may not be able to do as well as someone who is more alert and focused. There are some stimulants that can improve both alertness and concentration. Do you think it is acceptable to take a drug like Ritalin before an exam or interview in order to increase your alertness and to perform at your best?

Moral: In competition, it is important that no one has an advantage or disadvantage that others do not. If someone has an unfair advantage we might say that they are cheating, or their behaviour is wrong, and we may question their character.
Taking a substance with stimulant-like effects before an exam or interview may give that person an unfair advantage over those who do not. Do you think it is acceptable to take a drug like Ritalin before an exam or interview in order to increase your alertness and to perform at your best?

Benefits: The use of a drug such as Ritalin is associated with improvements in abnormal behaviours in people with Attention Deficit Hyperactivity Disorder (ADHD), as well as in self-esteem and social function. In people without ADHD the stimulant effect of Ritalin is associated with increased wakefulness, focus and attentiveness. Do you think it is acceptable to take a drug like Ritalin before an exam or interview in order to increase your alertness and to perform at your best?

Table A.4 Question 3 framed in each of the four categories.

Standard: The use of some new classes of drugs to treat some of the effects of Alzheimer's and Parkinson's diseases is very new and still being tested. However, some of the early indications suggest these drugs are associated with side effects such as abnormal sleepiness, nausea, headaches and interference with some types of memory. Other effects are still unknown. Do you think it is acceptable to take these drugs to treat symptoms of a disease or disorder, such as short-term memory loss?

Aspirational: Performing well on an important exam or interview requires preparation, study, and effort. Even if a person puts in all the required effort and spends days in preparation, they may not be able to do as well as someone who is able to remember more information or detail. There are some drugs available that can improve your memory. Do you think it is acceptable to take these drugs to treat symptoms of a disease or disorder such as short-term memory loss?

Moral: In competition, it is important that no one has an advantage or disadvantage that others do not.
If someone has an unfair advantage we might say that they are cheating, their behaviour is wrong, and we may question their character. Taking a substance with stimulant-like effects before an exam or interview will give that person an unfair advantage over those who do not. Do you think it is acceptable to take these drugs to treat symptoms of a disease or disorder, such as short-term memory loss?

Benefits: The use of new classes of drugs to treat some of the diseases/disorders causing memory impairments is still being researched. However, some of the early indications suggest these drugs are associated with improvements in short-term memory, alertness and concentration in everyone, regardless of their health. Do you think it is acceptable to use these drugs to treat symptoms of a disease or disorder, such as short-term memory loss?

Table A.5 Question 4 framed in each of the four categories.

Standard: The use of some new classes of drugs to treat some of the effects of Alzheimer's and Parkinson's diseases is very new and still being tested. However, some of the early indications suggest these drugs are associated with side effects such as abnormal sleepiness, nausea, headaches and interference with some types of memory. Other effects are still unknown. Do you think it is acceptable for people without a disease or disorder to take drugs to improve short-term memory in order to perform better?

Aspirational: Performing well on an important exam or interview requires preparation, study, and effort. Even if a person puts in all the required effort and spends days in preparation, they may not be able to do as well as someone who is able to remember more information or detail. There are some drugs that can improve your memory. Do you think it is acceptable for people without a disease or disorder to take drugs that improve short-term memory in order to perform better?
Moral: In competition, it is important that no one has an advantage or disadvantage that others do not. If someone has an unfair advantage we might say that they are cheating, their behaviour is wrong, and we may question their character. Taking a substance with stimulant-like effects before an exam or interview will give that person an unfair advantage over those who do not. Do you think it is acceptable for people without a disease or disorder to take drugs to improve short-term memory in order to perform better?

Benefits: The use of new classes of drugs to treat some of the diseases/disorders causing memory impairments is still being researched. However, some of the early indications suggest these drugs are associated with improvements in short-term memory, alertness and concentration in everyone, regardless of their health. Do you think it is acceptable for people without a disease or disorder to take drugs to improve their short-term memory in order to perform better?

Table A.6 Question 5 framed in each of the four categories.

Standard: The use of cognitive enhancers has been associated with heart-related problems leading to sudden death, strokes and heart attacks in those with pre-existing conditions, psychiatric problems and anxiety, as well as addiction and withdrawal effects. Other side effects include interference with some types of memory and some other cognitive functions, although more testing is needed. Few long-term side effects have been confirmed since these drugs are so new. Do you think that cognitive enhancing drugs should be widely available to anyone who wants them regardless of income, occupation, or age?

Aspirational: Whatever the nature of an individual's goals or aspirations, success usually requires hard work, dedication, focus and effort. Often achievement depends on outperforming others, especially when they are pursuing similar goals.
In this type of competitive environment, failing to keep up with others may significantly compromise all the hard work that a person might have dedicated many years of their life to. This is especially true if some of the people you must compete against are using cognitive enhancing drugs. Do you think that cognitive enhancing drugs should be widely available to anyone who wants them regardless of income, occupation, or age?

Moral: In competition, it is important that no one has an advantage or disadvantage that others do not. If someone has an unfair advantage we might say that they are cheating, their behaviour is wrong, and we may question their character. Widespread use of cognitive enhancing drugs may create inequality in society and could possibly result in our inability to recognize such an injustice. Do you think that cognitive enhancing drugs should be widely available to anyone who wants them regardless of income, occupation, or age?

Benefits: Using cognitive enhancing drugs such as Ritalin or Modafinil can produce increases in mental alertness, memory, focus and concentration, making everyday and work-related tasks easier and improving stress management. Do you think that cognitive enhancing drugs should be widely available to anyone who wants them regardless of income, occupation, or age?

Survey Responses

Following the well-established NERD survey platform, the neuroethics survey allows participants to choose from a standard Likert response scale ranging from Strong Agreement to Strong Disagreement, with a Don't Know/Can't Answer option. In addition to the radio buttons used to record these responses, an optional comment box appears at the end of each set of responses, which allows individuals to comment on, justify, reason about or further explain their answers or the questions in general. Although this particular investigation will not formally analyze these qualitative responses, they nonetheless provide a very rich source of further research.
Since the survey is designed to elicit responses to various types of risk, it is anticipated that participants will make use of the opportunity to elaborate on their responses and provide a good source of contextual data for analysis. Arguably, some of the more revealing insights into what respondents have to say about the various questions posed will be found in the textual comments. In the future this might result in a more thorough account of how people respond to the different types of risk frameworks, but such analysis is not in the domain of this chapter. My aim at this point is to conduct a preliminary analysis of the data generated in relation to my thesis and to determine future avenues of research.

Recruitment

Email invitations were sent to potential participants, including various interest groups such as the UBC Core for Neuroethics, colleagues, friends, and other professional and personal contacts. Additionally, a press release was issued by the university's public affairs office, which has campus-wide exposure. A banner ad was placed in an online newspaper, The Tyee, which ran for the month of October. The survey was also advertised in a printed press release across the University of British Columbia and at the Inaugural Event for the UBC Core in Neuroethics. For the purposes of this study, a target of one hundred participants was set in order to generate a solid set of data within a reasonable length of time. Given the usability of the survey platform and the limited resources for advertising, this number was quite modest.

Appendix B: Survey Results[258]

Survey Results

The results of the survey will primarily focus on the hypothesis that different risk frameworks will have different effects on participants' responses. Since only the first five of the fifteen questions involve risk, they will be the focus of the rest of this chapter. All of the results reported here are for questions 1 to 5 exclusively.
The survey is administered online, which allows for quick and efficient analysis of the data. The optional text box at the bottom of each question allows participants to provide further comment or explanation and can be easily accessed without the need for transcription. Only data from completed surveys were used in the analysis (i.e., all questions were answered). Since the Likert scale was used in the standard multiple choice questions, and the aim was to test for differences from one group to another, the non-parametric Kruskal-Wallis test was used to test for significance, using SPSS (Version 15.0).[259] When determining average time spent on each survey, extreme outliers taking more than 60 minutes or less than one minute to complete the survey were excluded; however, this did not have a considerable effect on the average time. Additionally, the analysis was conducted with the data collected from the first 200 participants.[260]

258 The Neuroethics survey can be found online at: www.yourviews.ubc.ca
259 The Kruskal-Wallis test is an extension of the Mann-Whitney test for two or more independent samples and does not assume a normal distribution (Conover, Practical Nonparametric Statistics, 229-237). When the results are significant (p < 0.05), this indicates a significant difference between at least two of the sample medians. The test does not indicate whether just two or more than two groups differ from each other. However, in this analysis I will include both significant results and results that are close to significance (p < 0.06) when they obviously differ from other results.
260 Although the survey continues to generate interest and now has close to 300 participants.

Demographics

Like all the surveys designed for the NERD platform, the Neuroethics Survey collected demographic information when participants registered on the site.
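For readers who want to see the mechanics of the rank-based test used above, a minimal sketch is given below in plain Python. The Likert codes (1 = Strongly Disagree through 5 = Strongly Agree) and the two response groups are invented for illustration; this is not the SPSS workflow or the data used in the study.

```python
from collections import Counter

def midranks(values):
    """Assign 1-based average (mid) ranks to a pooled sample, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j + 2) / 2.0  # average of the 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def kruskal_h(*groups):
    """Kruskal-Wallis H statistic with the usual correction for ties."""
    pooled = [v for g in groups for v in g]
    n = len(pooled)
    ranks = midranks(pooled)
    h, start = 0.0, 0
    for g in groups:
        r = ranks[start:start + len(g)]
        h += sum(r) ** 2 / len(g)
        start += len(g)
    h = 12.0 / (n * (n + 1)) * h - 3 * (n + 1)
    ties = sum(c ** 3 - c for c in Counter(pooled).values())
    correction = 1.0 - ties / float(n ** 3 - n)
    return h / correction if correction > 0 else h

# Hypothetical Likert codes for two framing groups (not the study's data).
standard = [2, 2, 3, 1, 2, 4, 2, 3]      # leans toward disagreement
aspirational = [4, 5, 4, 3, 5, 4, 2, 5]  # leans toward agreement

h = kruskal_h(standard, aspirational)
# With two groups (one degree of freedom), H > 3.84 corresponds to p < 0.05.
print(f"H = {h:.2f}")
```

In practice one would use a statistics package (SPSS here, or scipy.stats.kruskal in Python) rather than hand-rolled code; the sketch only makes the rank-and-compare logic explicit.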
Table B.1 provides a summary of this information from those who responded. It shows that participants in this study were fairly young (63% were between 19 and 40), well educated (97% had some university education) and were relatively evenly split between male and female participants, although there were slightly more males than females. Additionally, approximately 50% of participants were from Canada. The others were from various countries, with the greatest proportion from the US but also including the UK and Europe.

Table B.1 Summary of voluntary demographic information

Number of participants: 200
Participants contributing demographic information (%): 67
Gender (%): Female 45; Male 55
Age (%): 19-29: 40; 30-39: 28; 40-49: 14; 50-59: 10; Above 60: 8
Education (%): Secondary School 2; College/University 97; Other ~1
Country of Residence (%): Canada 50; USA 17; UK 8; Other[261] 25
Means of learning about the survey (%): Email invitation from researcher ~38; Friend/colleague 34; Searching on the internet ~4; Online Newspaper 2; Other 20

261 Other countries included: Australia, Bulgaria, Chile, Egypt, Germany, Hong Kong, India, Ireland, Italy, Mexico, Netherlands, New Zealand, Nigeria, Norway, Poland, Portugal, Romania, South Africa, Spain, Sweden, Switzerland, Turkey, and Yugoslavia.

Responses by Risk Frame

Although the survey was designed to account for order effects, that element of the design goes beyond the scope of this project, which is not meant to provide a purely empirical argument. The results reported are therefore based on the premise that there is reason to think of risk as defined in terms of what a person values. The following is a summary of the way respondents answered the questions based on the type of risk framework they were expressed in. Overall, analyzing and comparing the results shows that respondents were affected by the risk framing in some instances. The following figures (Fig.
B.1 to Fig. B.4) represent the responses given to Questions 1 through 5 based on the type of risk they were framed in.

Figure B.1 Responses to Questions 1 to 5 in the Standard Risk Value frame

Figure B.2 Responses to Questions 1 to 5 in the Aspirational Risk Value frame

Figure B.3 Responses to Questions 1 to 5 in the Moral Risk Value frame

Figure B.4 Responses to Questions 1 to 5 in the Benefit frame

Significant Differences in Responses

Using the Kruskal-Wallis test, six instances of significant or near-significant differences were found between the responses given by participants based on the framing of the question they received. Additionally, an identifiable trend is discernible, which suggests there is further opportunity to refine this experiment. Three questions in particular produced the most difference in responses: Questions 1, 3 and 5.

Question 1: The Acceptability of Coffee

The first two questions in the survey were contextualized within the issue of natural vs. artificial 'enhancers'. Question 1 asked participants whether they thought drinking coffee in order to perform well on a test or interview was acceptable.[262] The question was presented in four different ways, each highlighting either one of the risk types or simply the benefits of drinking coffee. The results of the survey showed that there were significant differences between standard risks and aspirational risks (p = 0.045), standard risks and moral risks (p = 0.03), and standard risks and benefits (p = 0.0007). In each case, those participants who received the question in the standard risk frame were less likely to Strongly Agree/Agree than those in each of the other cases. There was no difference, however, when the same question was asked about Ritalin. First, those respondents who read the question framed in terms of aspirational risks found it significantly more acceptable than those who received the standard risk frame.

262 Coffee is a widely available and affordable substance, and its use is pervasive in our society. The fact that caffeine has stimulant-like effects is also widely known and in fact often motivates the consumption of coffee.
Additionally, when the standard risk frame was compared to the moral risk frame for the same question, respondents faced with the moral risks associated with drinking coffee were more likely to Agree rather than to Strongly Agree. Not surprisingly, there was also a significant difference between respondents who received Question 1 in terms of standard risks and in terms of benefits. However, the respondents were more likely to Strongly Agree that coffee was acceptable when it was framed in terms of standard risks, while those receiving the benefits frame were more likely to Agree and less likely to Strongly Agree.

Question 3: The Acceptability of Therapeutic Uses of Cognitive Enhancers

The next two questions (3 and 4) were contextualized within the issue of using enhancements for therapeutic or non-therapeutic purposes. For Question 3, which asked about the acceptability of taking drugs to treat some of the symptoms associated with a disease or disorder causing some type of cognitive deficit (i.e. memory loss), there were two instances of significant difference. When the moral risk frame was compared with the benefits frame, more people Strongly Agreed within the context of moral risks (p = 0.06); in the benefits frame there was in general a slightly more negative pattern of response, although overall more people chose Agree than in the moral risk frame. When the same question was framed in terms of moral or aspirational risks (p = 0.008), people were more likely to Agree rather than Strongly Agree for aspirational risks, and more people disagreed or chose the Don't Know/Can't Answer option. In the case of non-therapeutic use of cognitive enhancers, there were no significant differences.
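Because the Kruskal-Wallis test only signals that some groups differ, pairwise follow-up comparisons like the ones reported above are what localize the effect. A simple distribution-free way to sketch such a pairwise check is a permutation test on the difference of group means; the fragment below uses hypothetical Likert codes, not the survey's data, and a permutation test rather than the rank-based comparisons reported in this study.

```python
import random

# Hypothetical Likert codes (1 = Strongly Disagree ... 5 = Strongly Agree)
# for two framing groups; invented for illustration.
standard_frame = [2, 2, 3, 1, 2, 4, 2, 3]
moral_frame = [4, 4, 3, 5, 4, 2, 4, 3]

def perm_pvalue(a, b, trials=10_000, seed=0):
    """Two-sided permutation test on the difference of group means."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)  # random relabelling of the pooled responses
        left, right = pooled[:len(a)], pooled[len(a):]
        if abs(sum(left) / len(left) - sum(right) / len(right)) >= observed:
            hits += 1
    return hits / trials

p = perm_pvalue(standard_frame, moral_frame)
print(f"permutation p-value: {p:.3f}")
```

A rank-based Mann-Whitney comparison would be the closer analogue of the pairwise tests reported here; the permutation version is shown only because it is simple to state and to keep self-contained.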
Question 5: The Availability of Cognitive Enhancers

This question asked respondents about their views concerning the availability of cognitive enhancers. For this question, there was a significant difference between standard risks and moral risks (p = 0.02). When the question emphasized standard risks, responses were more negative than when the moral risks were emphasized. Moral risks produced a near-even split between positive and negative answer choices, which turned out to be a unique response pattern in the first five questions.

Results Summary

As the above figures demonstrate, there does appear to be a difference in how participants responded to some of the questions based on the type of risk they were framed in. These differences were found to be statistically significant in five cases.[263] Table B.2 is a summary of the significant results. The most pronounced results were found in Question 1, which asked participants whether they thought it was acceptable to drink coffee before an exam or interview, framed in terms of the different risk types. When the question was framed in terms of standard risks (i.e. risks to health), responses were less positive than when it was asked in terms of aspirational risks (i.e. risks to one's ability to achieve one's goals). The aspirational frame also elicited more strongly negative answers, something not seen in the standard risk frame. Another difference occurs in Question 1 between standard risks and benefits, although this was unsurprising: responses in the benefits frame were far more positive than those in the standard frame. However, there was also a difference between the moral risk frame and benefits. Again, responses to the benefits were more positive than those to the moral risks.
The other significant difference occurs in Question 3, which asked people whether they thought it was acceptable to take drugs such as cognitive enhancers to treat the symptoms of a disease or illness. When the question was framed in terms of moral risks (i.e. risks to one's sense of fairness), responses were more positive than those from the aspirational risk frame.

263 Statistical significance was determined using the Kruskal-Wallis test, which is standard for Likert scales, and a p-value of < 0.05. However, I included the one result with a p-value of 0.06.

Table B.2 Summary of Significant Results

Question  Topic                                   Type of risk frame compared  P-value  Significance
1         Acceptability of coffee                 Standard vs. Aspirational    0.045    Yes
1         Acceptability of coffee                 Standard vs. Moral           0.03     Yes
1         Acceptability of coffee                 Standard vs. Benefits        0.0007   Yes
3         Therapeutic use of cognitive enhancers  Moral vs. Benefits           0.06     Possibly
3         Therapeutic use of cognitive enhancers  Aspirational vs. Moral       0.008    Yes
5         Accessibility of cognitive enhancers    Standard vs. Moral           0.02     Yes

Discussion

The results from this preliminary empirical study suggest that there is a difference between responses when they are framed in the three types of risk I have described. This provides some support to the claims I have made in previous chapters, and it suggests that there is a promising avenue for further investigation. In general there were a few identifiable trends which challenge the standard view of risk.[264]

Standard Risks

•  As expected, standard risks had the overall effect of producing more negative than positive responses, particularly for Question 1, while benefits produced more positive than negative responses.[265] However, this was not true for all of the questions, and the effect diminished after the first question.
Aspirational Risks

•  There was a difference in the way people responded to the acceptability of coffee when it was framed as a threat to one's aspirations. Compared to the standard risks, even though more people were likely to agree more strongly that coffee was acceptable, there was also more disagreement in the aspirational risk framing.

•  In the case of using cognitive enhancers for therapeutic purposes (i.e. to treat a disease or disorder leading to memory loss), those faced with aspirational risks were less likely to Strongly Agree that this was acceptable. This suggests that respondents did feel that their use, even therapeutically, was a risk to their own aspirations by creating an unfair advantage. Another effect was that these risks produced a somewhat neutral overall response when respondents were asked about the accessibility of cognitive enhancers. There were roughly equal numbers of both agreement and disagreement, suggesting again that there was a risk to some people's aspirations.

264 Although these are perhaps not statistically significant.
265 Here I take a positive response to be more Strongly Agrees compared to Agrees, or more agreement in general compared to disagreement. A negative response is thus fewer Strongly Agrees compared to Agrees, or more Disagrees in one risk value frame as compared with the others.

Moral Risks

•  Moral risks also produced some interesting effects. Those respondents who received this framing were more likely to Strongly Agree to the widespread availability of cognitive enhancers than those who read about standard risks. This is not too surprising, but it suggests that coffee, a very familiar stimulant, is resistant to the kind of moral framing given here. It might be necessary to use a much stronger method of framing in this case, although the moral risks are probably quite minimal.
•  Also, in the therapeutic use of enhancers, more people were likely to Agree or Strongly Agree to the acceptability of such use when it was framed as a moral risk, whereas fewer people Strongly Agreed when it was framed as an aspirational risk. This is a promising result since it suggests that, even though the use of an enhancing drug might give a person with a disease or disorder an advantage, it would be equally unfair to expect such a person to compete without it. This question was designed specifically to challenge a person's sense of fairness and justice.

Appendix C: UBC Research Ethics Board Certificate of Approval
